Deiseach:

Dear God Almighty, I don't know whether to be infuriated on behalf of my nation (due to the reactions by some online), apologise on behalf of my nation (because apparently we've rejuvenated Joe), or what. I don't much like Biden's politics or his administration, but I have to admit, the man's visit here has provided me with some of the best laughs I've had all year.

Oh, and he's definitely running for a second term, he told us that before he left. You're welcome, no need to thank us:

https://www.independent.ie/irish-news/politics/joe-biden-says-ireland-trip-reinforced-optimism-for-a-second-term-as-us-president-42434115.html

"US President Joe Biden has credited his trip to Ireland with reinforcing his rationale for an expected run for a second term in office.

Mr Biden has already indicated his plan to run for the presidency again in 2024 but has not yet officially announced it.

At the close of a historic four-day trip to the island of Ireland, Mr Biden said the announcement would be made "relatively soon".

"I told you my plan is to run again," he told White House reporters before flying home to the United States.

He said the Irish trip had not affected the timing of an announcement. But he added: "We'll announce it relatively soon. But the trip here just reinforced my sense of optimism about what can be done."

That's not the reason for my hilarity, though (but can you just picture it? Biden versus Trump II: Electric Boogaloo? The invention of AI has come just in time to handle the amount of meme generation needed to cover this one).

No, it has to do with the cunningly coded slogan "Mayo for Sam" and what this allegedly reveals (what it reveals is that there are either some massive trolls or what I can only describe with le mot juste as 'batshit insane' people online; of course, there is always ¿Por qué no los dos?).

The message:

https://www.youtube.com/watch?v=sHbatf9Fx_U

The bewilderment:

https://www.balls.ie/the-rewind/joe-biden-mayo-for-sam-550311

The decoding:

https://old.reddit.com/r/ireland/comments/12ncom5/the_truth_about_mayo_for_sam/

What with pissing off the Brits and now this, it has been a week of glorious achievement, and here was me thinking it was just going to be a mix of cringey American politicking and Irish boot-licking 😁

Well, and some genuinely moving personal connections:

https://www.rte.ie/news/2023/0414/1377055-biden-knock-priest-last-rites/

chaph:

Reposting a question I posted as u/Sea-Sun504 on r/slatestarcodex's March "Monthly Discussion Thread", which was the post that made me realize I was shadow-banned from there. I hope the ban was made in error, although I am most disappointed by the decision.

"Has China succeeded in fulfilling their ballpoint pen tip needs domestically? This 2017 [WaPo article](https://www.washingtonpost.com/news/worldviews/wp/2017/01/18/finally-china-manufactures-a-ballpoint-pen-all-by-itself/) says:

>Taiyuan Iron and Steel Group (TISCO) announced that it would begin mass-producing ballpoint pen tips and replace imports within two years

It's now been 6 years; how did they fare? I couldn't find anything on English Google. I'm curious whether it just wasn't economically viable, and whether the success announcement was made solely for the patriotic reasons that created the goal in the first place."

Benjamin Ikuta:

What was that one Slate Star Codex post where Scott talked about various patients who had trouble getting treatment, even a rich one?

Andrew Van Wagner:

I have so many questions about psychiatry. I worry that I won't ever get answers to them because: (1) the relevant experts are ultra-busy; (2) the relevant experts are worried (I think?) that talking about psychiatry might get them in legal trouble if someone acts on their comments; and (3) the media has a noticeable tendency toward harsh anti-psychiatry criticism, so experts tend not to talk to anyone lest their comments get twisted or somehow used against them.

I wonder if one day I could even interview Scott Alexander about some topics in psychiatry. I'm interested in many things in the field. Here's a paper that I found incredibly interesting: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3488343/.

Gwendolyn:

Missouri just effectively banned gender affirming care for *adults*

Bi_Gates:

Googling the exact text of your comment turns up an AP article about how the state just wants to require 18 months of counselling before the """gender care""", which strikes me as a far cry from "banning" it entirely. It also strikes me as entirely reasonable, insofar as any medical regulation is reasonable at all, and far more tame than a whole lot of other medical regulations and requirements.

From https://apnews.com/article/transgender-gender-affirming-care-restrictions-missouri-4def2189dac9979a00d298efb3baf12a :

>The rule, which includes a required 18 months of therapy before receiving gender-affirming health care, is set to take effect April 27 and expire next February.

It's not even a permanent law, it's a temporary timeout.

When you keep crying wolf like that, you effectively train people to stop paying attention to anything you say.

Martin Blank:

It seems odd that "gender affirming care" is the name for care which involves hormone replacement and surgery in antagonism to someone's actual gender.

Seems more like gender queering or destroying care is the apt name, but people love Orwellian turns of phrase these days.

People don't have "gender souls". Most of them are just sick and/or trying to be transgressive/trendy. It is much healthier for society if Tim who likes ponies and wants to wear dresses can just be Tim who likes ponies and wants to wear dresses and we don't all pretend that deep down in his gender soul he is secretly a girl.

Given all that, 18 months seems reasonable. We don't allow people to kill themselves, or allow medical care that amputates arms or legs (in most cases), no matter how much therapy they've had.

Trans activists talked a big talk about how X and Y only happen after the most careful deliberation and weighing of options, but it has become clear that, as with, say, ADHD medication, the therapists mostly just hand out puberty blockers like candy if people say they need them and are in distress.

Even the success stories reveal this fact if you actually pay attention to what they say. "I changed my gender 4 times in 6 months at age 13, and then went on blockers a couple weeks into my therapy at age 14, and I have since changed genders two more times, but I am happy with my treatment and this is all great!"

...wait I thought blockers were only for people with stable identities and long term therapeutic exhaustion? Not literally this is the first thing we tried...

Pangolin Chow Mein:

Some plastic surgeons advertise breast reduction surgery to young women involved in sports because reducing breast size will help them perform at a higher level.

[Comment deleted, Apr 15, 2023]

Martin Blank:

In 99%+ of cases the one that they were born with? It is not that complicated.

[Comment deleted, Apr 15, 2023]

Bi_Gates:

Biology doesn't care. Whether you're eaten by lions, crushed in an earthquake, or enslaved by captors, Biology couldn't care less. But it's not Biology passing the laws.

People care about body mutilation being a new norm and a luxury belief, and they want to pass laws to tighten a belt around it. People also mostly don't care if the limits of your delusions go only as far as names or pronouns, they start being alarmed when the delusions start venturing into surgeries and hormones and cutting body parts, which are the things being limited here, not names and pronouns.

[Comment deleted, Apr 15, 2023]

Mallard:

They still allow it - after 18 months of therapy. Other forms of self-mutilation are illegal even after therapy. E.g. if someone identifies as a furry and wants their hands turned into paws (yes, really a thing), that is still illegal. Of course, the range of "gender-affirming care" includes more violent and less violent options.

It's obviously a delicate issue from a libertarian perspective. On the one hand, adults should be free to do what they want to their own bodies, including, mutilating themselves. On the other hand, mutilating mentally ill people seems questionable.

From a statist perspective, however, it does not seem particularly remarkable. People are already banned from taking all sorts of drugs, and doing all sorts of things to their own bodies - let alone to others' bodies, so this would not be a huge departure.

Jacob Steel:

I think a really basic principle of regulating medical treatment is that it should only ever be done by people who are well-disposed towards the person whose treatment is being regulated.

If you look at Republican discourse, it's obvious that most of them (by which I mean "more than 50%", not "all") /hate/ trans people, and that this legislation is, in the main, motivated not by a sincere desire to ensure that people considering medical transition don't make decisions they'll regret, but by a desire to lash out and to hurt their outgroup - to "own the libs" in current parlance.

That doesn't /prove/ that there couldn't also be good reasons for supporting this legislation, but it should make you suspicious, and in particular I think it's important to put that acknowledgement front and centre in any discussion of it, and to subject it to much stricter scrutiny than you would legislation passed in good faith.

Benjamin Ikuta:

From a libertarian perspective, mental health treatment should be consensual, and the state should not mandate or ban it.

https://www.youtube.com/watch?v=EZ0FT2pDIGQ

Mallard:

Most people would prefer text to a YouTube video. Nothing in your comment references the inherent tension between individual rights and the diminished mental capacity of the mentally ill, so my hopes aren't high that the video even would either.

While I tend to agree with the general gist (I think that if, e.g., a group of doctors tell their patients that they can defeat cancer by cutting off their genitals, eating horse dewormer, or fervently praying to Jesus, and the adult patients believe them and do so, the State should not get involved, even if the results are horrific), it's not hard to imagine how someone might disagree in that case, or think that doctors should be required to make some sort of disclaimer, or disagree about where the line should be drawn, including whether it should make a difference if the patients are mentally ill and may lack the ability to meaningfully consent to a given treatment.

Benjamin Ikuta:

And even if some are incapable of deciding for themselves, how can you know the government would make a better decision?

Benjamin Ikuta:

Well, that could be fraud.

But anyway, even assuming for the sake of argument that some trans people are incapable of making decisions for themselves, presumably at least some of them are capable, right?

Mallard:

That gets into some of the actual issue at hand. First, you mention fraud. Typically fraud involves *deliberate* misrepresentation, but even a relatively libertarian inclination would not necessarily hold that deliberate misrepresentation is the only sort of communication about a product or service that may be forbidden.

E.g. if I genuinely believe in homeopathy, it could still be argued that it should be illegal for me to sell homeopathic treatments with the claim that they work, since that is false advertising - they don't actually work.

If one adopts the latter approach, the State would have to get in the business of making factual determinations about the efficacy of treatments.

They might determine that eating horse dewormer does not effectively cure COVID, that praying to Jesus does not effectively cure cancer, or that cutting off genitals does not effectively relieve the symptoms of gender dysphoria.

If they would, then from the perspective of fraud, the practitioner should arguably at least be required to disclose that *the State* believes that the treatment is ineffective.

Obviously you could disagree with the state about a given treatment, e.g. whether "gender affirming care" is an effective treatment, but that doesn't fundamentally preclude the state from declaring certain statements about treatments to be fraudulent.

As far as whether any "trans people" are capable of consenting to treatments, that was probably in part what the requirement for therapy before treatment was for - to help them work through their issues and best understand their bodies, the treatment options, and their ramifications.

And if the State thinks that by definition "trans people" are mentally ill, then their position would seem to make a lot of sense - perhaps it would even be too liberal (in allowing them to consent to treatment post therapy).

Again, you can agree or disagree with the particulars, but it's not obvious a priori that a rights respecting government has no place regulating statements about treatments, or treatments of the mentally ill.

Of course, there is an inherent tension that I already referenced. If mentally ill people are given less freedom than regular adults, in order to protect their welfare, like children are, the state needs to determine who is mentally ill, which obviously threatens people's rights.

Ultimately, the same issue pertains to children. Some young people are mature, so paternalistic laws meant to protect them ultimately just rein in their freedom. However, while maturity is hard to measure, at least age, which is used as its proxy, is easy to measure, which allows for admittedly somewhat arbitrary lines to be drawn.

When it comes to protecting the mentally ill, however, the issue is inherently messy, as not only is "mental wellness" hard to define or measure, there is not even any obvious objective proxy like age.

So you can disagree with a given application, but the issue of the state's role in the treatment of the mentally ill - including the determination of who belongs in that category - is an inherent problem that does not have an obvious solution, even from a rights respecting perspective.

Benjamin Ikuta:

I think it makes very straightforward sense from a libertarian perspective that even the mentally ill deserve freedom too.

Deiseach:

So Joe Biden is visiting Ireland right now, and while I'm generally jaundiced about the kind of gushing welcomes we give to dignitaries on these occasions, I do understand why it's important that we suck up to the Yanks - without you guys, our economy would be in shambles:

https://www.irishtimes.com/business/economy/2023/03/28/ireland-set-to-avoid-implementing-15-headline-corporate-tax-rate/

And of course pretty much every American president, including Obama, claims Irish ancestry. I don't know how important the Irish proportion of the Democratic Party vote remains, but there still seem to be politicians in office in certain areas with good Irish names.

Plus we rely on America to exert influence on the Brits when it comes to things like the Good Friday Agreement and Brexit and so on.

But President Biden has been good for a laugh, at least, be it 'insulting' Unionists or mixing up just who it was his cousin beat the hell out of (*almost* the best part is watching Micheál Martin standing there with a rictus grin on his face):

https://www.rte.ie/news/biden-in-ireland/2023/0413/1376706-joe-biden-gaffe/

https://www.youtube.com/watch?v=ZQl6clq4FPw

The Black and Tans were these guys:

https://www.youtube.com/watch?v=RPTjmFbp3rk

The All Blacks are these guys:

https://www.youtube.com/watch?v=t4eHd5ym9uo

Martin Blank:

Come out you Black and Tans, come out and fight me like a man! Show your wife how you won medals down in Flanders!

Purpleopolis:

I saw some of those pics and was amazed that you have a hobbit for a president. Doesn't that anger the leprechauns?

Also, once again, the dogs have better judgement than the humans.

Deiseach:

A common mistake, but Michael D. *is* our representative of the leprechaun community.

This dog is a replacement for one of the President's dogs which died a year or two ago; he has always had Bernese Mountain Dogs while in office:

https://extra.ie/2021/03/07/news/all-of-michael-d-higgins-dogs

It is strange that this dog ran off, but then again it is the youngest dog and is probably fed up with strangers wandering around, taking his master's attention away.

Purpleopolis:

Berners are extremely popular in my town (the local dog park has nine that I'm on a petting basis with). I could never have a dog with that short of a lifespan.

Urstoff:

I have nothing to add, I just want to post the best Irish-related gaffe ever:

https://pbs.twimg.com/media/FrbaiK3WwAAJqmI?format=jpg&name=medium

Deiseach:

😂🤦‍♀️

That it's Chicago just makes it *perfect*.

Martin Blank:

That is hilarious.

Arrk Mindmaster:

Not 33%?

Viliam:

A short summary, please? I see it is about COVID. What is the main point?

Wanda Tinasky:

I found this interesting and want to signal-boost it. It's an editorial that explodes the white-guilt narrative that surrounds the Atlantic slave trade:

"Capitalism not slavery made Britain rich. It’s time we stopped apologising for our past"

https://archive.ph/MAb3U#selection-2833.0-2841.212

One nice excerpt:

"In the 1840s, King Ghezo of Dahomey, played by John Boyega in the 2022 film, The Woman King, fiercely resisted such pressure.

'The slave trade is the ruling principle of my people. It is the source and the glory of their wealth,' he complained. 'The mother lulls the child to sleep with notes of triumph over an enemy reduced to slavery.' "

I'm sick of the narrative that slavery was unilaterally imposed on the world by evil white men. It was a ubiquitous global practice that no single ethnic, religious, or national group has particular responsibility for.

beleester:

Okay, but buying slaves was still bad, right?

Like, I don't know what horrible corner of Twitter you found this "narrative" you're attacking, but I've encountered approximately zero people arguing that slavery was solely practiced by white Europeans, or that this in particular is the cause of white guilt today.

The reason people in the US feel sensitive and guilty about race is because half the country bought slaves, built their economy on slavery, went to war to defend the right to own people, and after slavery was outlawed continued to expend considerable effort repressing black people. Feeling bad about that era of our nation's history seems like a reasonable reaction even if we didn't literally invent the concept of owning people.

Pangolin Chow Mein:

What America practiced wasn't "slavery" but white supremacy, in which a group of people with an obvious physical feature different from the ruling class were imported to be a permanent lower class. I think characterizing it as slavery doesn't do the system justice, because the key is that people with Black skin could never run away and blend in with the rest of society. Qatar does something similar today, but people are free to leave the country and move back home, which wasn't an option for what we called "slaves". We even fought a war to get more desert so that slaves would die from exposure and lack of water before they got to Mexico!?! Pretty sick shit. Oh, and one of the funniest things is people that make a big deal about Dred Scott—so one additional guy got screwed while 4 million people were being held in slavery.

[Comment deleted, Apr 21, 2023]

Pangolin Chow Mein:

There were quite a few ways to enslave whites…none of them resulted in their offspring being slaves.

Martin Blank:

>I've encountered approximately zero people arguing that slavery was solely practiced by white Europeans, or that this in particular is the cause of white guilt today.

It seems like you don't get around the internet or Ivy League campuses much, then, because for sure there is a lot of this.

Purpleopolis:

Nah, one side seceded from the union to defend the right to own people. The other side went to war for the right to rule everyone.

Wanda Tinasky:

"Okay, but buying slaves was still bad, right?"

Yes, slavery was 'bad'. So were wars of conquest, child brides, medicinal leeches, bloodletting, and feudalism. The past was a fucked-up place. So what?

The point of the perspective in that article is that, while slavery was bad, it wasn't directionally bad. It was simply a global custom that everyone willingly participated in, and if inherited guilt attaches to the cultural descendants of the buyers then it should equally attach to the cultural descendants of the sellers. If you and I bet our homes over a game of poker, I don't get to later call you an evil thief just because I'm now homeless. I believe it's intellectually dishonest and destructively divisive to preferentially blame one group for the evils of history. There's one world and, past or present, we are all partners in its development.

"went to war to defend the right to own people"

An equally valid way to frame that would be to say that half of the country went to war to free people. And given that our present institutions are the descendants of the half that won, I would say that's the more appropriate framing. This country paid a terrible price in both life and property in order to free a people that they had neither ethnic nor cultural ties to. I think that's pretty remarkable. I wonder if you could point to another historical example of anything remotely like that.

The reason people feel guilty about slavery is that there is a strong political incentive to make them feel that way. It synergizes very well with redistributionist political agendas, and the diversity-academic complex has been prospering from it for many decades now. Sadly this is a destructive policy, particularly for black Americans, as it fosters an attitude of victimhood and learned helplessness. Neither historic nor current discrimination are responsible for the various racial achievement gaps, so obsessing about those causes merely serves to distract the culture from making any actual progress on the real issues.

Kenneth Almquist:

“An equally valid way to frame that would be to say that half of the country went to war to free people.”

The Confederacy fought to preserve slavery while the Union fought to preserve the Union. That changed a bit over the course of the war. For Jefferson Davis, the difference between victory and defeat was the difference between being remembered as a founding father of a new country versus going down in history as a traitor. So independence gradually became a goal itself, rather than just a means of preserving slavery. On the Union side, destroying slavery became a way to preserve the Union. After Lincoln signed the Emancipation Proclamation, every advance of the Union army meant more slaves were freed. The 13th Amendment meant that a Union victory in the Civil War settled the issue for all time. There could never be another rebellion motivated by a desire to preserve slavery because there was no longer any slavery to preserve.

So the best that can be said for the claim that, "half of the country went to war to free people," is that the claim is mostly false rather than utterly false. To quote from Lincoln's Gettysburg Address, Union soldiers "gave their lives that that nation might live."

(I should add that, like beleester, I'm not familiar with the narrative Wanda Tinasky refers to, so I'm not attempting to address that narrative.)

Pangolin Chow Mein:

Ask yourself—what happens if the CSA won??? Slaves only have to make it to PA to be free. Prior to secession a slave had to make it to Canada, because the Constitution contains the Fugitive Slave Clause. Jefferson Davis never wanted to actually form a new country; he just wanted to inflict so much pain on the country that the North would bend to his will. So he lost at the ballot box, and war is just politics by other means.

Wanda Tinasky:

If your standard for evaluating the motivations of the respective sides is their stated political goals, then the South went to war not for slavery but to protect states' rights. However everyone knows that to be nothing but political sophistry.

If the North's only goal was to protect the Union then they could have done that much more simply by acceding to the South's demands to protect and expand the institution of slavery. The reason the Union was in danger in the first place was because of unresolvable differences between the South and abolitionist fervor in the North. The Civil War was about what everyone knows it was about: slavery.

While I find it highly unlikely that an American citizen possessed of basic sensory function could be ignorant of it, the 'narrative' I'm referencing can be found in many well-known movements such as Critical Race Theory, the 1619 Project, and the vast race-based moral panic in the summer of 2020 involving the public condemnation and removal ('cancelation') of various public and historic figures who failed to adhere to a particular narrow conception of historical US race relations. In the unlikely event that you're truly ignorant of these ideological trends, feel free to plug those terms into the search engine of your choice. Enjoy.

Gunflint:

> Neither historic nor current discrimination are responsible for the various racial achievement gaps, so obsessing about those causes merely serves to distract the culture from making any actual progress on the real issues.

I don’t think this is correct. Let me tell you a bit about my hometown.

1. Minneapolis racial covenants:

https://www.minnpost.com/metro/2019/02/with-covenants-racism-was-written-into-minneapolis-housing-the-scars-are-still-visible/

2. Twin Cities redlining:

https://www.mnrealtor.com/blogs/mnr-news1/2020/10/01/mapping-the-legacy-of-racism-in-twin-cities-real-e

3. Why racial segregation is so prominent in the Twin Cities:

https://m.startribune.com/how-twin-cities-housing-rules-keep-the-metro-segregated/600081529/

4. Racism is still very much with us in Minneapolis

Today, April 13, 2023, Mpls settled yet another lawsuit, this one for 8.9 million dollars, over undue force used by former officer Derek Chauvin before George Floyd's death. Newly released body cam video shows him using his signature knee-on-the-neck move on a boy in the boy's own home.

Mpls police chief, Brian O’Hara:

"This is an example of the cancer that has infected this department," O'Hara said, while apologizing to Pope, Code and their families. "Nearly six years after these two incidents occurred we are forced once again to reckon with the deplorable acts of a person (Chauvin) who has proven to be a national embarrassment to the policing profession, and the continued harm he has caused members of our community."

https://www.kare11.com/amp/article/news/local/minneapolis-approves-multi-million-dollar-settlements-for-derek-chauvin-victims/89-3f37e485-d5bd-4fb5-a3b7-78d675390383

5. My personal take on this

A black kid growing up in largely black North Minneapolis is going to face much stiffer headwinds than a similarly gifted white kid. He or she may just make it but it will be a lot tougher than it was for me, a white guy or my white wife.

Martin Blank:

Honestly, I am a tiny bit of an expert on these particular issues, and like articles from almost anyone selling a particular viewpoint, those stories are a bit overblown.

Not that the covenants weren't real, or the redlining or whatever. But it was just never universal enough, or consistently enforced enough, to totally distort the market in the way that the left likes to portray.

You will have evidence that black people were barred from buying, say, 15% of the properties in a given decade, which is terrible. But then the articles (or ones like them) will often lay huge amounts of economically disparate outcomes at the feet of this fact, often ignoring that similar covenants regarding Jews, or Japanese, or Swedes (yes, there was heavy discrimination against Swedes/Finns and pretty much any group seen as "laborers" up through the 1930s) simply didn't seem to have similar impacts. If normally you would be able to buy in 10 neighborhoods, but now you can only buy in 8, you have surely been harmed, but it is not clear fatally so. That isn't even getting into the question of whether some of these methods of discriminating between borrowers/neighbors actually made rational sense in an era before big data and computers. The idea of a racial covenant might seem repugnant, but it is actually crystal clear such covenants were in large part used because they were effective: the presence of Italians/Chinese/Hispanics/whatever would drive down property values.

Once again, that doesn't mean the policies were good, or that they didn't do some damage. I am glad they are gone. But it is such a politically easy and appealing story to discover the answer to why North Minneapolis is the way it is, and to be able to place all the blame on racism.

Being able to see ourselves as the righteous, and those who we blame for our ills as irredeemable monsters is an intoxicating fruit.

As for your comments about headwinds: not sure that is really the case in 2023 anymore; there is a truly insane amount of leveling of the playing field that happens. And sort of regardless of whether it is true or not, very few are facing such headwinds that they cannot make it out of their station with some relatively minor personal sacrifices and decision making (pretty much taking your education seriously and waiting to have kids until you are married leads to near perfect equality and proportionality in life outcomes, even for African-Americans).

Wanda Tinasky:

I appreciate this comment. Could you give some references for good, objective (i.e. non-woke) historical/economic analyses here? I find this a very tiresome argument, and it's hard to search the literature using Google without wading through a leftist tide of misinformation.

Martin Blank:

Well, first off, I would say just read the actual original source documentation. And read the original research, even by woke researchers; often the facts belie the narrative built around them. So you will see an article describing the widespread, pervasive presence of covenants and how they destroyed African Americans' ability to buy homes, and then 8 paragraphs later it will mention how they were present in 11% of mortgages from that era (or whatever). Then if you read some of the actual covenants on the mortgages, you will notice the covenants don't just call out African Americans, but people of many races/backgrounds (Jews/Japanese/Finns).

Honestly, on the old HOLC loan maps everyone is so bent out of shape about, the maps seem most worked up about "laborers" in most of the country, not necessarily anything else. The maps are trying to project future property value growth, and their projections for the neighborhoods involve notes about the type of people/industries/land use there.

So, just for example, read the research and materials of the "mapping inequality" project. But instead of going into it with an attitude of "where is the racism? I want to find the maximum amount of racism possible!",

go into it with the attitude of "what is this and what are these people trying to accomplish?" (News flash: it is what it says on the tin - advising lenders about which neighborhoods will have loans that get paid off.) You will see some patterns that certainly look like racism. But the maps are also pretty hard on the future value of disused farmland, for example (remember this is during the Great Depression). Can maps be racist against disused farmland?

Here is an example of text taken from a random "redlined" area:

The major portion of this area is undeveloped to the south and west. It is one of the oldest parts of the city, ranging in age from 10 to 50 years, the newer development being in the extreme south and west. Considerable reconditioning is necessary. The laboring class of German and Bohemian nationalities live here. Fifty percent of the homes are owner occupied. Rentals run from $10 to $30. The type of construction is some frame and some old brick and stone houses of 1 and one and a half stories. It is very unfavorable due to the sub-strata of stone and little or no sub-soil. Depreciation here is from 60% to 70% which took place largely before 1929 with very little recovery. The Joe* Schmidt Brewery is located at West 7th between Oneida and Webster streets.

Here is another random example:

This was formerly an old residential part of Minneapolis. It is about 40 years old; there were many fine homes in this area occupied by Germans, Irish, French and Scandinavian families. Today the buildings are badly in need of rehabilitation. On the west between Lyndale and Humboldt Avenues from Plymouth to the southern boundary of this area, most of the population today is of the poorer class of Jew and colored people. Various industries have sprung up in this neighborhood and particularly on the easterly half of this area adjacent to the business district there are some duplexes and apartment houses of the older class in poor physical condition. Most of the buildings range from 15 to 50 years; it is very difficult to place the valuation; and it is a most undesirable location for residential purposes.

Do those scream "tool of racist oppression" to you?

Wanda Tinasky:

You misunderstand my view. I'm not arguing that discrimination didn't happen. I'm arguing that it has zero explanatory power for current racial disparities. For example, if discrimination was the dominant factor, then why do income disparities disappear when you control for IQ:

https://www.researchgate.net/publication/229357056_Occupation_and_income_related_to_psychometric_g

And how do you explain the relative success that recent African immigrants have today? By some measures Nigerian immigrants are one of the most highly achieving ethnic groups in the country today. How is that possible in the face of anti-black discrimination? The average income for black households in the US is almost $50k. I challenge you to find a population of blacks anywhere in the world who earn more. If US culture is so terrible for black people, why do they achieve their best success here?

You would also expect to see a clear dose-response relationship in the historical record, either by era or by region. Whatever your views on racism or systematic discrimination, I think it's impossible to say that the culture isn't more accepting of blacks today than it was in 1960, yet none of the racial achievement gaps have closed over the past 60 years. In fact, the era that saw the largest gains in terms of escaping poverty and closing the income gap was the period between 1940 and 1960.

Other historical counterexamples for the power of discrimination are the Japanese:

https://en.wikipedia.org/wiki/Internment_of_Japanese_Americans

Wherein a significant proportion of Japanese Americans were illegally detained and stripped of property. Yet Japanese Americans today are one of the highest-achieving groups as measured by both educational attainment and average income.

Ashkenazi Jews also offer clear evidence for the economic ineffectiveness of discrimination:

https://en.wikipedia.org/wiki/Antisemitism

Despite being an almost universally persecuted group throughout the entirety of European history, Jews are persistently high achievers in terms of educational attainment and income.

Almost every ethnic group in the US has experienced discrimination, yet every ethnic group prospers today except for blacks. I think it is special pleading to argue that discrimination had a particular effect on them.

The George Floyd incident is a sad example of moral panic run amok. His death sparked an intellectually incoherent political movement that has zero factual basis. Black deaths at the hands of police are vastly lower than what their overrepresentation in violent crime would predict. HALF of all murders in this country are committed by blacks, so if anything the 27% figure (so indignantly touted by outraged BLMers) is a badge of anti-racist honor and restraint on the part of the police.

My personal take on this is that black Americans are the victims of an unfortunate genetic hand and simply have low IQs (as countless studies - despite the best efforts of left-leaning researchers - show). This deficit has been exacerbated by a culture, enabled by politically-motivated narratives of victimhood, that eschews personal responsibility, hard work, education, respect for authority, or intact families. The current political dialogue, which paints them as helpless victims, ensures that no more effective culture will ever be created to replace it. The left can cancel and taboo as much as it wants (as always, a sure sign that the truth is not on your side), but no amount of social policy will alter the genetic and cultural realities - as the last 60 years of racially-informed policy failures clearly demonstrates.

Gunflint:

Wanda, I've slept on it and I'm going to give you this round. You have statistics at hand and I'm guessing this isn't the first time you have made this case. I am just going by 40 years of first hand experience. I can't see any explicit racism in your argument. I do see a remarkable lack of empathy.

A superficial web search shows Helmuth Nyborg isn't recognized as being correct. We've been through this before with William Shockley. The racist history of psychometric stats makes me queasy, and I'm not up to digging into that now.

The Anti-Defamation League says that the claim that 50% of murders are committed by blacks is a white supremacy trope, but I need to dig some more. The CapsLock of the word HALF is a bit of a tell though.

Ashkenazi Jews have a wonderfully adaptive social structure and do produce people with some eye-popping IQs, but unless you go back to the Old Testament, their oppression, while terrible, didn't include chattel slavery with new slaves created with the help of the owner - see Sally Hemings.

I'm not sure what the 27% figure in your take on the George Floyd murder means exactly but "badge of honor"? No. You did see the video didn't you? Again queasiness.

Finally it's not that discrimination happened, it's ongoing.

But for now I'm going to say you are ahead on points. The match isn't over though.

Mr. Doolittle:

Without weighing in on the broader conversation here, I can help with the 27% figure. I believe that's a reference to the share of individuals shot/killed by police who are black. Wanda is saying that because blacks make up ~50% of murders and violent crimes, the fact that *only* 27% of those killed by police are black is a sign of restraint. We would expect, a priori, that deaths at the hands of police would be a function of police interactions, and police interactions would be a function of crime, especially serious crimes. If all of those things are true, then Wanda would have a strong point there. You could also look at it from the angle that blacks make up ~13% of the population, so they are overrepresented by roughly a factor of two among those killed. I'm sympathetic to parts of both arguments and don't have a strong feeling that one is obviously correct.
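Since the disagreement is really about which denominator to use, here is a minimal sketch of the two benchmarks, using only the approximate figures cited in this thread (treat them as the thread's assumptions, not authoritative statistics):

```python
# Two ways to benchmark the 27% figure, using the thread's approximate numbers.
pop_share = 0.13     # cited share of the US population
murder_share = 0.50  # cited share of murders / violent crime
killed_share = 0.27  # cited share of those killed by police

# Benchmark 1: relative to population share.
print(killed_share / pop_share)     # ~2.1 -> overrepresented roughly 2x

# Benchmark 2: relative to share of serious violent crime.
print(killed_share / murder_share)  # ~0.54 -> underrepresented roughly 2x
```

The same three numbers support both readings; the argument is entirely over which baseline is the right one.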

[Comment deleted, Apr 14, 2023]

Gunflint:

I don’t think you mean the ‘crime is legal’ thing completely seriously. It, like everything else in life, is a lot more complicated.

Mpls police force is short about 200 officers. Saint Paul is down about 70.

There aren’t enough candidates to fill the jobs.

If you ask blacks in largely black neighborhoods they definitely would like to see better enforcement.

As would I and all of my for the most part white neighbors. The handful of black people on my block are on board too.

beowulf888:

While Hannan is correct that the west African Kingdoms happily enabled the export of slaves in return for trade goods, guns, and gold, it's delusional thinking on his part that slavery was not a significant contributor to the success of Britain's capitalist expansion.

For instance, by 1800, 11% of Britain's economy was derived from the plantation economies that used slave labor. That's about the same percentage that the US banking industry adds to our economy today. So Britain would have been much less wealthy without slave labor. And I don't know if anyone has done an analysis of the downstream impacts of the products produced by the plantation system, but I suspect that the Crown would have seen a significant reduction in tariff revenues without it—which in turn would have impinged on their ability to grow their empire—which in turn allowed Britain to grow their capitalist system.

Moreover, even after the Slavery Abolition Act 1833, US cotton (produced by slave labor in the US) continued to supply the mills of the Industrial Revolution in Britain. So Hannan's argument that capitalism and not slave labor made Britain rich seems specious to me.

OTOH, Toby Green in his history of the African slave trade _A Fistful of Shells_ shows how important the export of slaves was to the economies of West African kingdoms. The Arabs started the trade in medieval times, but getting large numbers of slaves across the Sahara was a difficult enterprise. The Europeans with their ships, trade goods, and gold made the slave trade ever so much more lucrative for the West African rulers. In fact, in the 16th and 17th Centuries Spain and Portugal would exchange ambassadors with these kingdoms, and the West African rulers were seen as equals to the Iberian aristocracy. So the argument that European countries were solely responsible for the slave trade is specious as well.

Anyway, there are true believers on both sides who will never be convinced. Personally, I don't believe in ancestral innocence. We all have ancestors who did horrible things to others, and who had horrible things done to them. After the NY Times 1619 Project was published (which I think is significantly flawed), a Danish friend of mine was chastising us in the US for not providing reparations for slavery to our black citizens. Being more than half Irish, I asked for reparations from the Danes for their Viking ancestors pillaging, raping, and enslaving my people. That went over like a lead balloon.

Nick R:

Non sequitur. The fact that X% of an economy "derives" from industry Y does not imply that the economy would be X% poorer without industry Y. Without Y, the resources used in Y would be freed up and used in industries W, X, and Z, and the economy might even end up richer than it was with industry Y, if industry Y used those resources inefficiently. When considering a counterfactual, you always have to ask: "compared to what?" When slavery was ubiquitous, it was entangled in a significant proportion of global GDP. But global GDP accelerated in the 19th century as slavery was abandoned.
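To make the counterfactual concrete, here is a minimal toy model (all numbers invented purely for illustration) showing that the loss from removing industry Y depends on how productively its resources are redeployed, not on Y's GDP share:

```python
# Toy counterfactual: removing industry Y does not shrink GDP by Y's full
# share if Y's resources can be redeployed. All numbers are illustrative.
gdp = 100.0
share_y = 0.11               # industry Y contributes 11% of GDP
redeploy_productivity = 0.8  # redeployed resources produce 80% as much

output_y = gdp * share_y
counterfactual_gdp = (gdp - output_y) + output_y * redeploy_productivity
print(counterfactual_gdp)    # 97.8 -> a 2.2% loss, not an 11% loss
# With redeploy_productivity > 1.0 (Y was using resources inefficiently),
# the counterfactual economy is actually *richer* without Y.
```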

Pangolin Chow Mein:

It’s all about sugar and calories. Sugar cane is a store of solar energy, and sugar is what fueled the Industrial Revolution from 1760 on. So people consumed more calories, which led to more productivity. Sugar was displaced by coal, coal by oil, and now oil by natural gas as the most important fuel. Slavery was at the foundation of the sugar trade.

Wanda Tinasky:

"I asked for reparations from the Danes for their Viking ancestors pillaging"

Love this. Whenever reparations come up, I do something similar and claim that Italy owes all of Europe for all of the plundering that Rome did.

"So Hannan's argument that capitalism and not slave labor made Britain rich seems specious to me."

Not to me. Given that slavery was ubiquitous, why were the Brits the only ones who got rich? I just don't buy the argument that slavery was some uniquely enriching institution. It was just the common technology of the era. The sine qua non of the industrial revolution was clearly cultural.

More broadly, I'm generally skeptical of these types of economic arguments because slavery is definitionally coercive and, as a rule, coercive policies lead to economic misallocations. It seems likely to me that, absent slavery, production would have shifted from cotton to something potentially even more valuable and in the counterfactual world where slavery never happened we'd be talking about the pivotal role that <random natural resource> played in the IR. Plus, I mean, imagine the scenario where slave labor never existed and the plantations used paid/indentured labor instead. That just redistributes profits from owner to a broader class of laborers, which plausibly (likely?) would have resulted in more downstream economic activity. Is that net-bad for the Crown? Seems dubious.

Of course that also would have raised the price of raw cotton, which would have affected the downstream industries with knock-on effects that are hard to know. Which is why I'm always skeptical of just-so stories like this, especially when it comes to economics. If there's one thing that's universally true about capitalism, it's that it's unpredictably adaptive. It's like a stream - if you put an obstacle in its path it'll just find another route. No one thing can stop it and no one thing enabled it. It uses whatever's available. 400 years ago that happened to be slavery. If there hadn't been slavery then it would've been something else.

Erusian:

The issue with the 11% figure is that it assumes that without slavery that economic activity wouldn't have happened. Which is false. If slavery had never existed there would still have been cotton plantations. They just would have used free labor.

However, cotton would have been more expensive and less money would have accrued to the aristocrats who owned the plantations. So the question is to what extent cheaper cotton (and indigo and sugar) and the profits of plantations were central to the development of a modern industrial economy. (And even that assumes that slavery had a long term effect of keeping prices low and profits high. There's an argument to be had that it didn't in the long term. Though not uncontroversially!)

There's no obvious answer because I agree (and everyone agrees) that cotton farms and international trade were necessary to capitalism. The question is to what extent slavery accelerated it. Unfortunately it's difficult to find base cases because industrialization was such a concentrated phenomenon and slavery such a broad one that it's hopelessly confounded. You just can't find a region that had significant international trade, significant industrialization, and didn't trade with slave holding regions to compare against.

Wanda Tinasky:

I have no idea what economic historians have to say about this, but it seems to me that slavery can be economically modeled as free labor plus an extreme regressive tax (a toy version of this model is sketched at the end of this comment). Isn't the argument that slavery enabled capitalism equivalent to an argument that regressive taxation and income inequality help growth? The sorts of people who are likely to make the first argument seem unlikely to be receptive to the second argument. You seem to be knowledgeable about the academic dialogue here; is this a point that's acknowledged or discussed?

"cotton farms and international trade were necessary to capitalism."

Could you expand on this a bit? Cotton was _necessary_? I get that it was the most important commodity in those days, but so what? It's not like capitalism would cease to exist without oil today. Wouldn't they have just discovered another supply chain to industrialize around?
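As promised above, here is a toy version of the "free labor plus a regressive tax" framing; the wage and subsistence figures are invented for illustration, not historical estimates:

```python
# Slavery modeled as free labor plus an extreme implicit tax: the worker
# produces output worth the market wage but receives only subsistence,
# and the gap goes to the owner. Numbers are illustrative assumptions.
free_wage = 100.0   # hypothetical market wage for the same labor
subsistence = 20.0  # hypothetical upkeep the owner provides

implicit_tax_rate = (free_wage - subsistence) / free_wage
print(implicit_tax_rate)  # 0.8 -> equivalent to an 80% tax on labor income
```

On this framing, "slavery drove growth" and "an 80% regressive tax on labor drives growth" are the same claim, which is the tension I'm pointing at.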

Melvin:

> I have no idea what economic historians have to say about this, but it seems to me that slavery can be economically modeled as free labor plus an extreme regressive tax

I laughed recently when I read that the primary means by which the Spartans oppressed and lived off the labour of the helots was a tax of 50%. That's right, if you were a helot, the most oppressed people in history, you knew you were oppressed because the Spartans would take half of what you produced and leave you only with the other half to feed your family.

Whereas I am a free man living in a first-world society so I only get taxed at (something asymptotically approaching) 47% and I get to keep a whopping 53% to feed my family.

alesziegler:

I would say that Erusian's rendering of the left-wing argument isn't overly steelmanned, to put it mildly.

The actual good left-wing argument in the direction of "slavery enabled capitalism" is roughly as follows: the slave system shifted the relative economic balance among European elites from the traditional feudal aristocracy to the bourgeoisie, which enabled the bourgeoisie to increase its political power, which ultimately led to a breakdown of the feudal order and its replacement by capitalism. There are various problems with that argument, but I am not going to go into them here.

As for inequality, while it is true that some pro-egalitarian policies are bad for economic growth, it does not follow that there cannot be an inequality-decreasing policy that is good for growth. Like, in the second half of the 20th century, economic growth in both the US and Western Europe was higher than in the 19th century, under much more egalitarian policies. It is possible that with 19th century levels of (non)egalitarianism, growth in the latter period would have been even higher, but that isn't, you know, a slam dunk.

Erusian:

I didn't actually represent the left's argument that slavery enabled capitalism. I simply pointed out that you can't just do a simple "X% of the economy involved slavery" analysis. So saying it wasn't steelmanned is true only in a technical sense.

Your second point on inequality is wrong on the statistics. Average American economic growth as a percent was higher in the Gilded Age and persisted for a longer period (even through the Long Recession). Also, inequality sharply increased in the 1950s until the recession in 1960. It started from a lower level due to the Great Depression, New Deal, and WW2. Though it also started at a lower level in the Gilded Age due to the Civil War and the freeing of the slaves, along with things like land distribution. In both cases inequality and wealth increased through the period rather sharply. Likewise inequality growth slowed or reversed during WW1 and Stagflation. Then grew again in the boom times of the '80s and '90s. In a simplistic "does this correlate" analysis, economic growth does correlate with increasing inequality.

Your argument is typical of the literature though. I agree it's possible that some pro-equality policies are good for growth (in fact I could name some) but setting the bar at "it is possible" is giving the side you want waaay too much freedom.

Martin Blank:

Mostly inequality is driven by the rich getting richer, not them literally taking from the poor. Which is the opposite of how people conceive of it. The pie grows, and the rich capture most of the growth for themselves.

Instead it is mostly told as the story of a fixed pie and that inequality comes from the rich stealing.

alesziegler:

>I didn't actually represent the left's argument that slavery enabled capitalism.

I was referring to your endorsement of the idea from OP that “the argument that slavery enabled capitalism [is] equivalent to an argument that regressive taxation and income inequality helps growth”. It isn’t, or at least I feel that it is a weakmanning of the standard Marxist argument. But I didn’t thread my comment correctly, sorry about that.

>Your second point on inequality is wrong on the statistics. Average American economic growth as a percent was higher in the Gilded Age and persisted for a longer period (even through the Long Recession).

I wasn’t referring to the Gilded Age, but to the 19th century, and just eyeballing the graph here (https://ourworldindata.org/grapher/gdp-per-capita-maddison-2020?tab=chart&stackMode=relative&country=~USA; EDIT: I have grown concerned that my sophisticated statistical method of eyeballing the graph is not sufficient, but I still don't have the time to do a proper analysis; Wikipedia (https://en.wikipedia.org/wiki/List_of_regions_by_past_GDP_(PPP)_per_capita) cites the Maddison project as showing economic growth per capita from 1820 to 1913 being indeed slower than from 1950 to 2008, in 1990 international dollars, but data from the 19th century are not as easy to find as I thought, so I have to downgrade my certainty about that), it sure looks like US growth was higher in the second half of the 20th century than in the 19th century, and the same seems to hold for the Gilded Age (Wikipedia tells me that it was "roughly" from 1877 to 1896). Or am I missing something? I thought I was citing an uncontroversial point of economics (a quick version of this comparison is sketched at the end of this comment). Perhaps I should clarify that I am using growth per capita, since overall economic growth is confounded by population growth.

Now you are right that inequality increased in the second half of the 20th century, so I was careless with my terminology. I was gesturing towards the fact that in the second half of the 20th century US, as well as Western Europe, had the welfare state, which in 19th century it mostly hadn’t.

But I do think that Western style welfare state does decrease inequality compared to the laissez-faire counterfactual and that is far from clear whether such welfare state is bad for economic growth compared to “night-watchman state” closer to the 19th century economic policies; although I admit this is confounded due to actual 19th century policies deviating in many respects from economic libertarianism.
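For what it's worth, here is the quick version of the comparison mentioned above; the endpoint values are approximately the Maddison project's US GDP per capita figures in 1990 international dollars, and should be treated as rough assumptions rather than exact data:

```python
# Compound annual growth rate between two endpoints.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# US GDP per capita in 1990 international dollars; approximate Maddison
# project values, used here only for illustration.
print(cagr(1257, 5301, 1913 - 1820))   # ~1.6% per year for 1820-1913
print(cagr(9561, 31178, 2008 - 1950))  # ~2.1% per year for 1950-2008
```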

Erusian:

> Isn't the argument that slavery enabled capitalism equivalent to an argument that regressive taxation and income inequality helps growth? [...] You seem to be knowledgable about the academic dialogue here, is this a point that's acknowledged or discussed?

Yes. Right-wing thought generally tends to emphasize that accumulation of capital in the most productive classes increases aggregate growth, which ultimately benefits everyone. Far-left thought (as in the Soviet Union or China) meanwhile follows a rather complex Marxist line that nets out to the same (or a similar) idea: tax the poor and concentrate resources in the wealthy, productive regions with pro-growth policies, and this will eventually benefit everyone. (The big difference is the right tends to think this will happen without state intervention.)

It's only the western left who, in roughly the 1960s/70s, recoiled from the idea. They were increasingly concerned with income inequality as a social justice issue. But if they admitted the pretty universal economic consensus of the time (that inequality was good for growth, at least in developed countries) then they were facing a stark trade off. Reduce income inequality or economic growth.

Instead they did what appears to me to be a bunch of highly motivated research to prove that inequality was bad for growth. A lot of which ends up going off into tangents. Things like, "Well, we can't prove that inequality is bad for growth. But inequality could lead to underinvestment in education. Or social unrest. And that's bad for growth!" In one example I saw that it'd create a higher savings rate and that'd be somehow (they didn't explain) bad for growth. And this is without getting into dodgy statistics.

> Could you expand on this a bit? Cotton was _necessary_?

Cotton farms and international trade are meant to be general terms for the kind of increased production and trade of that production that required complex financing and coordination mechanisms. I agree cotton itself was not strictly necessary (though it had several useful traits).

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Being fully Irish I don’t believe that the British working class benefitted from slavery, but we know who did because records were kept. The solution seems obvious to me.

Expand full comment
Ian's avatar

Obviously true, but the left has decided that racism against white people is a winning card; so they're not gonna stop playing it anytime soon.

I'll mention two quotes I think are worthwhile to heed in the context of this situation: Jeffrey Tucker's "If you don't have a faith, I recommend you acquire one immediately," and Saint Paul's "For in Christ there is neither Jew nor Gentile,"

Lest ye be tempted down dark paths of retribution.

Expand full comment
Gunflint's avatar

I know this isn’t everyone’s cup of tea here. But I think there are at least a couple “Succession” fans in ACXdom.

Logan Roy obituary in the LA Times

https://www.latimes.com/entertainment-arts/tv/story/2023-04-09/logan-roy-succession-dead-obituary

Expand full comment
Deiseach's avatar

Flip's sake. With all the publicity, I thought this was a real actor who died (can you tell I haven't watched this thing?), like Lance Reddick.

If they're going to play cutesy games with reality like this, we shouldn't be surprised the ChatGPTs of the world do the same and we should worry about ourselves, not a machine that only copies what it sees others doing.

Expand full comment
Gunflint's avatar

I had recorded the episode where Roy dies and had read an interview with Brian Cox discussing his character's demise before I watched it.

The show has some earnest fans with betting pools predicting which of the Roy kids will take over the empire.

Yeah it’s kinda goofy, but it’s fun.

Expand full comment
Nancy Lebovitz's avatar

How have LLMs been marketed? I see people saying there was a claim that LLMs are a source of accurate information, but I thought it was more like "Here is the cool thing, see what you can do with it."

Expand full comment
Dirichlet-to-Neumann's avatar

Yudkowsky actually made grandiose claims that ChatGPT had made Google search obsolete.

Expand full comment
TasDeBoisVert's avatar

>How have LLMs been marketed?

By their providers? Not at all. By their users? In every way imaginable. Most, I believe, as a "here's a cool thing", but there is also a decent subgroup that really does see it as a modern-day Oracle and will trust any answer it gives.

Expand full comment
Kei's avatar

They're being marketed by their providers quite a bit. A number of orgs that own particularly large LLMs (and other large generative AI models) are competing with one another and selling furiously to businesses to get them to use their model APIs and/or GUIs.

Expand full comment
Viliam's avatar

The first few times this was posted on ACX, I wrote something in response. Now I just wish we had some functionality to automatically link the old discussions.

Expand full comment
Drethelin's avatar

Taleb is a bombastic asshole who hasn't had a good idea since the black swan, and his current pastime is literally picking fights on purpose.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

All a bit of a gish gallop.

Expand full comment
Cornelius's avatar

Here's the Pallesen piece that Taleb berates at the start of his article: https://archive.is/kzy5i. Pallesen identified many clear mistakes in Taleb's argument. I haven't read the current version of Taleb's article, perhaps it's better now.

Expand full comment
Deiseach's avatar

That one made me laugh because about two paragraphs in, the amount of "*somebody* got a bad score on an IQ test back when they were ten and have never stopped brooding over it since" was too much, it slopped out of all the buckets and I'm mopping up my floor right now 😁

While I'm glad Mr. Taleb thinks IQ tests are baloney so that means so what if they tell me I'm dumb, *I* know I'm smart, and there really is too much emphasis on "what score did you get when you were ten?", I think he might possibly be motivated by more than mere love of science here.

I can't say because I have no way of knowing, but I've only heard people being this vehement about something when it touched them on the raw.

Expand full comment
Nobody Special's avatar

Couldn't he just throw a riff on the same dismissal back at IQ adherents, though?

"This defense that surely IQ is valuable makes me laugh - looks like *somebody* got a good score on an IQ test when they were ten, and since then has been unsuccessful in career, social, and other status dimensions and is left grasping at self-esteem straws. 'So what if they tell me I'm low-status, *I* know I'm smart and special on a metric that has value - they proved it with a test when I was ten.' Seems like the apologist might possibly be motivated by more than mere love of science here."

Expand full comment
Wanda Tinasky's avatar

Sure, he could. People can insult each other and question motives all day long, but none of that gets us any closer to understanding the underlying reality. To understand the underlying reality we have to look at the actual data, and once you do that then everything Taleb says can be shown to be objectively wrong - as Pallesen's point-by-point rebuttal demonstrates.

Expand full comment
Deiseach's avatar

I do think the Mensa-style groups are precisely that: good scores when young, failed to achieve when mature, huddling together in a clique to mutually reassure each other of their smarts. And hey, if it's acting as a support group for fragile people, I won't mock them (much).

At the same time, I don't think IQ is meaningless - even Taleb thinks they're mostly useful for identifying morons (to use the old-style mental health classifications). Having a giant blow-up about it in the manner of that article does sound like it's personal, rather than "Hm, I believe we place too much emphasis on measurements and here's why" in a more dispassionate way.

Expand full comment
JDK's avatar

He's a mathematical statistician, among other things.

But I like how you went directly to an ad hominem (see reference to Popper).

Expand full comment
The Ancient Geek's avatar

I don't think it's classic Ad Hom if you invent the personal characteristic you are objecting to. Not that it's any better.

Expand full comment
JDK's avatar

The accusation was that Taleb was suffering from sour grapes due to some imagined childhood experience.

I think Taleb's beef has nothing to do with personal experience and everything to do with his contempt for scientistic work that lacks statistical understanding. What he would label as BS is even worse because of its use to justify stereotyping and racism.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Do you understand his use of statistics, or do you assume he's correct? Taleb is a horrible writer. Polemic is fine, but I have to wipe the spittle off my screen after reading him.

Expand full comment
JDK's avatar

I understand quite a bit of the statistics. His writing style is in part a reflection of his connection to Europe, the Mediterranean, and the ancients. He is an academic/scholar in the way that word would have been used 100 years ago.

Expand full comment
Deiseach's avatar

Oh no, did I transgress the sacred and inviolable name of POPPER???? (kneel, beat my breast, bewail my sins).

How can I be forgiven this heinous offence, even in the holy season of Eastertide? Let me go cry down salt tears over my many and various failings, in the words of "The Big Sleep":

"I don’t mind if you don’t like my manners. They’re pretty bad. I grieve over them during the long winter evenings.”

Expand full comment
JDK's avatar

Did you actually read Taleb's article?

Because in it there is a reference to Popper, and to the situation where psychologists, faced with a hard-science critique (e.g. a statistical critique of IQ measurement), will resort to ad hominem labeling, essentially like you did.

Expand full comment
Viliam's avatar

Naturally, not all psychologists are qualified to discuss statistics at an advanced level. In a perfect world, everyone would be an expert on everything, but sadly we are not there yet.

However, Taleb is not discussing honestly. If you disagree with him about the statistical claims and debunk his arguments, he will simply block you on Twitter and pretend that it did not happen, and that all arguments he ever gets are ad hominems. (Yes, those happen too. Everyone who is on the internet gets ad hominems. That alone does not make you right about stuff.)

Expand full comment
JDK's avatar

"In a perfect world, everyone would be an expert on everything" - that would not be a perfect world!

Expertise, subject to vigorous cross-examination.

Expand full comment
Deiseach's avatar

Still sounding more like "I got some low score on some dumb test when I *know* I'm Really Really Smart, so it's all dumb and they're giant poopy-heads", to be honest.

I don't think IQ scores are all that, but they are measuring *something* (probably mathematical ability/spatial processing), and when someone as thin-skinned as Taleb throws a giant hissy fit about something where they say everyone in the field is a giant poopy-head and only they are right, my prior on that is "Who stepped on his toes?"

Expand full comment
JDK's avatar

Really?

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Deiseach called herself stupid; she admits to doing badly on IQ tests herself. The only person on here to do so.

Expand full comment
Deiseach's avatar

I yield to nobody in my proud flaunting of my lack of anything between my ears! I got a sub-average score on a Ravens Matrices online test and I'm proud, baby, proud! We're here, we're dumb, deal with it! 🤣

Expand full comment
JDK's avatar

I have no idea what my IQ is. It is just a nonsensical metric.

Expand full comment
Hank Wilbon's avatar
User was temporarily suspended for this comment.
Expand full comment
Scott Alexander's avatar

In the past I haven't banned posts like this because it seems bad to ban criticism and maybe playing into their hands. But I'm pretty tired of hearing these same contentless insults, again and again, with no supporting evidence or interesting arguments. I'll ban Hank for one week this time, but probably the next person will get it worse.

I continue to be interested in anyone with more contentful or better-thought-out criticism.

Expand full comment
Daniel B. Miller's avatar

I don't see how to link to a comment but I think the thread I started a couple days ago led to some nuanced discussion. The subject line was "What if Eliezer Yudkowsky is wrong?"

Would love to get your feedback.

Expand full comment
Yug Gnirob's avatar

Clicking on the post I was expecting it to be the one about the scholarship, but this one managed to spawn a discussion about what being a cult actually means.

Expand full comment
Carl Pham's avatar

If it was genuinely contentless, the several people below arguing with the comment would have had no place to start a response, and there would have been no back and forth with the OP. That doesn't seem to be the case.

Expand full comment
dionysus's avatar

If you wanted to refute the OP's claim that rationalists are a cult, the worst thing you could have done was to ban him for saying it, especially after admitting that you'll continue to decrease your tolerance for criticism. That's classic cult behavior.

Expand full comment
Drethelin's avatar

Nah that's total fucking bullshit. "Banning people who come into your place to call you stupid and brainwashed" is not cult behavior, it's standard behavior. One is not obligated to "refute" every single accusation randoms on the internet make.

Expand full comment
dionysus's avatar

Read the original post again. Is this a place specifically for doomers? I thought Scott was not himself a doomer, and that even if he was, he'd try to foster a forum where non-doomers are welcome. Also, is saying "many Rationalists are even more irrational than the average person on the street" (an opinion that most Rationalists share, given the wide divergence in opinion in the group) the same thing as calling Scott stupid and brainwashed, especially as Scott is not a doomer? And does declining to ban something require one to refute it?

Expand full comment
Viliam's avatar

Is there a difference between "agrees with someone's opinion" and "is a cult member"? If yes, what makes you believe that rationalists are more of the latter?

Expand full comment
Melvin's avatar

Calling yourself "rationalist" for agreeing with someone's opinion (and hence implying that agreeing with those opinions is a necessary component of being rational) is a little bit in the grey area between the two, I guess. The Objectivists have a similar problem.

Actually if I could grasp towards the somewhat culty property that both "Rationalists" and "Objectivists" have, it's not a problem of agreeing with a person, it's agreeing with a person on multiple unrelated subjects to an unreasonable extent.

To pick on the Objectivists for an example, let's suppose we have three propositions:

1. Capitalism is the ultimate good

2. God doesn't exist

3. Cheating on your spouse is morally acceptable as long as you do it with someone who is smarter than your spouse

It's perfectly reasonable that people who agree with Ayn Rand on all three of these are more likely to wind up as Objectivists, that's not what we're objecting to. And there may be some natural degree of correlation between beliefs in these three propositions. What makes Objectivism at least slightly culty is the fact that the degree of correlation between these beliefs becomes meaningfully and unnaturally higher than it would be in a world where Ayn Rand doesn't exist. It means that these people have started buying their opinions in bulk from Ayn Rand rather than reasoning about each issue separately. They are attracted to the group because they agree with A and B, and through social pressure they wind up believing C (and D, E, F, G etc) as well.
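(That signature is easy to make concrete; here's a toy simulation sketch. Every base rate, and the 20% "bulk buyer" share, is invented purely for illustration, not an estimate of anything real.)

```python
# Toy version of the "buying opinions in bulk" test: three binary
# beliefs with independent base rates, versus a world where some
# fraction of people adopt all three as a package from a guru.
import random

def sample_population(n, guru_share):
    people = []
    for _ in range(n):
        if random.random() < guru_share:
            beliefs = (1, 1, 1)  # opinions bought in bulk
        else:
            # independent base rates, chosen arbitrarily
            beliefs = tuple(int(random.random() < p) for p in (0.5, 0.6, 0.1))
        people.append(beliefs)
    return people

def corr(xs, ys):
    """Pearson correlation of two equal-length 0/1 sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

for guru_share in (0.0, 0.2):
    a, b, c = zip(*sample_population(100_000, guru_share))
    print(guru_share, round(corr(a, b), 2), round(corr(a, c), 2))

# With guru_share = 0.0 the correlations hover near zero; with 0.2 they
# jump (to roughly 0.14 and 0.33), which is the "unnaturally high
# correlation" signature described above.
```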

Expand full comment
Viliam's avatar

Well, the Objectivist propositions 1 and 2 are related, because if God exists, he is supposed to be the best, so 1 implies 2, even if 2 does not necessarily imply 1.

The third proposition is confusing. I mean, if capitalism is the ultimate good, you should be cheating on your spouse with someone who is *richer* than them, not smarter, right? Unless you believe those are the same thing, of course.

More seriously, was it actually cheating, or rather something like polyamory? The difference is, whether the spouse knows, and whether the spouse is also free to do the same thing.

Expand full comment
Ian's avatar

Honestly, C was definitely where Rand lost me when I was reading her. Still like her books, but, pretty major moral error.

Expand full comment
Eremolalos's avatar

We bicker constantly on here, ergo we are not a cult.

Expand full comment
The Ancient Geek's avatar

Lots of them agree with one person's opinion, on a wide range of subjects, that he isn't an official authority on.

Expand full comment
Eh's avatar

There is more to religion than the mere sharing of opinions. At a minimum you need eschatology, well-defined rituals, and a way to separate true believers from heretics. For now we have only the first element. The second does not seem anywhere near to emerging.

Expand full comment
Odd anon's avatar

Countering the claim that doomers are a cult:

* Non-doomers are a minority, at every level. People are concerned: from the most prominent experts, to the general category of informed people, to the general public at large. (I'm assuming you mean "doomers" in the sense of "people who think AI-related doom is a worrying thing" rather than the "99% chance of apocalypse" line of thought.)

* I've read EY's writings pretty heavily, and largely agree with his ideas on AI, while having a very negative opinion of his views and attitudes in general (and the views of a lot of the people around him), and I suspect this combination is very common.

* EY's "charisma" is awful. Like, really awful. Further, he clearly does his best (which is way better than most people's best would be, on this issue in particular) to make sure that nobody *ever* defers to him, including by having written half a book arguing against "modest epistemology" and generally pushing the "consider yourself the best person to figure things out" line.

Now, this isn't much of an argument against there being a cult of "internet rationalists" which is a very small subset of both AI doomers and readers of EY's writings, and which has an unusually non-deferential relationship with its uncharismatic cult leader and low levels of groupthink. But that's a separate issue.

Expand full comment
The Ancient Geek's avatar

>Non-doomers are a minority,

Doomers in the Yudkowskian sense, people who think we are all going to die soon unless we take drastic action, are a minority, even among people concerned with AI.

> EY's "charisma" is awful.

By Normie standards... but we can tell that the sort of people who would join a rationality movement find him charismatic, because they say so[*]. (Also, they admire people like von Neumann, who was pretty horrible by many standards.)

One of the main errors in understanding cults is the idea that the leader has powers that work over anybody, and exerts them consciously. In fact leaders need followers, followers need leaders, there has to be a fit between the two, and a lot of the dynamics are unconscious.

[*] Eg.,

"I have been accused repeatedly of being a cultist whenever I wage the rationalist crusade online, and naturally I refute such allegations. However, I cannot deny that I take whatever arguments Yudkowsky (makes whose reasonability I can not ascertain for myself as by default true; an example is the Many Worlds Interpretation of quantum mechanics whose Science is far above my head, but I nonetheless took it as truth—the probabilistic variety and not the absolute kind as such honour I confer only to Mathematics—and was later enlightened that MWI is not as definitive as Yudkowsky makes it out to be, and is far from a consensus in the Scientific community). I was surprised at my blunder considering that Yudkowsky is far from an authority figure on Physics, and even if he was I was not unaware of Huxley's maxim:

The improver of natural knowledge cannot accept authority as such; for them scepticism is the highest of virtues—blind faith the one unpardonable sin.

This was the first warning flag. Furthermore, around the time after I was introduced to RAZ (and the lesswrong website) I started following RAZ with more fervour than I ever did the Bible; I went as far as to—on multiple occasions—proclaim:

Rationality: From AI to Zombies is my Quran, and Eliezer Yudkowsky my Muhammed.

Someone who was on the traditional rationality side of the debate repeatedly described me as "lapping up Yudkowsky's words like a cultist on koolaid." I was warned by a genuinely good meaning friend that I should never let a single book influence my entire life so much, and I must admit; I never was sceptical towards Yudkowsky's words.

Perhaps the biggest alarm bell, was when I completely lost my shit and told the traditional rationalist that I would put him on permanent ignore if he "ever insults the Lesswrong community again. I am in no way affiliated with Eliezer Yudkowsky or the Lesswrong community and would not tolerate insults towards them". That statement was very significant because of its implications:

1. I was willing to tolerate insults towards myself, but not towards Yudkowsky or Lesswrong.

2. I was defensive about Yudkowsky in a way I'd only ever been about Christianity.

3. I elevated Yudkowsky far above my self and put him on a pedestal; when I was a Christian, I believed that I was the best thing since John the Baptist, and would only ever accord such respect to Christ himself.

That I—as narcissistic as I am—considered the public image of someone I've never interacted with to be of greater importance than my own (I wouldn't die to save my country) should have well and truly shocked me.

I did realise I was according too much respect to Yudkowsky, and have dared to disagree with him (my "Rationality as a Value Decider" for example) since. Yet, I never believed Yudkowsky was infallible in the first place, so it may not be much of an improvement. I thought it possessed a certain dramatic irony, that a follower of the lesswrong blog like myself may have become a cultist. Even in my delusions of grandeur, I accord Eliezer Yudkowsky the utmost respect; such that I often mutter in my head —or proclaim out loud for that matter:

Read Yudkowsky, read Yudkowsky, read Yudkowsky—he's the greatest of us all.

As if the irony were not enough, I decided to write this thread after reading "Guardians of Ayn Rand" (and the linked article) and could not help but see the similarities between the two scenarios."

Expand full comment
Eremolalos's avatar

"their extraordinary tendency toward group assimilation". Haven't you noticed that we disagree and bicker constantly? You don't find much "yeah, me too" in response to substantive posts. For instance present thread has much discussion of Yudkowsky doom prediction, and my impression is that considerably more than half of posters do not believe Yudkowsy's right. Except for being oblivous to the diversity of opinion here, and how free people feel to argue and disagree, you make some really great points about . . . um . . .

Expand full comment
The Ancient Geek's avatar

I've always thought that we, the Codexians, are the liberal branch.

Expand full comment
Hank Wilbon's avatar

Scott isn't the leader.

Expand full comment
Hank Wilbon's avatar

I give the Yud/doomer/rationalists the same odds of being correct as I do the Scientologists. Hubbard was also brilliant and well read in the sci-fi world. Many of his early followers were smarter than your average bear.

Yet, believe it or not, I think Yud and El Ron are both too-smart-by-half fools, who both unfortunately have influenced too many young smart people who could be doing better things with their lives than worrying about AI Safety or Theta Audits.

Expand full comment
Hank Wilbon's avatar

Yud seems the most like a leader. I'm not sure who is in the cult and who isn't. But many seem to be. He keeps his loyal followers in constant fear.

To be absolutely clear, I am just speculating from far away. I don't know anything for sure. It all just seems real freaky and cult-like that so many people would have the same crazy beliefs.

Expand full comment
Thwap's avatar

The funny thing here is that you’re calling Yud a “charismatic leader”... when in reality his lack of charisma has been a massive detriment to him getting his message across. Believe it or not, a lot of people listen to what he has to say because they actually agree with his ideas and he makes a compelling case. What do you disagree with specifically?

Expand full comment
Eremolalos's avatar

Yeah, you're speculating from far away. How about you come closer, read this thread carefully, and assess how much agreement there is with Yudkowsky, and how willing people are to doubt his ideas? Otherwise, when you talk about our "extraordinary tendency towards group assimilation", you're just another internet bloke generalizing about something you haven't examined thoughtfully.

Expand full comment
Hank Wilbon's avatar

Fair enough.

Expand full comment
Hank Wilbon's avatar

I discovered Yudkowsky's phrase "glorious transhumanist future" during searches in this open thread and plan to use that phrase many times in the future.

I think it's clear that doomers are insane in a Stalinist/Guyana punch/Millennialism way. I think you should all leave your cult and live in a less online world over the next few years. Read classic novels like War and Peace, Don Quixote and On the Road.

Expand full comment
Eremolalos's avatar

I'm listening to the most recent Ezra Klein podcast, about AI, and he's said several things I thought were quite smart. I thought I'd put them up here to see what people think of them:

-There's another alignment problem besides the AI/human race one that's important: the corporation/member-of-the-public alignment problem. Needs of corporations and needs of most individuals are not aligned as regards AI: corporations need to make money, and right now the best way to do that is to develop AI as fast as possible, and to develop the forms of it most likely to make money, which is currently chat-type AI. The public, on the other hand, needs various kinds of safety and stability -- and also for AI to do things of substantial benefit to people, for example figuring out all the protein foldings and doing other things in the realm of science that will improve medical care, air quality, methods of construction, etc.

-Asking for a 6-month pause in development is kind of a stupid way to put on the brakes. What, exactly, do you think the AI-building companies are going to be doing with those 6 months? Do you think they're all going to be reading Yudkowsky and holding 6-month meditation retreats? Seems likely there will be some form of continued work on AI development that's less obvious than what they've been doing so far.

Also, why set a limit that has to do with time? It would be much better to set one that has to do with meeting certain criteria about figuring out more of what goes on inside these black boxes. It seems to me that one thing on the list of things to figure out is wtf is going on -- EXACTLY wtf is going on -- when these things hallucinate? And what's going on when clever users are able to get the AI to break some of the rules it was given? Regarding both of these phenomena, it seems like madness to me to just plug the holes with chewing gum -- to just find some ways to block particular kinds of hallucinations or deviations from the developers' guidelines. We need general info about why these things happen, and some kind of restructuring that blocks all instances of them.

It would take someone with knowledge and skills I do not have to come up with a way to test how much insight we have into how these LLMs arrive at their responses, but surely there could be some criteria set -- maybe something on the order of a Turing test, except this one would be a Transparency test? Or maybe use the methods of cognitive psychology to assess what's going on under the hood. Cognitive psychologists have been able to figure out a fair amount about how the human brain works by purely cognitive tests. Things that come to mind: Priming: if I have you read some words that have to do with destruction, like wreck, ruin, etc., then give you a test where I say a word and you say the next word that comes to mind, it makes it likelier you will respond to "glass" with "break" rather than "window" or "water." Mental rotation: more complex rotations take longer, approximately as much longer as doing the physical rotation would. Apologies for these examples being kind of sparse, but I'm dragging my memory for things read quite a long time ago. But you get the idea; you can do tests that tell you something about how things are set up under the hood.
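(For what it's worth, here is a minimal sketch of what such a priming probe might look like. `query_model` is a hypothetical stand-in for whatever LLM API one has access to, and its canned answer exists only so the sketch runs end to end.)

```python
# Sketch of a priming probe for an LLM: show destruction-related words,
# then ask for a free association, and compare against an unprimed run.
from collections import Counter

PRIME_WORDS = ["wreck", "ruin", "shatter", "demolish"]

def query_model(prompt: str) -> str:
    # Placeholder so the sketch runs; swap in a real LLM call here.
    return "break" if "wreck" in prompt else "water"

def associations(cue: str, primed: bool, trials: int = 50) -> Counter:
    """Collect the model's one-word associations to `cue`."""
    counts = Counter()
    for _ in range(trials):
        prefix = f"Read these words: {', '.join(PRIME_WORDS)}.\n" if primed else ""
        prompt = prefix + f'Now say the first word that comes to mind after "{cue}":'
        counts[query_model(prompt).strip().lower()] += 1
    return counts

print(associations("glass", primed=False))
print(associations("glass", primed=True))

# If the primed distribution shifts toward "break" relative to the
# baseline, that is (weak) evidence of human-like spreading activation
# rather than simple lookup -- the kind of under-the-hood probe the
# cognitive-psychology analogy suggests.
```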

Expand full comment
Michael's avatar

I think we do understand LLM hallucinations. It's the default state: the model being wrong about something, and it's reduced significantly with larger, smarter models.

GPT-2 would spout nonsense all the time. GPT-3, 3.5 and 4 each do it much less than their predecessor as they're able to successfully answer a wider range of prompts.

There was also a shift when they changed to the "instruct" models. These models were trained to follow instructions (like ChatGPT) instead of just complete text. The old models weren't even trying to provide accurate information. They'd just complete whatever text you provided as if it were a creative writing assignment.

That's still the core of how they work. The LLMs are a giant neural network trained to predict what comes next, with additional training on top to follow instructions and be polite and helpful. There's no way to make sure it only outputs correct facts. It's not looking up facts in a database. But it's wrong less often in larger, smarter models.
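(A toy illustration of that point: a language model is, at its core, a next-token sampler, and a sampler always emits *some* continuation, with no built-in notion of "I don't know". The vocabulary and logits below are invented numbers, not taken from any real model.)

```python
# Why hallucination is the default: sampling always yields an answer,
# confident-sounding or not, even for a fact the model never learned.
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates after "The capital of Elbonia is"
# (Elbonia being fictional, there is no correct continuation to learn).
vocab = ["Paris", "Bern", "Zagreb", "unknown"]
logits = [2.1, 1.9, 1.8, 0.2]  # invented numbers for illustration

answer = random.choices(vocab, weights=softmax(logits), k=1)[0]
print(answer)  # almost always a city name, stated with equal fluency
```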

Expand full comment
Deiseach's avatar

"Needs of corporations and needs of most individuals are not aligned as regards AI: Corporations need to make money, and right now the best way to do that is to develop AI as fast as possible, and develop forms of it most likely to make money, which is currently Chat type AI"

This is exactly what I've been thinking all along. It's not the AI, it's the use we make of it, and so long as people/corporations are seeing dollar signs (particularly during an economic slowdown), all the handwringing appeals in the world are not going to stop them.

Expand full comment
Carl Pham's avatar

Odd. I could have sworn that corporations need to make money *because* the individuals who constitute them need to make money, e.g. to pay their mortgage and kids' college tuition.

So Ezra is saying that when 10,000 human beings collaborate to do something -- make shampoo and toilet paper, build skyscrapers, pump oil from the ground -- some new identity emerges, which has its own ambitions and concerns that have nothing to do with those of the individuals that make up the collaboration?

I wonder if it's done deliberately, e.g. all the Vice Presidents one midnight gather in a windowless chamber decorated with chthonic symbols and their trademark, and chant gruesome entreaties in the Black Tongue to summon the new spirit[1]. Or perhaps it's an emergent phenomenon, e.g. when you get enough humans in one of those cubicle-farm open workplaces, or in a company cafeteria, the new spirit wakes up like a brain regaining consciousness after sleep.

-----------------

[1] https://youtu.be/nsguT_V26wY

Expand full comment
Kenny Easwaran's avatar

When 10 trillion cells collaborate to do something, the new thing has its own motivations apart from those of the cells that make it up. When 10,000 ants collaborate to do something, the new thing has its own motivations apart from those of the ants that make it up. I wouldn't say they have *nothing* to do with those of the individuals that make up the collection, but it is quite clear that they are distinct from each other, and the same is very true for human organizations, like governments, corporations, universities, etc.

Expand full comment
Gunflint's avatar

Now this is getting interesting.

I spent a summer doing some coding as an undergraduate research assistant in an entomology lab. Hey, grad students have to have someone onto whom they can unload the crap work that their advisors load on them.

I watched some pretty complicated social behavior among trichogramma wasps under a microscope. They are tiny effers, about 0.3 mm in length. They operate with fewer than 10,000 neurons each. This is one of the reasons I tend to scoff at things like ChatGPT.

Here are 10,000 basic units; code up some complex behavior with these, please. I won't say that something that isn't just simulating intelligence will never be developed, but the way they are going about it now is probably wrong.

https://en.wikipedia.org/wiki/Trichogramma

Expand full comment
Carl Pham's avatar

Citation needed. Especially about the ants. Also curious how you know the motivation of cells, or what you mean by the word "motivation" in that sentence.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

EO Wilson’s books on ants are a good start.

Expand full comment
Carl Pham's avatar

Hopefully you can summarize. My motivation to read anything that, I'm told, asserts that ant colonies are conscious in some way individual ants are not is... very low. I thought it was a good joke or metaphor in "Gödel, Escher, Bach" but I did not take it seriously.

Expand full comment
Deiseach's avatar

Ah come on Carl. Corporations have so much profit sloshing around, it's worth their while to set up dummy companies in Ireland, funnel the profits through that, and pay a relatively low level of taxes, while at the same time they're cracking down on stapler use back in the home office.

It's Moloch. Do you think Jeff Bezos really needs an extra zero on his net worth? But as per the Rings of Power debacle, he's still chasing MOAR and BIGGER, because at a certain level, the thing can't stop swimming to sleep or it will die. At a certain size, corporations really *do* become entities of their own, and the boards of directors and CEOs are less in control than they are just keeping the wheels turning because the wheels can never be permitted to stop turning.

Expand full comment
Carl Pham's avatar

A very few corporations have a lot of profit sloshing around. Almost all of them don't, and a fairly significant chunk of them are losing money and will go out of business. It's important not to conflate the handful of enormous corporations one hears (relentlessly) about on the news with the tens of thousands that make up the real economy. A "corporation" is 5 guys with one truck and a plumbing or house-painting biz. It's a gardener and his half-dozen part-time assistants. It's a frozen yogurt franchise owned by a retired teacher and her husband. It's 30-odd people working for a feed store in Oklahoma, owned by the second generation of the family that started it. And so on. It's *also* Exxon Mobil or Pfizer, but those are the rarest of the aves, the very few who make it big and famous, and it makes no more a priori sense to consider them representative of "corporations" than it does to consider SBF representative of rationalists, or the career of Julia Roberts representative of what the life of a Hollywood actress is like.

I don't think Bezos needs an extra zero on his net worth. I doubt he does either. But what that extra zero would represent is his delivery of some massive value to the economy: he gets rich *because* he provides some new service that people find saves them time, money, effort, et cetera, or allows them to boost their own productivity somehow. He doesn't get rich the way government does, by pointing a gun at you and demanding your money, and he doesn't even get rich the way Arnold Schwarzenegger did, by buying up some prime LA property a decade or two before development reached it. He got rich, and stays rich, by providing a service that millions of people find saves them time, effort, or money, or which heightens their own productivity -- so much so that they willingly fork out their own hard-earned cash to acquire it.

For some reason, Bezos gets off on having millions of people say "Geez, Jeff, I value what you provide so much I'm going to send you a check." It almost certainly *isn't* the consumer fun the cash represents, because I agree with you at this point it can't really matter, there's only so much you can spend. So I assume he's one of those weird people who love being showered with praise (in this case folding green praise, the most sincere kind). A weird form of showmanship or pr0n-star attitude, living for the applause.

So Bezos himself probably craves the extra zero because it represents thunderous applause. And *we* might want him to get the extra zero because it would represent some significant new invention and improvement that would make millions of lives better.

No, I don't agree that at a certain size corporations become a thing unto themselves. I don't believe in any mystical emergent phenomenon. Corporations are run by people, are made of people, and in every case I have ever seen, it is 100% possible to trace any "corporate" decision or action back to some combination of individual human intentions (or failures). For the same reason, I don't buy any argument that a mob or army has an identity different from that of the people who make it up. People do behave differently when they are a member of a mob, or army, or giant corporation -- but that's not because they're bemused or possessed by an emergent spirit, but because that's an aspect of human behavior. We act differently when we are in groups -- but it's still *we* that are doing the acting, it's not some eldritch spirit of the collective that takes us over.

And the reason I dislike this framing is *because* it allows people to rationalize away their own bad behavior. Oh, I wouldn't have done this on my own, no sirree, I'm a good person, but I had to because I was a member of this group/mob/corporation/gang. I was just following orders! It was bullshit at Nuremberg, it's bullshit everywhere I've seen it, and it's one of the more contemptible illustrations of the tendency of human beings to rationalize their bad behavior away, or blame it on mysterious spirits and gollums, so they can avoid confronting their sins (and maybe doing something about them).

If there's too much greed and inhumanity in the word, it's *not* because of "corporations" -- and restraining or tinkering with their structure will do absolutely squat to fix it -- it's because of *people*, because that's what they're like, and if you want to fix it you need to work on *people* and how their character is constructed, and what restraints our social structures provide to individual decisions.

In the case of the comment to which I objected, I'm annoyed by the assertion (from someone old enough to know better) that if we only "aligned" the behavior of corporations with the individual humans from which they're made, things would be much better. This is soothing bullshit, taking a long happy drag from the bong. Corporations are *already* aligned with the behavior of the humans from which they're made, because there's nothing else there, no other motivating spirit. If you don't like the way they behave, then the real issue is you don't like something about human motivation and human behavior -- and that's where you need to focus your analysis.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

“He got rich, and stays rich, by providing a service that millions of people find saves them time, effort, or money, or which heightens their own productivity -- so much so that they willingly fork out their own hard-earned cash to acquire it.”

You are, as with most defenders of billionaires, ignoring the externalities here. The cost to the rest of the economy of Amazon being a retail behemoth. My feeling is that money is being moved around the economy here in what is largely a zero-sum game, some of it getting into Bezos's pockets in lieu of the small businesses you lauded at the start.

Expand full comment
Carl Pham's avatar

What externalities? I don't find it a priori plausible that the net result of Amazon is that the economy has gotten less efficient and people are on average poorer, work longer hours, et cetera. I believe people are rational actors, and if 50 million people choose to give their money to Amazon instead of Wal-Mart (or more relevantly Verizon or some ISP), then they are doing so because it works to their net benefit.

In order to demonstrate that Amazon's impact on the economy was even neutral, let alone negative, you would need to demonstrate that a whale of a lot of people are making economically irrational decisions -- or that there is some complex tragedy of the commons collective action problem. Either seems a tall order, but of course if you have an argument for either, I'd be interested to hear it.

Expand full comment
Shane Breslin's avatar

>>"I believe people are rational actors"<<

Isn't it fairly clear that this is *not* the case? Freud, Edward Bernays etc. The Century of the Self by Adam Curtis covered this in a lot of depth.

Expand full comment
Deiseach's avatar

Carl, we are not talking about a guy with a van and two helpers, and you know it. Microsoft may have started off in a garage but that's not where they're headquartered nowadays.

Believe it or not, I am not one of the "Eat the rich" types (though betimes I can be tempted to see the appeal) but whatever is fuelling Bezos, it is *not* beneficial for the world if he manages to capture every last unit of currency in existence by making us all customers of his business and his alone. I don't think he sits down at night to draw up plans to take over the entire global economy, I don't think the board or the CEO or whoever is minding the store today over at Alphabet does that.

*But* the rationale for a corporation is to make profit, and continue to do that, and to grow returns year-on-year. So you make 15% this quarter? Better get that up to 20% for next quarter, or the stock market will reduce your share price and that will be Very Bad for all concerned.

Thus, the impetus for the *corporation* as a legal entity and thing-that-operates-in-real space is not the same as the individuals which comprise the owners, management, workers, and customers.

Expand full comment
Carl Pham's avatar

You are if you're talking about "corporations." If the point was supposed to be "we should find a way to align the interests of Amazon with [list of virtuous people]" then that's what the author should have said. I responded to what was written, not to what I guessed the author would have said, had he had a better command of English.

Yes, I would agree with you that if Amazon captures all the commerce in the world, this would not be good. But I also don't believe it would happen, for that exact reason. People are not idiots or zombies. If they begin to see that Amazon is no longer a good deal[1], then they will start to shop elsewhere. If Wal-Mart or even a half-dozen hungry entrepreneurial-minded kids realize that Amazon is no longer as efficient -- because, say, they're siphoning off too much value to splurge on Maui retreat boondoggles for the corner office crowd, or to buy Jeff Bezos a solid gold breakfast table, then those competitors will promptly eat Amazon's lunch.

It's not like Amazon benefits enormously from network effects, like Facebook. You don't shop Amazon *just because* everyone else does, but because the price/selection/value is better than what you can get at target.com. Indeed, the easiest business in the world to compete with the big boys is e-commerce. It's way easier than competing with Sears or Macy's in the old days. You don't need to borrow $30 million from the bank to lease a big store and stock it full of merch and hire a brigade of sales help, you can just code up your slick Internet storefront, make a deal with some shippers, and get started. Amazon is actually unusually vulnerable to start-up competition[2].

Finally, I see very little difference between a corporation's motive for profit and any individual's motivation. What motivates each of us individually, in the economic realm? I daresay we would all like to make as much money as possible and spend as little as possible to get what we want. That is, we each individually want to maximize our profits, too. We want to minimize our costs, we want to be paid the greatest amount possible for our labor, we want to get the highest price the market will supply when we sell our house, or our homemade cookies at the farmer's market. I'm not seeing any important distinction between Amazon's motives and any individual's, in the area of economic transactions, and I think it's pretty reasonable to infer that Amazon's motives derive directly from those of its employees and shareholders.

--------------

[1] And in fact I think they are less of a good deal than they used to be, and I find myself checking reviews on Amazon and then actually buying somewhere else -- which means I'm heartlessly exploiting Amazon's work to get valuable info for free, and then cutting them out of the final deal by making it with someone else. I don't feel bad, because it's business. But by the same token, I'm not offended by Amazon's trying to get every dollar they can out of me. We're both out for our own interests first of all, me so I can have more money in the bank, and Amazon so that all their employees and shareholders can (and Jeff Bezos too, of course, but his income is a tiny fraction of Amazon's payroll).

[2] Which is my interpretation of why they keep trying to break out into other areas -- AWS (a big win), Kindles and Fires (meh), Amazon Fresh groceries and media production (ha ha ha). They know very well they're vulnerable.

Expand full comment
JDK's avatar

The rationale for "a corporation" is limited liability, plus a mechanism to assemble/raise capital.

If a business operates as a partnership, then each of the partners is personally jointly AND severally liable for debts and torts. Shareholders are not directly personally liable for the screw-ups.

The second reason is assembling/raising capital. Some guy really doesn't know anything about widgets but has some capital/money, so instead of starting a widget business on his own he gives the corporation money and gets limited-liability ownership (shares of the stock). (The buying and selling on the stock market turns out to be a sort of efficient way to value a business.)

We can skip for now the discussion of how managers, because they are paid in stock, are motivated to "make profit" in the short term to keep the stock price up, rather than to employ people and increase the quality of production (both common-good aims), which are long-term aims.

I am a so-called "soak the rich" type because St. Ambrose had it correct. Beyond a living wage everything is superfluous so long as a great majority of the world is not making a living wage. Private property is limited by the universal destination of the goods of the earth.

Expand full comment
nullxorvoid's avatar

That is not what drives the profit motive, though. Yes, corporations need to make money to pay for operating costs, some of which are wages, bonuses and other benefits to employees. But a large corporation making only enough money to cover these expenses will be seen as a failure in the long term, since profit will be nominally zero. The point of running a company is seeking rents to pay out to the stakeholders. Pension funds aside, most stakeholders in large corporations are not people concerned with putting food on the table; it's about money as a currency of power at that point. So no, corporations do not act in the interest of those that constitute them, they act in the interest of those that own them.

Expand full comment
Eremolalos's avatar

Yeah, I get it. But I think you could call the thing about corporations that is out of alignment with the needs of individuals an emergent property of groups of people organized into a corporation. Certainly corporations sometimes do things that few of the individuals in them know much about, and that very few would be willing to do as individuals -- for example, a pharma company's advertising and distribution of opiates.

Expand full comment
Carl Pham's avatar

I disagree. Particularly for a large corporation. A pharma company advertises and distributes meds in a way you might find awful because of the knowing decisions of a bunch of people to do that thing. And they weren't possessed by some spirit, and they weren't tricked into it.

If we find it horrifying, then the correct conclusion is that individuals are capable of a horrifying level of denial, rationalization, and irresponsibility, not that certain structures cause humans to suddenly act in some way outside of their natures.

And, really, given history, does that come as a surprise? What about the behavior of the Russian Army in Ukraine? Is that because all those hoplites are possessed by some eldritch spirit, summoned from Hell by Vladimir Putin dancing around an inflated wineskin? Or is just that ordinary people in fact really can turn into orcs, and are shockingly good at rationalizing it away when they do?

I dislike the framing because pretending that there is some spirit of the collective that can bemuse us allows us to absolve ourselves of our real sins and flaws -- and then we do not take sufficient care to work on them, to improve.

Expand full comment
Eremolalos's avatar

A few individuals at the corporation knowingly decided to advertise and distribute opioids in a way that did a lot of harm, but many who worked there may have had only a vague idea that "this year opioids are our main focus," and truly not grasped *why* so much money was coming in from opioids as compared to other drugs. And once you get down to the secretaries, the people in charge of keeping the building clean and in good repair, etc. it seems likely to me that these folks genuinely had no idea whatever how the money they earned was being made. I worked for several years at a very prestigious hospital and only late in the game learned about some appalling policies it followed. If it had not followed them perhaps it would not have had the money to pay my salary and allow me to reap the other benefits of working there, so I benefitted from the hospital's evil policies (as well as its good ones). Still, that's just not the same as my participating in developing and implementing those policies, and now that I am in private practice I in fact do not do the private practice equivalent of what the hospital did.

Expand full comment
Carl Pham's avatar

Yes, I'm sure there are always genuine innocents. But most of us are less so, and I'm especially dubious about those who preface any protestation of innocence by asserting the Evil Big Corporation Spirit made us do it. That does not strike me as the argument a man with a clean conscience would advance.

In any event, I'm not arguing corporations don't do evil. They absolutely do. I'm merely arguing that any evil they do is directly traceable to the hearts of men. We do not need to imagine some emergent evil spirit to explain it.

I recall a passage from Albert Speer's memoir that really stuck with me on this subject. He talks about meeting with Fritz Sauckel[1] once, after the latter had taken an inspection trip to some of the work camps. Speer recalled that Sauckel was clearly distracted and unnerved, and told Speer to never go there (the camps). And what Speer says is that he knew right away something was very wrong -- but he chose to say nothing, then or later, to Sauckel, and to be careful to inquire no further about how and where Sauckel found the labor on which Speer's industry utterly relied. I recall he said something like "Sauckel pointed to a curtain, and I knew as he did that the curtain concealed something hideous. Sauckel told me not to look behind it -- and I didn't." This was in part why Speer said[2] he accepted the merit of his 20-year sentence. He *could* have known, and if he were even ordinarily curious, or fully alive to his responsibility as a man, he *would* have known -- but he chose not to.

-------------------

[1] Who you may recall was hanged at Nuremberg.

[2] I emphasize "said" because later scholarship has cast some doubt on how accurately Speer described his history in his famous autobiography.

Expand full comment
Eremolalos's avatar

Well, I agree that everybody at a corporation has some responsibility, but I think it diminishes the less connection they have with the people who choose to do harm, and see clearly what the harm will be, and have the option to make another choice. Obviously we all are going along with some harm-doing all the time. I live in a country that consumes far more than its share of resources, and I'm not about to move somewhere that gets less than its share. I'm sure some of the things I own were made by sick, suffering, underpaid workers in poor countries. Maybe some of the workers were 12 years old. I'm not making any effort to find out which of my belongings were manufactured in the way that caused the most human suffering and get rid of it, and even if I did I'd still own lots of other stuff that caused some degree of suffering. Etc. Regarding the particularly awful thing my hospital did, I was absolutely stunned when I found out about it, but I kept working there for another year or two after. I did try to protect a few patients from being victims of that scam, and succeeded some of the time, but I couldn't protect all that many, and I did not go on strike and hand out leaflets to the staff about what was going on. And to be perfectly honest, once I had known for a while that the thing was going on (it was something that involved cheating Medicare & Medicaid) I got sort of used to the fact that it went on and didn't think about it all that often. So yeah, I have some responsibility, and so does everyone else who worked there, because everybody knew about it (well, all the professional staff did -- not the janitors). Still, differences of degree matter.

Think about it from the other point of view -- say you got one of your kids into college using that guy who was in the news a coupla years ago, who got smart people to take the SAT under the kid's name, faked credentials that they played some sport, etc. You decided to do it, you paid the guy, you discussed with him exactly what form of deception he'd use. Your wife hated the idea, but could not prevent you from carrying it out, and was not willing to inform the police or the school, which would have led to criminal charges against you and great harm to the kid applying to college. So eventually the scam works and your kid is delighted to get into a great school, one he thought he didn't have a good enough record to be admitted to. Still, it's quite possible he sometimes wonders if you somehow pulled a string, because he knows how much you wanted him to get in, and his grades and scores really were quite a lot below the average for admitted students, and maybe he once overheard some phone conversation you had that gave him a tiny hint at what was going on. So maybe he is a tiny bit responsible. But far, far less than you, right? There are degrees of responsibility.

Expand full comment
Deiseach's avatar

"Is that because all those hoplites are possessed by some eldritch spirit, summoned from Hell by Vladimir Putin dancing around an inflated wineskin?"

I believe in the existence of the Devil. Don't be so quick to laugh at "if you call them, they will come". Certain behaviour is a standing invitation, regardless of the intentions of those doing such things.

Expand full comment
Carl Pham's avatar

Who's laughing? I am not suggesting doubt of the existence of the Prince of Deception. I am only saying -- and I daresay I am not the first -- that he does not batter down any doors. The Devil only enters hearts that are open to him.

Indeed I think it's the case that you can be open to him without a lot of direct wish to be. You don't have to actively want to be evil, you can merely be careless, or selfish, or wilfully ignorant, or lazy, and with a bit of bad luck -- by being in an unlucky place at an unlucky time -- you can do enormous harm. There's a reason we humans need a religion, or at least a guiding philosophy, and a reason we should spend time in prayer, or at least reflection.

Expand full comment
Hank Wilbon's avatar

Ahhhh... There's always a corporate/public alignment problem. Hollywood makes too many shitty movies that may be bad for our souls. Popular music is soul-sucking. Tasty food makes you fat.

Yet every alternative to free market capitalism is worse. I say let's roll the dice on making everything legal, and if that kills us, at least we died standing and not on our knees.

Expand full comment
Alexander Turok's avatar

The NYTimes has a funny article about women who move to foreign countries in part because there isn't enough sexual harassment on American streets:

"Kacey Margo has been going on plenty of fun dates ever since she moved to Paris in October 2019. Men frequently approach her with the dramatic antics seen in Disney movies.

“This one guy was like, ‘I ran through traffic just to look into your eyes once, and if you don’t want to go on a date with me, I can die happy knowing that I just met you,’” said Ms. Margo, a 28-year-old English teacher from Los Angeles.

After studying abroad in Paris in 2016, Ms. Margo fell in love with the city (and its men). She found a gig teaching English in Paris and moved there after she graduated from Sarah Lawrence College in May 2019.

Now, Ms. Margo is living a dream of many American women who are seeking relationships abroad, some of whom cite the toxic dating scene in the United States."

https://www.nytimes.com/2023/03/31/style/finding-love-romance-abroad.html?smid=tw-nytimes&smtyp=cur

Expand full comment
Deiseach's avatar

"If you don't want to go on a date with me even though I ran through traffic to get to you, that's fine" is different from "Hey bitch, I'm talking to you, why are you ignoring me, you cunt?"

See, Alexander, one of those is corny and yeah maybe even cynical in its calculated approach of being over-the-top romantic, but the other is 'sexual harassment on the street'.

Expand full comment
Viliam's avatar

I guess different people have different preferences; no matter what the social norms are, someone will be deeply unhappy.

The inherent problem with sex and rules is that breaking the rules is a costly signal of being high-status, and high-status is attractive in general, and people are hypocrites.

Expand full comment
Eremolalos's avatar

Not to be humorless and literal but -- what the Frenchman did was not sexual harassment. When I was in Athens, some guy patted or squeezed my butt every time I left my little hostel. When I lived in NYC in my 20s, guys hollered from across the street about my tits and cunt, using those words. I do not think I have PTSD from those experiences or deserve a purple heart for surviving them, just want to make clear that being genuinely sexually harassed is truly unpleasant.

Expand full comment
Gunflint's avatar

Good for Kacey Margo.

Expand full comment
Eh's avatar

That’s a Pareto improvement

Expand full comment
Marty Nemko's avatar

I am offering mini-grants to any individual [ideally one I could pay through a 501(c)(3)] who would use the money for any one of the following:

1. Filling an otherwise unmet need of high-IQ people ages 5 to 20, where the grant is likely to help them live up to their potential to make a difference in their current sphere of influence or the larger society. "Make a difference" can be in a liberal, conservative, or apolitical direction.

2. Encouraging discussion of pro-merit issues, for example, the net negative of yet more redistribution of money and attention from people with greater potential to contribute to a better society to those less likely to. Like the previous and the next option, this must be used for an initiative that would otherwise go unfunded.

3. Taking a worthy step toward understanding the biological basis of reasoning or impulse control that would otherwise go unfunded.

Email me a brief proposal saying: 1. What you would do with the money. 2. What makes you a person likely to use the money well. 3. What would be the amount of the grant that would yield maximum benefit per dollar I'd give you. 4. Whether I could send the grant money through a 501(c)(3). Send your proposal to me at mnemko@comcast.net

Expand full comment
Viliam's avatar

Just thinking out loud about 1: I think in general what helps high-IQ kids is networking with other high-IQ kids, plus mentoring. Mensa seems to be proof that networking alone does not help much (though arguably the problem could be that Mensa is for adults, so maybe too late).

Instead of measuring IQ directly, you could also select by things that correlate with high IQ, for example kids successful in some competitions, such as math olympiad. So you could e.g. organize a competition, and then offer some mentoring to the winners... and thus avoid the whole topic of IQ and related controversies. (Disadvantage: you would only reach a subset of kids. But you do not have the capacity to help everyone, anyway.)

About 2: no comment on redistribution of money, but you also mentioned "attention", so the question is whose attention? Better networking among high-IQ kids would at least direct their attention more towards each other, and that sounds doable.

So, if I was an American and had some free time (neither is the case), I would probably think in this direction: Make some (e.g. math) competition for school kids, then offer networking and mentoring for the winners (e.g. in a form of a summer camp with lessons).

Expand full comment
Marty Nemko's avatar

Your point about math (or computer science) ability is good -- it might, net, be better than IQ, both to avoid controversy and because math and comp sci are more likely to be core to major societal contribution.

What I mean by attention is that so much focus in conferences, books, etc., is anti-merit. A formula for lowering society to an ever lower common denominator.

Expand full comment
Viliam's avatar

It also comes with a cost. As an example, in my city there are two schools that select students based on their math ability (thus, indirectly, by IQ), and one school that selects students based on IQ tests (something that would probably be illegal in many countries). All three schools have very smart students; the first two are not controversial, the third one is.

Now there is a competition where students build computer games as teams. All three schools produce good games. But if you look closely, the games produced by the two math schools are... a bit ugly, schematic. They have good code, but the graphics are like "if you guys spent 1 day actually thinking about it, the game could easily be 100% nicer to look at". The games made by the explicitly high-IQ students have both good code and great graphics, because they have teams with mixed abilities.

I am not saying that the computer games are important, but it seems to me like evidence that selecting purely on math on one hand selects for high IQ, but on the other hand introduces a certain... group blindness. Or think about LW/ACX meetups: would you rather visit one with 100% software developers and mathematicians, or one with 50% developers and mathematicians and 50% equally smart people with various backgrounds?

Expand full comment
Marty Nemko's avatar

All fair points, but we're splitting hairs. In the US at least, most effort and money is expended on the low, ahem, achievers -- in my view, a formula for reducing America to an ever lower common denominator.

Expand full comment
Viliam's avatar

I agree. Doing something is better than doing nothing, even if it's not enough. So, if I had unlimited money and time, I would probably do something like this:

Organize competitions for students of all ages. Starting with math and computer science, but generally anything that doesn't have bullshit criteria for success and requires intelligence rather than mere memorization. Promote them on the internet, by flyers at school, in any way possible. The first rounds would be online, to allow mass participation and filter out the incompetent. The last rounds would be offline, to filter out cheaters.

Celebrate the winners, and help them network with each other. Invite the winners to a summer camp. Make a web forum available only for the winners. Optionally, help them network across their disciplines, for example make a summer camp for winners of both math and chemistry competitions. Now we are moving towards an IQ-saturated environment, rather than a merely subject-specific one. Build a community.

Provide some mentoring for the winners. First, in their area of expertise; teach them things beyond the curriculum. Could be as simple as making an introductory lecture on a topic, and referring a good book or a website about it. Also, older students could give lectures to the younger ones. All lectures voluntary, of course. Second, some life advice, such as how to choose your university, how to self-publish if you are a writer, where to publish games if you are a software developer, how to save money and invest in passively managed index funds, etc.

If everything goes according to plan, you would have a community of people with high IQ, without ever measuring IQ explicitly; and unlike Mensa, it would also select for a certain level of "actually doing things".

Expand full comment
Marty Nemko's avatar

All good stuff, although I worry that contests yield too few happy (and in turn more motivated) people and too many who get demotivated. Believe me, I'm no fan of "participation trophies", but I think it's wiser to let the teachers and counselors in blue-collar schools pick the kids.

Expand full comment
George H.'s avatar

Waiting with a baited* brain for the book review.

*yeah I know.

Expand full comment
Goldman Sachs Occultist's avatar

School started a girl on gender transitioning without telling her parents: https://nypost.com/2023/04/06/mom-of-maine-girl-who-got-chest-binder-at-school-files-lawsuit/

Athlete assaulted by "trans" activists while at SFSU to give a speech, and the activists were praised by the college - https://edition.cnn.com/2023/04/07/us/former-ncaa-swimmer-riley-gaines-assault-san-francisco-state-university/index.html

And predictably, the left have defended the pro-trans side of these incidents, as they always do. Even if you want to say that transpeople should be respected etc, opposing the trans movement as a whole seems like the only sane move here.

Oh and now Canada is literally making it illegal to say "offensive remarks" in the vicinity of drag shows: https://abcnews4.com/news/nation-world/canadian-law-would-ban-offensive-remarks-within-100-meters-of-drag-performances-canada-ontario-lgbt-free-speech-

Expand full comment
Benjamin Ikuta's avatar

Why? From a libertarian perspective, I've never respected the authority of parents.

Expand full comment
Jacob Steel's avatar

Re the second article: What is your source for "the activists were praised by the college"? Nothing in the article mentions that.

Re the third article: Why did you choose to summarise "an opposition party (with 30 seats out of 124, in a state where the conservative government has 81) in one state of Canada has proposed..." as "Canada is literally..."?

Expand full comment
PthaMac's avatar

" opposing the trans movement as a whole seems like the only sane move here."

Yeah, that seems like quite a leap here. When you say "the left" defended these incidents, whom are you speaking of, exactly? Prominent politicians or thought leaders? Randos on social media? If you're going to paint a broad social movement as responsible for a few isolated incidents, I think a little more rigor is called for.

(The Canadian law is more alarming, but that's more relevant for Canada. I don't see how it's applicable to the US, where the free speech discussion has different contours.)

Expand full comment
Scott Alexander's avatar

I'm not sure I'm *exactly* on the pro-trans side of the first, but it doesn't seem outrageous or obviously wrong to me.

I've been vaguely adjacent to a few cases where trans teenagers run away from home. It usually goes something like this: they are suicidal and would rather die than be their birth gender. Their parents absolutely refuse to cooperate, yell at them whenever they bring it up, and ground them approximately forever for protesting against this. They decide they would rather live on the streets as their chosen gender than live in a comfortable home with their parents as their birth gender. The parents call the police to bring them back, maybe the police succeed, the parents try literally locking them in their room, and the kid escapes again as soon as the parents let their guard down for one second.

I am not sure who is well-served by the school/police/parents continuing to insist on misgendering these kids until the moment they turn 18, disown their families, and get competent therapy. If the case they're citing is anything like this, it seems far from obvious to me that the school's only possible responsibility is to enforce the parents' preference.

Obviously I agree activists should not assault people.

Expand full comment
John Schilling's avatar

There are certainly cases where parents should be excluded from decisions about their kids' medical treatment and the like, but that's a decision we usually leave to the courts, not hand off to any schoolteacher who wants to step in. Because, among other things, schoolteachers aren't equipped to do the job properly. They cannot, on their own, remove a child from an abusive home.

That being the case, what was the plan here? That the teacher would help the kid socially transition at school, and the parents would simply never find out about it? That plan obviously didn't work, and it was ridiculously naive at best to expect that it would. Or that the teacher would help the student socially transition at school, and when the parents found out that this had been going on behind their back and they'd been deceived about it, they would react *better* than they would have if the kid had simply said, "Mom, I think I'm trans, how are we going to deal with this?" Because I'm not optimistic about that one either. The only plan I can see "working" is the one where the parents find out, and freak out in a big enough way because of the extra deception that now the teacher *can* go to the courts and get the kid yanked away from their parents and into foster care, because that somehow counts as a victory.

Note that all of these, except the perfect-deception scenario, break the trust between parents and schools. I would think that after all the "noble lies" and other deceptions of the COVID years, we would know better than to damage social trust in the name of transient expediency. And it can only be transient, because once the trust is gone, the lies and deceptions don't work any more.

Now about those horrible abusive parents who will hound their kids to suicide rather than admit they might be trans. I'm certain their number is greater than zero. I'm not surprised, given your location and line of work, that you've been vaguely adjacent to a few. But I'm pretty sure they are mostly the boogeymen du jour, like the "stranger-danger" pedophiles and Satanic Ritual Abusers of previous decades, used to justify basically anything the relevant interest group demands because anything is better than being accused of being in league with the boogeyman du jour.

A parent isn't immediately and totally committed to supporting rapid gender transition the moment their kid expresses discomfort in early puberty? They must be one of those monster parents; we can't trust anything they say in this matter. A parent doesn't want to be accused of monstrosity and have their kids yanked away into foster care? They better immediately and totally commit to supporting rapid gender transition the moment a teacher or activist says that their kid said the T-word. Heads I win, tails you lose, where "winning" means gender transition every time the question arises and someone wants the Social Justice cred of "rescuing" a trans kid.

Or maybe it won't go that way every time, but where it doesn't, that's probably going to be on account of loving parents who want what is best for their kids and aren't convinced that means gender transition. Parents almost always love their children. Teachers, social workers, activists, bureaucrats, and judges almost never do. There are rare exceptions, but your prior shouldn't be "the parents might be the monstrous type; we can't trust them in this and have to work around them".

My prior is pretty much always going to be that the parents love their children, want what is best for them, aren't going to hound them to suicide, and that if they don't fully understand the situation then you're probably going to get the best results by explaining it to them. If I see someone trying to cut the parents out of the process, and they *haven't* made the case that these are the monstrous sort of parents, then I'm going to assume that they and not the parents are the villains of the tale.

Expand full comment
Michael's avatar

The article says the mom is a Democrat and never gave the school any reason to think she would react negatively if she found out her child is trans. She's just upset the school purposely concealed this information from her.

If true, I think that's a different situation.

Expand full comment
Kenneth Almquist's avatar

Since no one else has posted it, here is a link to the complaint:

https://www.goldwaterinstitute.org/wp-content/uploads/2023/04/Lavigne-Complaint-and-Exhibits.pdf

The complaint itself gives the mother's side of the story. If you scroll down to the exhibits, you can find two statements by the school, although these mainly say that privacy laws prevent the school from saying much about the case.

The complaint alleges that, “Plaintiff has never given Defendants cause to believe that A.B. will be harmed in any way by Plaintiff’s knowledge of such facts, nor is there any basis for such a belief.” However:

1) The child decided not to tell the mother. Presumably that decision was based on a judgement of how the mother would react to the information. The child likely made an informed judgement based on years of interacting with the mother. We do not know, and plaintiff may not know, what the child told the school counselor about this decision.

2) After learning that her child was trans, the mother pulled the child out of school and into home schooling. I doubt she did this simply because the school hadn't informed her that her child was trans.

3) Similarly, the mother told the school board that, “decisions made [by the school] drove a wedge between a child and her parents.” I'd wager that the “wedge” was the result of her learning that her child was trans (or more precisely, her reaction to the knowledge), and not anything the school did.

4) It appears that after learning that her child was trans, the mother contacted anti-trans activists, who used the situation to fuel the culture wars, resulting in school closures due to bomb threats. This affected all students at the school, in part because the total number of instruction days was reduced.

So, even based on only the mother's side of the story, it seems plausible that the school had good reason not to inform her that her child was trans.

Expand full comment
A.'s avatar

I think you're reading all of it wrong.

For 1), try putting yourself in the kid's shoes. Do you remember yourself as a kid? Among other things, I remember I wouldn't tell my parents that I'd had an ice cream, because my parents disapproved of ice cream outside the home (something about bad quality and about how it would make me sick). Kids routinely don't tell parents things that they believe would upset them.

For 2-4, try putting yourself in the parent's shoes. Imagine you're the parent - what are your reactions?

2) I'd be scared if I were the parent. I'd realize I had no idea what the school was capable of doing to my kid next, and, whatever that would be, I'd now know they were likely to conceal it.

3) I'd be upset that the school officials managed to convince my kid to trust them and not me and to conceal things from me. That would definitely be the wedge.

4) I would certainly want to be very loud about this, because there are other kids going to that school and other schools. Their parents should be warned that that kind of thing and quite possibly worse might happen to them. I would disapprove of bomb threats, of course, but I wouldn't expect them to happen, and silence would not be an option.

Expand full comment
Kenneth Almquist's avatar

The legal argument that the mother has a Constitutional right to be told that her child was trans seems wrong to me. The U.S. Supreme Court has held that parents have a right to direct the upbringing of their children, but rights under the United States Constitution are generally negative rights (that is, rights to stop the government from doing something) rather than positive rights (that is, rights to force the government to do something). The complaint says that the school was “withholding or concealing information from Plaintiff,” which sounds like a description of the school doing something, but is in fact a description of the school doing nothing. The complaint is actually taking the position that Plaintiff has a Constitutional right to force the school to inform the Plaintiff that her child is trans, which is a positive right.

As a thought experiment, let's change the facts a bit so we are dealing with a negative right. A state licenses psychotherapists and requires therapists to maintain patient confidentiality. Child tells therapist that the child is trans, but the therapist doesn't tell the mother because the state will pull his license if he breaks confidentiality. Now we can point to a specific action which the state has taken--setting up the licensing scheme--that prevents the mother from learning that her child is trans. Is the licensing scheme unconstitutional? I don't think so. The U.S. Supreme Court recognizes certain parental rights, but I'm pretty sure that the right to know things that your child decides not to tell you isn't one of those rights.

Expand full comment
Mr. Doolittle's avatar

Consider this. The school becomes officially aware (as in, paperwork to support this) that a child wishes to be known as a different gender. The school updates information systems with the new name and gender, instead of the name and gender the parent would know. Communications to the home always list only the name and gender known to the parents. Communications to the school community always use only the new name and gender.

Is that intentionally concealing, or failing to disclose?

That's the current approach of many schools.

Expand full comment
Deiseach's avatar

The story is light on details and we don't know the full facts, but the kid seems to have had some mental problems already and then at the start of puberty, expressed gender issues.

Bodily changes at age 12 can be a big deal! And if the new social worker (and let me admit bias right out of the gate; some kid with lip piercings and ear spools is not going to hit my "yes a reliable and responsible professional" buttons, yes I'm prejudiced) jumped on "that means you're trans" rather than "that means that, like a lot of girls and boys, you are having some trouble navigating this huge change", then that makes it worse to hide it from the parents.

So the chest binder worked for a bit, but what about when Mom wants to take her shopping for her first bra? That's something I was very reluctant about at that age, because I didn't like the way my body was suddenly changing without any input from me as to what was happening. This 'secret' couldn't be kept for long, it was going to come out eventually. And then what? I don't think the school as reported is helping its case, and indeed it sounds more like "waiting for an excuse to get the kid into care" on the part of the school and social worker. Let Mom come out with a 'transphobic' statement when she finds out what we've been doing, then we can call in the social services on the pretext that the kid is at risk of abuse.

Expand full comment
Deiseach's avatar

So how about if the social worker decided to give medications like anti-depressants to the 13-year-old and the school hid that from the parents?

If a child is having mental or psychological problems, that's going to show up at home. If the parents aren't aware of what outside parties are doing, that's going to make getting proper treatment more difficult.

Yes, there are abusive parents, but I don't accept that "not believing your 13-year-old daughter is really a boy" is abusive. However, even if we accept that to be abuse, the parent in question isn't your standard Bible-bashing, Republican-voting bigot:

"Lavigne, a Democrat who runs a mental health business, has told the outlet that her daughter began talking to a school social worker near the end of seventh grade about her mental health issues, including some gender identity matters.

The parent said she spoke with the staffer about the issues, but came away thinking they weren’t particularly serious.

At the beginning of eighth grade, the girl was assigned a new social worker, Roy, without Lavigne’s knowledge — and the mom discovered the chest binder in the girl’s bedroom, according to the outlet.

School officials in Maine have defended policies that exclude parents from gender transition counseling on the basis that some might have negative reactions if their kids asked to switch genders, the Maine Wire reported.

But the suit argues that Lavigne never gave school officials any reason to believe she would act that way toward her daughter if she found out about the counseling."

The child is going to be living at home, under the guardianship of the parents, for the next five years until they turn 18. Once they have the secret boy name for school, and the chest binder, what's next? Puberty blockers seem to be the next step, so are we happy to have the social worker secretly supplying these to the kid, with their family doctor kept out of the loop, and the school backing up the secrecy?

Does anyone think this kind of transition can be kept secret for five years? Eventually the parents will find out. And what if it's a case of social contagion rather than really being transgender? What if it's different problems, but the social worker leaps on "you're trans" as the answer?

Matters affecting the mental and physical health of children like this *can't* be kept a big secret from the parents. If the school really thinks this kid is in danger of being physically beaten and harmed by bigot parents, then get the social services involved and take them into care. But if it's just "Mom is so unreasonable", then tell the people who are the ones feeding, clothing, sheltering, and looking after the child, who are the ones who are the caretakers of the child, who are the ones who love and are concerned about the child.

EDIT: And for the cases where the kid runs away because they prefer to live as their chosen gender, there are cases like this one, where someone who began transitioning female to male at age 16 is now suing the clinic, claiming they were too young to make an informed choice and give consent. That's 16, not 13. Suppose the 13-year-old starts down the path, then later regrets it? What will the school do then - maintain that hell yeah, secret transitions of children by social workers who are not medical staff are just peachy and they should be immune from prosecution?

https://en.wikipedia.org/wiki/Bell_v_Tavistock

"Some time after January 2020, Evans passed on her role as complainant to Keira Bell (referred to in court documents as Quincy Bell) "who was prescribed puberty blockers by GIDS when she was 16, thus her name is included in the case label. Bell had a double mastectomy aged 20, and now regrets transitioning, which has left her with 'no breasts, a deep voice, body hair, a beard[, and] affected sexual function'. She may well be infertile as a side effect of the drugs."

Bell in hindsight described her transition as being related to her mother's alcoholism, struggling with puberty, struggling with being a lesbian, social isolation, and depression."

Expand full comment
Shjs's avatar

Didn’t bother to read all the articles, but the Canadian law is not a Canadian law. It is a proposal in the province of Ontario, NOT the federal government, by a member of the left-wing opposition party. The majority party is right-wing and far outnumbers the opposition. No one, including the MPP who proposed it, remotely believes it will become law. The purpose is to generate news, not to pass a law.

Expand full comment
User's avatar
Comment deleted
Apr 11, 2023
Comment deleted
Expand full comment
Nolan Eoghan (not a robot)'s avatar

False dilemma.

Expand full comment
Deiseach's avatar

Machine Interface, when we get a president of the United States declaring his support for rapist priests, get back to me on #bothsides.

https://www.whitehouse.gov/briefing-room/statements-releases/2023/03/31/statement-from-president-joe-biden-on-transgender-day-of-visibility/

Expand full comment
User's avatar
Comment deleted
Apr 11, 2023
Comment deleted
Expand full comment
Nolan Eoghan (not a robot)'s avatar

The whole Catholic abuse thing is massively overplayed - largely because Anglo societies are generally anti-Catholic. That said, it did happen, but it's definitely not being hidden right now.

Nor are Catholics, Catholic societies, or countries at the forefront of the anti-trans movement. Outside the US, the rebellion against that cult is from feminists. TRAs admit this when they call all their opposition TERFs.

However, there is no “moral panic” about gender transitions in children. It goes against all kinds of paediatric and medical practice to give drugs to children, or perform transformative surgery, on their say-so - without medical diagnosis. Trans-affirmative care often means no diagnosis except “confirming” the child’s belief.

Expand full comment
Bi_Gates's avatar

It's absolutely hilarious to me when wokies are like "Conservatives are just a bunch of stereotyping panicky farts being upset over everything", and then go on to display what stereotyping panicky farts being upset over everything they are.

I'm an Arab, Atheist, Anarchist, Vegetarian. Great combo, enough to make (wild guess) around 40% to 60% of typical US right-wingers hate me or at least treat me with a hefty dose of suspicion. As if this wasn't enough, I roleplay being a Muslim in public because of the social headaches if I don't. I don't think you could make someone more counter to your stereotypes if you had the chance to design him in a SimCity game.

I despise wokism with burning passion, an F35 jet engine is more gentle and chilling than my hatred of this ideology. There is nothing more disgusting to me than their maddened rabid rage at Free Speech, at negotiation and dialogue, at basic decency. If I'm ever in a position where I can harm someone adhering to this ideology, I will go at it with abandon, everything in my power short of throwing a punch. Nothing is too much, that's how wokism treats its enemies (== everyone, including its own adherents) and that's how it and its adherents should be treated : extreme unhinged defection, maximum punishment is so much less than deserved.

>a vulnerable minority

Poor poor fucking minority, the one that the White House and all major corporations and news outlets suck up to like a $1 whore. They are truly the most oppressed in this cruel world.

Expand full comment
User's avatar
Comment deleted
Apr 12, 2023
Comment deleted
Expand full comment
User was indefinitely suspended for this comment.
Expand full comment
dionysus's avatar

"as both the Catholic Church and the Southern Baptist Convention are facing humongous scandals of systemic and systematically-covered-up sexual abuse against children."

You mean abuses that have been universally known and the butt of jokes for years now, and that both the current Pope and the previous one have acknowledged? https://apnews.com/article/pope-francis-east-timor-vatican-city-religion-728a5af6ec4d36e54794a761fddd3fc4

Expand full comment
Deiseach's avatar

"a diversion from the fact that the actual abuse is happening among their ranks, that it is has been happening as far as we have records of these kinds of things, and that they have systematically covered it up"

"Nobody on MY side ever did ANYTHING wrong, it's all those awful people on the OTHER side!"

https://thebridgehead.ca/2022/07/04/drag-queen-who-danced-for-kids-charged-with-25-counts-of-child-pornography/

https://www.newsweek.com/sex-offender-busted-drag-queen-who-read-book-children-city-library-1365384

https://reduxx.info/rest-in-power-drag-queen-story-hour-uk-founder-fundraising-for-convicted-child-sex-offenders-funeral/

"the sudden attacks on trans rights accross the US and the uptick in borderline-genocidal rhetorics against transpeople among conservative US politicians and preachers are happening right as both the Catholic Church and the Southern Baptist Convention are facing humongous scandals of systemic and systematically-covered-up sexual abuse against children"

You're about twenty years too late with your "sudden attacks"; the heyday of the sexual abuse crisis reporting about the Catholic Church was back then, not today.

"Borderline genocidal" - TIL that protesting Drag Queen Story Hours is genocide. I'm sure the Ukrainians being bombed to hell by the Russians are glad to know they have it better than a bunch of guys in dresses who are *way* too interested in hanging around five year olds. Even the gay guys are starting to sit up and take notice about "Wait a minute, when the hell was drag a *trans* thing?"

Now, do I mean all trans people? No. But just like paedophiles took advantage of becoming priests, there are people who are taking advantage of being accepted for their 'unorthodox' lifestyles and getting support and validation.

Do I think avalanche_Genesis or the other trans people on here are a risk to children? No, but neither are they hanging around schools and libraries dressed up as parodic versions of femininity and insisting it's their human right to read things like "The Hips On The Drag Queen Go Swish Swish Swish" to four year olds:

https://www.youtube.com/watch?v=k9PJd-kj_6k

Expand full comment
Hank Wilbon's avatar

Yudkowsky says in his recent interview with Dwarkesh Patel:

"But if you have these things (AIs) trying to solve alignment for you, they need to understand AI design and the way that and if they’re a large language model, they’re *very, very good at human psychology*. Because predicting the next thing you’ll do is their entire deal. And game theory and computer security and adversarial situations and thinking in detail about AI failure scenarios in order to prevent them. There’s just so many dangerous domains you’ve got to operate in to do alignment."

*s are mine. It's an interesting notion, but I don't buy for a moment that LLMs are good at human psychology. If they were good at human psychology, they would be able to tell a funny joke or otherwise be intentionally funny. That's a key indicator of understanding human psychology, and so far LLMs fail horribly at the task.

Relatedly he says:

"Well, it’s not being trained on Napoleon’s thoughts in fact. It’s being trained on Napoleon’s words. It’s predicting Napoleon’s words. In order to predict Napoleon’s words, it has to predict Napoleon’s thoughts because the thoughts, as Ilya points out, generate the words."

Another interesting notion, but does anyone besides him believe the LLMs are predicting someone's thoughts and not merely the words?

Expand full comment
Michael's avatar

GPT-3.5 fails at comedy. GPT-4 is quite good at it. I've been playing quite a bit with having GPT-4 do comedy routines on different topics and in different styles. Its Robin Williams-style routine wasn't that good, but its George Carlin and Ali Wong were great. Had the whole room in stitches.

Expand full comment
Mr. Doolittle's avatar

How good is GPT-4 at its own original humor? I'm not very impressed if it can locate and repeat jokes or routines that were already identified as funny (by being incredibly popular). If it can write brand new jokes in the style of a particular comedian and also be funny, that would be pretty impressive.

Expand full comment
Michael's avatar

It is original humor as far as I can determine. Give it a specific topic and a specific comedian to imitate and it generates your custom comedy routine pretty reliably. Putting some of the better lines in Google or a plagiarism checker turns up nothing.

It's not really possible to prove with absolute certainty that a joke has never been told before.

Expand full comment
Deiseach's avatar

"Well, it’s not being trained on Napoleon’s thoughts in fact. It’s being trained on Napoleon’s words. It’s predicting Napoleon’s words. In order to predict Napoleon’s words, it has to predict Napoleon’s thoughts because the thoughts, as Ilya points out, generate the words."

Funnily enough, I considered something like this back when I was plotting a spy/adventure story which had duplicates, the good old "I'm the real James Fond!/No, *I'm* James!" scenario where the agency has been infiltrated by a ringer and they have to figure out which is the real agent (it wasn't a James Fond story, just to get that out of the way).

And my conclusion was that the imitator would be limited by only acting in certain ways; they would have been trained by their handlers that "James says this kind of thing and not that, does this and not that". So they could only, in every instance where they had to choose how to behave like James Fond to fool others, act like the *model* they'd been trained on - James does this not that, says this not that, and so on.

But the real James is free. He can choose to break out of the model and do Y not X this time, say Z not B. He can overcome his programming, so to speak, and *that* is how he would prove he was the Real James Fond: by doing the unexpected. The fake *can't* do that, because they have to keep up the act, but the real person is free to do the opposite. The fake can only drink Darjeeling with two sugars because James always drinks Darjeeling with two sugars, but the real James can say "gimme a hot chocolate with whipped cream".

Same here: the machine is trained on a model and can't break out of it, can predict based on "what did Napoleon say/do previously", but it's not predicting thoughts, it's following in the groove like the needle of a record player. The real Napoleon can think and do differently because he can get out of his groove.

Expand full comment
Odd anon's avatar

Re skill at human psychology: AIs currently win at Diplomacy. They know how to lie to achieve their aims. (See, for example, the AI which lied about being a blind person to get someone to fill out a captcha for them.) They seem to be able to guess at motives pretty easily. I wouldn't be surprised if some of the best human experts in psychology also weren't very funny.

Expand full comment
John Schilling's avatar

The last time I checked, AI only "won" at a restricted variant of Diplomacy with very different victory conditions than the original. Has that changed?

Expand full comment
Hank Wilbon's avatar

Diplomacy self-selects for nerds. Real life doesn't.

The greatest psychologists are novelists. No good novelist is unfunny.

Expand full comment
Carl Pham's avatar

One does wonder if he's ever heard of this weird thing human beings do called "lying."

Expand full comment
Jeffrey Soreff's avatar

Yup. Also, one very broad class of thoughts that (usually) don't make it into speech (let alone into online text) is admissions of ignorance. If, at a meeting or a class, someone asks a question directed at everyone present, the reactions of everyone who doesn't know the answer and chooses to remain silent are unrecorded. The LLMs' training sets must be systematically underestimating how often "I don't know" _should_ be the answer, just because of how discussions and publications work.

Expand full comment
Paul's avatar

There is something called ChaosGPT - https://www.youtube.com/watch?v=g7YJIpkk7KM&ab_channel=ChaosGPT - can someone please get this shut down!

JanPro on LW:

"Attempting to create (even weak) agent tasked with "destroying humanity" should be made very clear to be out of bounds of acceptable behavior. I feel that I want the author to be prosecuted."

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023
Comment deleted
Expand full comment
Carl Pham's avatar

Champerty and impiety.

Expand full comment
Hank Wilbon's avatar

Boyish mischief.

Expand full comment
Eremolalos's avatar

OK, Hank, are you noticing the trend? You quote Yudkowsky, and disagree with his claim that AIs are good at human psychology. One person says, well, they are good at lying (gives an example) and guessing motives. Next person criticizes Yud, says it's as though he has no idea what a lie is. Next person agrees about lies, and points out another way AI's repertoire is deficient: it didn't hear enough "I don't know" -- seems to be agreeing with the general idea that AI's grasp of human psychology is lacking some important parts. Next, we get to one person worried and angry about a GPT variant called ChaosGPT. Trebuchet and Carl Pham (plus of course you) all make fun of the poster's idea. In my opinion you just got a fair sample of ACX exchanges. Are you still thinking you are in a group where everyone is knitted together by the belief that Yudkowsky's right and AI doom is nigh?

Expand full comment
User's avatar
Comment deleted
Apr 11, 2023
Comment deleted
Expand full comment
Deiseach's avatar

Flummery, barratry, and privateering without a commission. That'll do 'em.

Expand full comment
Jack's avatar

I got tentatively (and informally) diagnosed with ADHD at the age of 32 last week. I was prescribed Concerta 36 mg - so far the meds have been extremely helpful. I’m about 80% confident in the diagnosis. I have an appointment tomorrow with someone else to Red Team that diagnosis and see if it’s something else. I have a number of questions:

1. What are the best ways to differentiate between ADHD and other disorders (anxiety, depression, something else)? I'm not fully convinced I have ADHD because 1) I have an excellent working memory, 2) I do not leave my seat or interject inappropriately, and 3) I do not engage in excessively risky behavior. Is that disqualifying, or do I just present in an atypical manner? No idea, and I don't have a ton of money to throw at the problem chasing a formal diagnosis from an adult ADHD specialist.

2. If you have ADHD or know someone who handles it well, what tips do you have?

3. Any opinions on ‘masking’ my ADHD? Like most of us here, I’ve got off the charts test scores on everything. Maybe being really smart in certain ways has helped me compensate for ADHD deficiencies? I’m also very vain, so I can usually get my shit together enough to keep up appearances in front of other people. However, I can literally *only* do shit if there is an immediate deadline or someone is gonna see me and judge me.

I’d love to chat back and forth with anybody who has experience with it. Thanks in advance!

Expand full comment
Thegnskald's avatar

My hyperactivity is highly contextualized; sometimes, I just pace. If I'm on the phone, I'm walking.

When I was younger, I had trouble keeping my feet still. They were in near-constant motion when I wasn't walking. Getting complained at about this enough eventually caused it to stop.

I used to describe myself as lazy but brilliant; never studied, did my homework in class in the few minutes the teacher was explaining what to do (sometimes resulting in things like my doing all the problems instead of only the odd ones as I was supposed to do, because I finished before the teacher got to that part), routinely stayed up late doing whatever needed doing the night before. Got a few points taken off multiple research papers because I didn't turn in my "first draft". (I edit as I go, and basically don't do multiple drafts.)

For a few years, once I got into the working world, it continued working fine. Indeed, in some respects, better than fine - the number of times somebody made last-minute changes to a project I hadn't begun yet, so I did not have to redo it from scratch, had a tendency to reinforce my "do everything at the last minute" attitudes.

However - as I've gotten older, deadlines have started to stop working for me. They -help-, but I used to be able to work on something overnight if it absolutely had to be done. Anymore, without medication, I'm liable to do something else instead.

Concerta worked alright - it was the weirdest state to be in. I had zero motivation to do anything, but also, nothing took any motivation to do; I could just do it. However, I had an odd sexual side effect the doctor had never encountered, and I switched to something else.

Expand full comment
Hoopdawg's avatar

There used to be a diagnostic term, ADD, which at some point got merged into ADHD ("inattentive type", or something). Which is perhaps correct, in the "whales are mammals" sense, but also extremely unhelpful, because it misleads people like me and you into thinking we can't possibly qualify because we're the polar opposite of hyperactive.

My rule of thumb is that if you normally can't concentrate at will, but an immediate danger of failing makes you get your shit together (rather than escape or withdraw), and stimulants help you reproduce that state even without said immediate danger - you clearly do have ADHD. Even if you think of it as "a condition of benefiting from ADHD medication" - do make sure you have continued access to said medication. No genuine specialist should ever disqualify you for lacking hyperactivity traits.

Expand full comment
Jack's avatar

“My rule of thumb is that if you normally can't concentrate at will, but an immediate danger of failing makes you get your shit together (rather than escape or withdraw), and stimulants help you reproduce that state even without said immediate danger.”

100% how I am. I can’t really function, at all, without external pressure, but I’m competent when I absolutely must be. Stimulant meds and some therapy seem to be helping a lot, I’m having a great week so far.

Expand full comment
Julian's avatar

Just wait until they carve out Sluggish Cognitive Tempo from ADHD Inattentive Type!

"a condition of benefiting from ADHD medication" I think this is a good framing for a lot of psychiatric conditions. Unfortunately we dont have the technology to "objectively" prove someone has a psychiatric disorder so defining it by what medication you respond to is probably a good stand in.

Expand full comment
Hoopdawg's avatar

>Sluggish Cognitive Tempo

I genuinely never heard about it before, and I know what I'll be compulsively reading about today instead of working. Thanks, and I mean it sincerely; it seems like an important piece of information about my condition / an expansion of my conceptual model of it.

And certainly, the term is supremely misleading.

Expand full comment
Viliam's avatar

If you are highly intelligent and your symptoms are not too strong, you can often find a way to mask them. Which, ironically, can sometimes make things worse, because when you complain about your symptoms, people think you just made them up.

I don't think that any of the three things you mentioned are disqualifying for ADHD. 1) It's not about memory, it's about paying attention. Though if you don't pay attention, there will be nothing to remember. 2) Maybe you do not leave your seat, but do you suffer and wish you could? 3) An intelligent person can find interesting things that are not excessively risky.

Expand full comment
Jack's avatar

Lol my best friend dismissed it out of hand and cannot be bothered to hear my reasoning. Don’t blame him; there are few things less interesting than listening to someone dissect their own thought processes and behaviors.

And the memory thing is spot on. I think I might have just naturally developed a ton of coping mechanisms that eliminate problems:

- I always pat my pockets for essentials when I walk out the door. And 15 min later. And 30 min after that. Nope, never leave my wallet/phone/keys. Why did I start doing that, though?

- Relatedly, never lose my stuff. But that’s because everywhere I go I develop landing spots. Where’s my phone I was holding five minutes ago? Either the dresser, bathroom, dryer, etc... See - not lost!

- As a bartender, I would forget people's names constantly. Or rather, I never knew the name, because I was focusing on the handshake or another table's order and didn't actually register it. So I just started saying 'sorry, one more time?' And then repeating it back to them to reinforce it.

Many, many such cases. This whole thing has made me realize my mom and youngest brother both almost certainly have ADHD too. “Lol yeah of course mom said she’s gonna weed the garden, but we all know she’s just gonna spend two hours redesigning the back yard and then come in for a water break and leave all the tools out.”

Expand full comment
FLWAB's avatar

I do the pocket pat thing too, though I remember starting to do it after I locked my keys in my car for the second time. Now I pat my pockets whenever I leave the house or leave my car.

It has always been a source of shame to me that I'm not reliable, insomuch as people cannot rely on me to complete a task if I say I will. I never thought of it as getting distracted, I just forget that I said I would do it until reminded. It didn't really occur to me until just now that maybe I'm not as morally reprehensible as I thought I was for not following through. Maybe it's just the ADHD.

Expand full comment
Jack's avatar

I saw someone describe the difference between ADHD and lazy as this:

Lazy wants somebody else to do it, or doesn’t care if it gets done at all. ADHD assumes that when they don’t do it, nobody is gonna do it, and they feel terrible about it - and double terrible when someone else just goes ahead and gets it done.

Resonated with me. I've never been able to explain to myself or others why I can be a rockstar for an hour or a day, but never longer than a week. Or why it's been two years and I'm still paying 20 bucks a month for that stupid gym membership in another country.

Maybe now we know, eh? And maybe we can do something about it.

Expand full comment
Hoopdawg's avatar

One of the mental health benefits of being able to consistently do your fair share is that it becomes much easier to accept someone relieving you.

First, because you have done your fair share, but second, because you get a better grasp of how much time and effort things actually take at a regular pace, instead of during your bursts of hyperconcentration, so you stop assuming you could just reproduce that level of productivity full time and blaming yourself for not doing that.

Expand full comment
Viliam's avatar

That sounds just like me. (I have no official diagnosis, only a strong suspicion.) My wallet / phone / keys are either on the right end of my working desk, or in my pocket.

Expand full comment
FLWAB's avatar

I got diagnosed at a similar age, and I also was skeptical that I really had it because I did very well in school. It was only on reflection that I realized that while I wasn't jumping out of my chair in class, I usually brought a book with me to read and rarely took notes. I did well because I'm smart and good at test-taking and conscientious enough to do all the homework. Most of the time I couldn't pay attention; I just didn't notice that I couldn't, because it wasn't a problem for me.

Expand full comment
Jack's avatar

Exactly. I got nicknamed Sir-Read-A-Lot by my 6th grade teacher (RIP, awesome dude who died at 48 last month). How could the kid who quietly reads all the time have ADHD? School is boring, I already understand it, and I got the best grade - why would anyone think there was an issue?

Expand full comment
undercooled's avatar

Same; important to note that there are two subtypes of ADHD, hyperactive and inattentive, so hyperactivity is not required for a diagnosis. I am mainly inattentive-type but I do have some symptoms of hyperactivity; I don't like sitting in one place for too long and I do have an above-average appetite for risk.

Expand full comment
Jack's avatar

I'm probably the inattentive subtype - or, at least, it's the inattentiveness that causes all the problems. But it's kinda blowing my mind a bit to realize how many of my personal quirks might be symptoms of hyperactivity and impulsiveness.

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023
Comment deleted
Expand full comment
Jack's avatar

Like I said, I haven’t done a formal diagnosis yet. Do you think I should? Whether or not I officially have it, my problems are identical, and the non-medical solutions are mostly just good things for everyone to do. And I have a prescription for the meds. I’m getting a second opinion tomorrow, but still no formal diagnosis because it costs quite a bit more.

I did go back and look at my old Lumosity scores - not paying for premium to see the exact percentiles, but my lowest score was speed (80-somethingth percentile). Next lowest were memory and attention - but in the low 90th percentiles somewhere. Everything else was 99th. I'm not sure what SD those percentiles correspond to for Lumosity, but it seems like a similar spread.

Expand full comment
Jack's avatar

Lol, trying to remember how those stats work. The 99th percentile is about +2.3 SD, the low 90s about +1.4, and 80-something about +1.0 - so that would make memory and attention roughly 1 SD lower, and speed roughly 1.3 SD lower?
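For reference, percentile-to-SD conversion goes through the inverse normal CDF. A minimal sketch (assuming the scores are normally distributed, and using the 92nd and 85th percentiles as stand-ins for "low 90s" and "80-something"):

```python
# Hypothetical illustration: convert percentiles to z-scores (SD units).
from scipy.stats import norm

for label, pct in [("everything else", 0.99),
                   ("memory/attention", 0.92),
                   ("speed", 0.85)]:
    z = norm.ppf(pct)  # inverse CDF of the standard normal
    print(f"{label}: percentile {pct * 100:.0f} = {z:+.2f} SD")

# everything else: percentile 99 = +2.33 SD
# memory/attention: percentile 92 = +1.41 SD  (~0.9 SD below the 99th)
# speed: percentile 85 = +1.04 SD  (~1.3 SD below the 99th)
```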

Expand full comment
User's avatar
Comment deleted
Apr 11, 2023
Comment deleted
Expand full comment
Muster the Squirrels's avatar

> 99.6th percentile on verbal comprehension, but 75th on working memory and 50th on processing speed. And that is apparently pretty characteristic of ADHD.

How do disparities like this manifest in someone's personality, if they don't have the classic hyperactivity we normally associate with ADHD?

Expand full comment
Erusian's avatar

I feel like I have vastly different intuitions about how much getting there first gives an advantage in economic activities.

In my mind I think of things like Dutch traders bringing in tulips five hundred years ago and the Netherlands still completely dominating the tulip industry. Or how China invented silk production and has dominated it, with a few interruptions, ever since. Or how, despite the fact that the US has lost manufacturing in semiconductors, Americans still own something like 60% of the value chain, such that the majority of money from it goes to American firms. Even in stuff like American cars, the US is still one of only a few nations that makes really competitive exports. And while electronics has shifted, the Netherlands and Britain, some of the earliest pioneers, still make some of the most advanced stuff and profit to the tune of tens of billions. And that's two centuries later!

And when I think of counterexamples, where an industry was "lost," I can usually think of a specific event. Usually a very dramatic, damaging event. France lost its mechanical advantage when it expelled a lot of minorities, which included a disproportionate number of inventors. China dipped below Japan in silk production because of a bunch of civil wars and invasions.

It appears to me that the advantage from getting there first is very difficult to dislodge. And that justifies a lot of investment and racing other people. But what's the opposite view? That industries just move about willy-nilly? That the market isn't significantly winner-takes-all? Because there's a long history (for example) of people trying to make competing car industries, silk industries, etc., and failing.

Expand full comment
thefance's avatar

I've recently realized that the concept of Network Effects has much more explanatory power than I formerly believed, and I think it applies here too.

First-mover advantage exists because you get a head start over rivals in vacuuming up local talent/resources to build a network. Although economists usually refer to network effects as "economies of scale" or "positive externalities of agglomeration".

A network, like a speculative bubble, is self-reinforcing... until it isn't. Rather than thinking of networks as purely eternal or purely ephemeral, it's best to think of networks as bistable. Like a JK flip-flop.

Geographically, network effects are attenuated by large distances. Which is why France produces less than 100% of the world's wine, the world's population isn't all concentrated in a single city, etc.

Under this model, I experience no cognitive dissonance in reconciling "France was the industry leader for centuries until some catastrophe reset the equilibrium" with "Zambia utterly failed to compete with the US and USSR in the space race" and "industries are roughly scattered according to a power law".
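A toy sketch of that bistability (my own construction - an assumed logistic adoption model, not anything from the economics literature specifically):

```python
# Toy network-effect model: adoption share x in [0, 1] with positive
# feedback. It has two stable fixed points (near 0 and near 1) and an
# unstable tipping point at 0.5 -- flip-flop behavior.
import math

def step(x, k=8.0):
    # Next-period adoption as a logistic response to current adoption;
    # k sets how strongly the network reinforces itself.
    return 1.0 / (1.0 + math.exp(-k * (x - 0.5)))

def settle(x0, iters=50):
    x = x0
    for _ in range(iters):
        x = step(x)
    return x

print(settle(0.45))  # ~0.02: below the tipping point, the network decays
print(settle(0.55))  # ~0.98: above it, growth is self-reinforcing
```

A one-time shock big enough to push x across 0.5 (a war, an expulsion of inventors) flips which equilibrium the system settles into, which is the "industry leader for centuries until some catastrophe reset the equilibrium" pattern.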

Expand full comment
WoolyAI's avatar

So, I think the case is:

Markets are rarely winner-take-all; in fact, the entire moral and economic case for markets is that they aren't winner-take-all but instead highly competitive, thus lowering prices and increasing quality.

Instead, while first-mover advantage is a real thing and can be very powerful in certain markets, industries move based on underlying market conditions. So, for example, US manufacturing didn't just move to China: it first moved to Mexico, then from Mexico to China, and a lot of former Chinese manufacturing has moved down to Vietnam, always pursuing a lower-cost workforce and often moving on when the standard of living in one place (China) rises significantly above a neighbor's (Vietnam).

Also, barring something shocking, I don't think first movers "lose" their market, but they're very often greatly diminished. For example, if any country ever won a market, France won the wine market for centuries, and they're still doing fantastic business and making good money today, but... there's simply too much great wine from so many other places. Even ignoring California and Chile and Australia, I've drunk more Spanish and Portuguese wine in the last year than French. France simply doesn't dominate the wine market like it used to, and that's no loss on their part, just a ton of great competitors both stealing their techniques and inventing their own.

Expand full comment
Erusian's avatar

France does still dominate the wine market like it used to, though. It's a third of all wine exports. Italy, another traditional winner, is a fifth. All the other nations you've mentioned are less than Italy alone, and two of them (Spain/Portugal) got their start thousands of years ago.

Individual firms certainly don't survive forever. But regions seem to be much more enduring. Probably due to effects like new firms bidding away talent where the talent already exists. Or clustering near where pre-existing infrastructure decreases start-up costs. Perhaps no individual vineyard in Champagne has survived the centuries, but Champagne wine certainly has.

Expand full comment
WoolyAI's avatar

Alright, I need some quantification here. What exactly do you mean by dominate?

France is less than 20% of global wine production (1). 100 years ago, France "was" global wine production and had been for hundreds of years. Which is not entirely fair: Spain and Italy have always had big domestic markets, and the Germans have always made a few nice whites, but France clearly dominated. Same with exports: the French still have about a third of the market by value, but compared to 100 years ago... I can't find hard figures on 19th century French wine exports, but I struggle to imagine them under 70% of the total market.

Nor is the reason for this complicated or, I think, controversial. The most important factor in growing wine is the climate. France has certainly had the best climate, both overall and in specific microclimates, for growing wine in Europe, and therefore the world, for a long time. It wasn't until relatively recently, with experimentation in Napa and then elsewhere around the world, that people found other climates that could produce wines of the French caliber.

So, again, what do you mean by dominate? If you mean still being in a leading position, yes: they still have (arguably) the best climate for growing wine. If you mean compared to what they had 100 years ago, when they were, let's say conservatively, 70% of the export market, they've certainly declined. If you mean where they would be today if they still had that previous dominance, if they were seriously dominating the largest market (2), then no, they're clearly billions if not a few tens of billions of dollars below where they would have been.

So what quantification is there behind "dominate"? I mean, Greece still makes wine, it's fine, and they were one of the first on the scene. We know France has lost at least 50% of the export market, because 100 years ago, even 70 years ago, most of the countries on the export market weren't even making wine, and now they're ~40-50% of the global export market.

First-mover advantage is a thing, and I don't want to get locked into arguing that it's not, but it's just not that significant in a lot of markets. There are still plenty of people eating Hydrox cookies, and it does good business, but it ain't Oreos. How much of a decline are you thinking of?

(1) https://www.bkwine.com/features/more/global-wine-production-2020/

(2) https://www.winepros.org/wine-consumption-us/

Expand full comment
Carl Pham's avatar

Are you not pointing to the general problem of the role of contingency in history? We wonder whether the history of stuff -- including evolutionary history, economic history, political history -- is governed more by vast tides that produce a given state at a given time, the way a thermodynamic equilibrium depends only on the values of thermodynamic variables and is independent of the history, or whether the present can hinge on a surprisingly small number of decisions in the past[1].

More broadly, we might say that this is the problem of the texture of the fitness landscape for any adaptive optimizing process: is the texture sufficiently rough that the process can get stuck for a long time in local minima, or does it generally reach the global minimum?

I put it this way only to observe that if this is the issue, it's been with us a while and probably won't be solved in the next half millennium or so, is my guess, even in a restricted area like economic history.

----------------

[1] "This, and not otherwise, the world was made. Either something or nothing must depend on individual choices. And if something, who could set bounds to it? A stone may determine the course of a river." (C.S. Lewis, "Perelandra").

Expand full comment
Erusian's avatar

Not precisely. Contingency is usually posed as the opposite of structuralism. So, for example, a contingent view of the Russian Revolution would emphasize the choices of Tsar Nicholas, while a structuralist would say that his actions mattered little or not at all (because structural factors were more important). My point is, whether contingent or structural, conditions tend to be sticky. So, for example, I could believe that Nicholas's mistakes were contingent and that avoiding them could have saved the monarchy, but that the fall of the Russian monarchy is nevertheless sticky. The opposite of my view is that conditions are not sticky. That the fall of the Russian monarchy (whether contingent or structural) doesn't significantly affect the chances of a new one arising. Or likewise that the fact that 90% of car factories were in the US doesn't have long-run effects on how cars are made.

Expand full comment
Carl Pham's avatar

Sorry, I'm not really understanding the distinction. If the fall of the monarchy is sticky, how is this not identical to contingency? Maybe Nicholas would save it, but Alexei would have to make all the right decisions, too, or else it would fall with him and stick, and so on. Contrariwise, if we take what you are calling a structuralist view, how is this not the same as assuming individual decisions matter very little -- that even if Nicholas delayed the fall by some modest time, it would fall anyway soon, because of the structural aspects -- no matter how brilliant Nicholas and Alexei might have been?

Expand full comment
Erusian's avatar

Imagine state A. State A changes to State B. This is empirically observed. There is a hypothetical state C which has not been observed. Structuralism vs contingency asks whether the change from State A to State B was due to specific circumstances in State A or due to general circumstances. In other words, if you change a few circumstances does State B still occur?

This is a separate question from whether State B is likely to become State C. Especially if the framing is "absent something like the causes of the A to B transition." We all agree a Russian Revolution affects the economy.

If State C is simple and could easily occur then State B isn't sticky. Or stable. Pick your term. This does not imply much about State A which might have been very stable right up to the point it wasn't. Or might have been unstable all the time.

Put another way: The question of the stability of a given state (or even all states) is a distinct question from what causes them to destabilize. Though obviously not totally unrelated.

Expand full comment
Carl Pham's avatar

I don't feel this is framed with satisfactory rigor. The model I would use is the one to which I alluded, which is optimization on a rough fitness landscape. We understand very well what it means for the optimization process to always select the global minimum, which can be determined "thermodynamically" -- entirely by its global fitness. This is the model for "structuralist" history: we can determine what happens at each juncture by simply asking where this state lies on some global fitness metric. We don't need to know how we got there, because it's inevitable that we get there. This is quasi-Marxist: we don't need to know whether we are talking about Russia or Germany to know that the proletarian revolution will take place, because it's just inevitable.

Conversely, an optimization process can also get stuck in local minima, if the effective temperature is too low and the landscape too rough. In *that* case while we can be confident that an observed state is a local minimum, we cannot know it is the global minimum, and moreover we need to know the history of the optimization process because that is crucial in telling us in which local minimum it got stuck. This is the model for the "sticky" history you describe, where the exact history is crucially important in understanding how the present state came to be. In this model, it matters very much whether we are talking about Russia or Germany when we ask whether the proletariat have revolted, because the detailed history matters.
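To illustrate, here's a toy simulated-annealing sketch in Python (the landscape and the temperatures are made up; the point is only that the same process gets stuck or doesn't depending on the effective temperature):

    import math, random

    random.seed(1)

    # A rough 1-D "fitness landscape" with a local minimum near x = -0.5
    # and the global minimum near x = 3.7.
    def f(x):
        return 0.1 * (x - 4) ** 2 + math.sin(3 * x)

    def anneal(x, temperature, steps=5000):
        for _ in range(steps):
            x_new = x + random.uniform(-0.5, 0.5)
            d = f(x_new) - f(x)
            if d < 0 or random.random() < math.exp(-d / temperature):
                x = x_new
        return x

    print(anneal(0.0, temperature=0.01))  # cold: stuck near the local minimum
    print(anneal(0.0, temperature=1.0))   # hot: escapes and explores globally

Cold dynamics on a rough landscape are the "sticky", path-dependent history; hot dynamics are the thermodynamic, history-independent kind.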

The advantage of this model is that its properties are very well known, and can be proven mathematically if necessary. It can have zero ambiguity.

But from this point of view, I also only see two possibilities: either history is determined thermodynamically (it's always at the global minimum), or it is contingent on path (it's in some local minimum). I don't see a third possibility, which is what I thought you were saying existed. That's the problem I'm having. What is the *third* possibility, neither structuralist nor contingent, nor a mixture of both?

Expand full comment
Nolan Eoghan (not a robot)'s avatar

What’s the France story?

Expand full comment
Erusian's avatar

France had a fairly long history of being good at mechanical production. They also had a decent sized banking sector and a few other advantages. And France, for complicated reasons, sided with the Protestants against the Catholics in the 16th century despite undergoing its own issues with religious division. This led to France becoming a tolerant state, giving certain liberties to its universities, and allowing Protestants to administer certain Protestant majority areas of France. During this period these religious minorities became a disproportionate share of wealthy merchants, academics, and manufacturing specialists.

However, there was a major civil war in the 17th century. The monarchy won and began a campaign of repression against religious minorities, alongside centralizing political and economic reforms and bringing institutions like universities under control. This led to a significant outflow of Protestants and even Catholic refugees. And they carried with them a lot of their knowledge. Around the same time there was an outflow of Jews from other parts of Europe. These Jews and French refugees were a disproportionate share of innovators in their new homes. Many ended up in England, where they contributed a great deal to British dominance in these sectors. And arguably these reforms (like what people like Torquemada did to Spain, to similar effect) severely damaged both countries' economies.

But that's the kind of thing that shifts an industry. If the US kicked out every auto manufacturer then I wouldn't be surprised if a new car industry pops up wherever they flee to. But before that they'd been around for centuries.

Expand full comment
Deiseach's avatar

The 18th century Irish establishment tried encouraging Huguenot settlement in parts of Ireland in order to encourage that kind of trade and industry and to help create a Protestant hegemony, but it never took off. The settlements remained small, no large industrial base ever developed, and the majority of the Irish remained Catholic. Some of them never even got off the ground:

https://en.wikipedia.org/wiki/Geneva_Barracks

"In 1782, the governing Protestant Ascendancy in Ireland were granted increased self-rule under the British Crown by the British Parliament. This greatly increased the powers of the Irish Parliament at College Green. The subsequent scrapping of the previous trade restrictions imposed by London, which had largely provoked the call for a more powerful and independent parliament in Dublin, led to a wave of grandiose plans for the economic and cultural development of the Kingdom of Ireland. One such plan was for the formation of a 'colony' of artisans and intellectuals to stimulate trade. In 1782, a failed rebellion against the ruling French and Swiss alliance led to a wave of Genevan refugees in Europe. As artisans, they were valued for their knowledge and skills and were invited to settle in their thousands in Ireland. A site in County Waterford was quickly acquired for the anticipated arrivals and named New Geneva, reflecting the origin of the first settlers.

...Although a vast sum of money (£50,000) was allocated to the project, the colony quickly collapsed when the Genevans insisted that they should be represented in the Irish Parliament and govern themselves under their own Genevan-style laws. The project was abandoned when this proposal could not be agreed upon and the site was eventually taken over by the government who began to transform the settlement into a military base. "

Expand full comment
Erusian's avatar

Interesting!

There are two issues with the example as a counter to the thesis, though. Firstly, it's about a century after the major migration. The dragonnades were in the 17th century. And most of the Huguenots ended up in the home counties. My understanding is that the 18th century waves, being smaller, mostly settled near the previous ones.

Secondly, Ireland was fairly devastated at the time (the population still had not recovered from Cromwell) and had weaker property rights. Not so much due to any flaw of the Irish as to the fact that Protestants and Catholics took any chance they could to take land or property away from each other. And even post-Ascendancy, the constant persecution of the majority made for a bad environment. The investment climate was likewise poor, and the people with money were more interested in extracting revenues to spend in England than in investing in trade in Ireland. And it was before the union, so Ireland was still subject to things like import/export duties into England (which simultaneously got a veto over Irish government).

Expand full comment
Nolan Eoghan (not a robot)'s avatar

I know about the Huguenots; what's not clear is that by leaving they destroyed French manufacturing, though they no doubt contributed to the British economy.

As for Torquemada - Spain was just united around the time of his death. It goes on to become a world empire, from a hardscrabble outpost of the Umayyad empire.

Expand full comment
Erusian's avatar

While I haven't done a systematic study, there are a lot of things, such as early steam engines or glass, where you see Huguenots in Britain, the Netherlands, or Germany doing good work. Meanwhile France saw a decline in all these fields despite attempts to attract Italians. Partly because these attempts were only semi-successful. Partly because the dirigiste regime of France meant nationalizations and the like were the order of the day. But losing hundreds of thousands to low millions of skilled workers can't have been a good thing.

Your timing for Spain is off. The Spanish Catholics established dominance in Iberia by the 13th century. The Muslims were reduced to a minor power in Granada. It was centuries later that the Spanish monarchy conquered Granada and expelled the Muslims and Jews, which had absolutely devastating effects on the economy, especially in Aragon, and seriously contributed to the economic crises of the 16th century. These expulsions were not a one-time thing either, and persecution continued throughout the Ancien Regime. For example, one of the biggest revolts happened in the late 16th century when the Spanish banned Arabic, a century after colonization had begun. This was a contributing factor to Spain entering terminal decline in the 17th century. In part because men like Suasso or Spinoza went to the Netherlands or the Muslim world to avoid persecution.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

The French claim is for another response (remember you are saying that French industry collapsed not just that the Huguenots helped Britain).

But in the meantime.

In the 13C Spain was divided between Christian and Muslim powers.

The Reconquista was considered finished after Granada in 1492. It is after this that the Spanish become a world power. The rise of the conquistadors begins that very year.

Torquemada died in 1498. It’s hard to talk about Spanish decline from these dates since it’s clearly from this era that we actually get Spanish ascendancy.

Expand full comment
Erusian's avatar

> (remember you are saying that French industry collapsed not just that the Huguenots helped Britain)

I do remember my claims. It is that SPECIFIC industries collapsed, not the entire French economy. The banking sector, for example, or glass. Others remained because they were not Huguenot dominated. Please do not strawman me.

> In the 13C Spain was divided between Christian and Muslim powers.

At the beginning of the century, yes. By the end there was one small Muslim power (Granada) and four Christian ones (Portugal, Aragon, Castile, and Navarre). Christian dominance would never again be at serious risk. This set off a period called the Convivencia, which was ended by Spanish persecutions in the 15th century.

> The Reconquista was considered finished after Granada in 1492.

The post-facto declaration that this was part of some grand strategy was propaganda. The Spanish didn't even originally intend to conquer Granada, but they found it so weak they decided to take it.

> It’s hard to talk about Spanish decline from these dates since it’s clearly from this era that we actually get Spanish ascendancy.

In the 16th century Spain lost huge swathes of territory in Europe, was defeated by the French and Dutch, and underwent nine bankruptcies as well as inflation and economic decline. No doubt Spain gained a lot of territory in the Americas and was a military power (at least until beaten by the Dutch). But it's a much weaker case that Spain did well economically.

Expand full comment
Urstoff's avatar

is "willy nilly" shorthand for "extremely complex contingent historical factors", because if so, then yes, industries move about willy nilly

Expand full comment
Carl Pham's avatar

"Willy nilly" is a degenerate form of an Old English expression "nylle he, will he" meaning "whether he wills it or not."[1] The conventional meaning is "whether the subject wanted it or not" with the connotation of "the best laid plans of mice and men go oft astray," e.g. "He had registered for an exciting class on AI alignment, but the AI instructor turned out to be owned by ByteDance and missing key training modules on English, so willy nilly he learned a great deal of Mandarin that semester."

It's a great phrase, fun to use.

---------------

[1] https://www.etymonline.com/word/willy-nilly

Expand full comment
Erusian's avatar

No, willy nilly is shorthand for "without an extreme cause that represents a national level to world-historical shift in circumstances."

Expand full comment
Nicholas Halden's avatar

I'm looking for a small town in either CO or WY to hold a writer's retreat. Ideally one near a lot of nature and relatively walkable, and not crazy touristy. Does anyone have any suggestions?

Expand full comment
Julian's avatar

Check out Nederland or Granby, CO. These are both ski towns in the winter, but they draw more Coloradans than out-of-towners. They both have local feels: Nederland a bit quirkier/hippie, Granby more farm/ranch. Nederland is more of a walking place. Both are easy drives from Denver but nestled in the Rocky Mountains.

Idaho Springs might work too, though it's a bit more built up in a truck-stop way and right off I-70.

There are also a number of tucked-away places just north/northwest of Fort Collins. Laporte, Wellington.

Oh! Check out the areas near Rustic, CO. Right on the Cache la Poudre River and in the mountains. There is Red Feather Lakes nearby, which is touristy but quieter than many other places, as there are lots of small lakes, not just one big one.

Expand full comment
Gunflint's avatar

You might try Lake City, Colorado. I spent a couple weeks there a few years ago at a little resort, finishing up a programming deliverable while my wife spent time with her family in town.

Pretty place. Out of the way. Very small town, about 500 people or so, but it has restaurants and amenities nearby. Got a chance to climb Uncompahgre Peak on a day off. A very manageable 14er with no technical climbing involved.

Expand full comment
Melvin's avatar

Silverton, Colorado? I only passed through once, but it has always stuck in my mind. It's probably somewhat touristy but not what I'd call crazy touristy. A really untouristy small town probably wouldn't have the facilities you need anyway.

Expand full comment
LordScarlet's avatar

The town I immediately thought of was Sedona, though it's in Arizona, not CO.

Expand full comment
Nicholas Halden's avatar

I'm interested in a deeper treatment of the following argument:

"Crime only appears to have fallen in things like per capita murder rates over the last 50-100 years because of superior medical technology. Controlling for that, it's risen significantly."

The only two sources of information that are ever cited are this 2002 (paywalled) study: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1124155/

This argues that between 1931 and 1997 the assault rate rose 750%, whilst the murder rate stayed the same.

And this (paywalled) article from WSJ: https://www.wsj.com/articles/SB10001424127887324712504578131360684277812

Which argues that between 2001 and 2011, the number of gunshot victims increased 50%, while murders stayed unchanged.

If these things are true, then it feels like a huge sea change in terms of how the issue is viewed. Yet the only thing talking about this is a small study with fewer than 100 citations! But I find this hard to square with uniformly falling rates of things like violent crimes, especially in the last 20-30 years. How are we meant to square that with the WSJ data (from Hopkins) that gunshot wounds are going up? Would the argument be just that crime-coded reporting has gone down, masking a rise in gunshot hospital admissions? Other data sources seem to diverge from the Hopkins data as well. I'm more interested in a long-form analysis of these studies and others than getting into it here; I don't have any strong opinion myself.

I'm new-ish here and vaguely understand these questions are somehow related to a guy called Yarvin; I'd prefer the analysis of someone who doesn't want to be governed by a king (although I'll happily read anyone's).

Expand full comment
Melvin's avatar

A report on other data contradicting some of that data: https://www.newsweek.com/gun-data-study-628651

> After adjusting for the data reporting problems, the purported increase in nonfatal gun assaults disappeared. The adjusted estimate of nonfatal cases was 41,874 in 2003 and 41,996 in 2012, while homicides numbered 11,920 in 2003 and 11,622 in 2012. The fatality rate held steady at 22 percent

What's the truth? No idea! But the official murder rate in the US approximately halved from 1993 to 2014 (before spiking way back up in 2020), with guns accounting for about 80% of all murder weapons; you'd need the fatality rate of gunshot wounds to have halved in twenty years for that to be explainable without a decrease in gunshot victims, and I think that sort of thing would clearly stick out in the data.
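Back-of-envelope, with invented round numbers just to show the shape of the argument (the ~22% case-fatality figure is the one from the Newsweek piece above; everything else is made up):

    # Toy numbers, not real data: suppose gun murders roughly halve over two
    # decades while the pool of people shot (fatal + nonfatal) stays constant.
    shooting_victims = 90_000
    gun_murders_1993 = 20_000            # ~22% case fatality
    gun_murders_2014 = 10_000            # murders halve, victims unchanged

    cfr_1993 = gun_murders_1993 / shooting_victims   # ~0.22
    cfr_2014 = gun_murders_2014 / shooting_victims   # ~0.11
    print(cfr_1993 / cfr_2014)           # 2.0: lethality would have to halve

If trauma care really had cut gunshot lethality in half in twenty years, that would be a celebrated, well-documented result, not something you'd have to infer.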

Expand full comment
Nicholas Halden's avatar

Thanks for this. I'm convinced that the number of gunshot victims between 2003-2012 stayed about the same (and agree that a 50% increase never passed the smell test to begin with, because there weren't any obvious major medical advancements in that time that could create such a huge change).

I'm still very unsure as to the state of gun violence now vs 100 years ago vs 50 years ago, though. I wasn't able to find any "debunking" type articles about the 2002 study, so I guess I lean that crime has indeed increased for whatever reason (gun availability? new laws? immigration? modernity in general? no idea).

Expand full comment
Theodric's avatar

1997 was a long time ago; violent crime started going up sharply in the 70s and peaked in 1991, falling significantly since then (it's been trending up slightly in the last decade but is still much lower than the 90s).

Expand full comment
Level 50 Lapras's avatar

Recently, someone on DSL suggested playing against the chess computer with both of your starting knights replaced with queens (so you have a 12-point advantage). It's a fun challenge, since it seems to be just enough that you can sometimes win, but not very often or easily.

Playing this way is a very different experience from the casual human chess games I'm used to. Human chess is just a matter of playing a lot of threats and hoping that your opponent doesn't see one, while preventing them from doing the same to you. Meanwhile, the chess computer will never ever make a mistake.

Playing against the computer is basically a matter of just surviving and trading. You start out with a 12 point advantage, and your goal is to make as many neutral trades as possible (or even just to make trades where you don't lose *too much* material) and reach the endgame, where you'll inevitably win if you still have even a small material advantage. However, the computer is doing its best to prevent that and will also constantly try to swindle you out of pieces, often in sneaky ways. It is clever and very unrelenting.

It also completely throws out the opening book. So far, my favorite opening is 1.f3, which you would never ever do in a real chess game.
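If anyone wants to try it, here's a minimal setup sketch using the python-chess library (assuming you have it installed; the FEN just swaps White's knights for queens):

    import chess

    # Standard start position, except White's b1/g1 knights are queens:
    # two queens (9 pts each) replace two knights (3 pts each) = +12 material.
    QUEEN_ODDS_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RQBQKBQR w KQkq - 0 1"

    board = chess.Board(QUEEN_ODDS_FEN)
    board.push_san("f3")   # the offbeat 1.f3 mentioned above
    print(board)

You'd still need to hook it up to an engine (e.g. via chess.engine and a local Stockfish binary) to actually play the odds game out.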

Expand full comment
Carl Pham's avatar

It's an interesting possibility, that human beings might figure out how to beat computers at some of these games, cf.:

https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/

This is kind of why humans are awesome and so far computers aren't. When presented with novel challenges, even situations that look impossible, they can imagine entirely new categories of approach and turn some problems previously thought insoluble into soluble ones.

I mean... if I were a brand-new superintelligent AI, I would be worried sick over those unaligned and incredibly inventive human beings.

Expand full comment
Dirichlet-to-Neumann's avatar

I'm pretty sure Carlsen and in fact most current top 10 players would easily defeat 1997's Deeper Blue, just like they would easily defeat 1997's Kasparov. Just imagine Deep Blue trying to beat the Berlin Defense as white and getting h4-h5-h6ed in the Gruenfeld as black.

Expand full comment
beleester's avatar

That strategy was discovered with the help of an adversarial AI, so I'm not sure if that's proof of human ingenuity, proof of the power of AI, or just another demonstration that current AIs are weak to adversarial training.

Expand full comment
Carl Pham's avatar

Yes, well, Napoleon won the Battle of Austerlitz with the help of 60,000 schmos carrying guns, but I'm inclined to credit the victory to Napoleon anyway, and not wonder whether it might have happened spontaneously without him.

Expand full comment
beleester's avatar

This is more like you're losing a battle with Napoleon, so you summon up Sun Tzu to command your side of the battle, but Sun Tzu is speaking Chinese so you need a translator to explain his strategy to everyone else in a way they understand. Like, the human is playing an important role here, but I'm not sure I'd call them the strategic genius of the pairing.

Expand full comment
Carl Pham's avatar

So you say, but I disagree. There is no evidence that, in the absence of another AI, no human would ever have figured it out on his own, and I consider that unlikely. Human beings have a very long history of being able to get stuff done using whatever tools are at hand. I consider this no more remarkable than the fact that a carpenter can frame up a house a lot faster with a circular saw than a hand saw -- but he could do it with a hand saw if he had to. It's just cheaper to use the power tool.

Expand full comment
beleester's avatar

The question is "does the tool do better than a human at this task?" and your answer is "the human is better, because he can use the tool to help himself." That feels like a tautological answer to me - the answer is only "yes" because you've added the same tool to both sides of the competition. It's like arguing that a human can move as fast as a car because the human could be driving a car.

Also, I don't think it's possible to prove that a human would *never* have figured out a strategy. Chess and Go are deterministic, so *in theory* a human could work out every possible tactic with nothing more than pencil and paper, but in practice they won't be able to explore very far without computer assistance.

Expand full comment
Grizzly's avatar

Huh very cool

Expand full comment
Paul Botts's avatar

This is an interesting prediction that Musk may finally be launching the social-media equivalent of a run on a bank, and the catalyst for it is Substack:

https://www.calmdownben.com/p/fine-i-admit-it-elon-musk-is-ruining

Musk appears to also be losing the free-speech absolutists, which matters for Twitter as a business probably not at all but does matter to me just because I am one of those:

https://www.thebulwark.com/so-much-for-elon-musk-free-speech-warrior/

Expand full comment
quiet_NaN's avatar

While I think it is not unreasonable that internet platforms can regulate what kind of content they allow (if Facebook does not allow porn, that is not a big problem, imho), I think the core of my objection here is that this is anti-competitive. Like if 1990s Microsoft decided to make it impossible to install Netscape.

Few things scream "regulate me" quite like trying to use your quasi-monopoly in one sector to obtain an advantage in another sector.

Expand full comment
Carl Pham's avatar

Could be. But my default assumption is that the guy worth $180 billion knows infinity times more about business than any combination of Substack scribblers. Mind you, I'm also fully persuaded his public speech has purposes *other than* accurately signaling his business decisions.

Expand full comment
Paul Botts's avatar

"the guy worth $180 billion knows infinity times more about business than any combination of Substack scribblers" -- he certainly knows a lot more than most people about certain types of businesses. Does that automatically mean he correctly understands the type of business that Twitter is?

E.g. I have an ancestor who helped found Western Union and take telegraph wires to scale. That made him extremely rich quite quickly... and a decade or so later he lost the majority of the new family fortune in the next big thing, railroads.

If being very good at one type of enterprise were the same as being very good at all of them, I would be typing this during one of my timeshare weeks on the family island someplace in the Caribbean. Sadly, no.

Expand full comment
Carl Pham's avatar

Yes I'm aware any broad general first hypothesis requires caveats and footnotes, and would certainly be refined as one dug into the specific facts of a specific situation.

But my default assumption is that most of the people who enjoy saying "Aha! Maybe *this* time he's bitten off more than he can chew! Hubris, meet Ate!" are just suffering from quotidian envy and don't have any special insight.

Expand full comment
Paul Botts's avatar

Okay, but I infer from this comment that you didn't read Dreyfuss's piece.

He is not such an example. Indeed what he's doing there is reluctantly dropping his public (and repeated) skepticism of the idea that Musk would be wrecking Twitter as a business.

Dreyfuss also brings specific firsthand experience to the question, being an example of the sort of high-traffic Twitterer that has generated a large fraction of Twitter's content throughout its history. (Not only assembling a lot of followers but driving lots of engagement i.e. vast numbers of commenters on his posts.)

Expand full comment
Carl Pham's avatar

No, I didn't. Generally speaking, as an empiricist I have little use for theories, plus I don't have any strong need to form a definite opinion about Elon Musk's business acumen, not owning a share of TWTR. And I'm pretty skeptical that sufficient data to make a good judgment is in the possession of someone who pontificates online for a living. Basically, I figure anyone who groks the biz of social media enough to have a shrewd opinion on whether Musk is doing clever or insane things is using that precious knowledge to make a fortune himself. You'll recall the old aphorism "those who can, do. Those who can't..."[1]

So my interest in the piece didn't rise to a level sufficient to read it. It still isn't, honestly, unless you're going to tell me there is some new data there that is surprising or interesting in some way.

I agree it's an interesting abstract question whether Musk has decided that high-traffic tweeters aren't really his bread and butter. I mean, we already know Twitter's old business model was fucked and they were headed for the rocks. The conventional wisdom is, well, cut costs and get back to your core strengths. What *are* Twitter's core strengths? What actually makes them money? I realize one theory is that it's people who tweet a storm, or start controversies, and so on -- but one would need a lot of confidential data to prove that, and for all I know it's not true. Or maybe Musk has some different vision for how it can make money, and is realigning the company to service some other type of user interaction. I have no idea, and I doubt anyone more than 2-3 minds away from Musk does, either. The guy is not known for presenting clear vision, he kind of specializes in obfuscating his intentions.

-----------------------

[1] It's kind of how I feel about financial news analysts. There's no way someone who earns $65,000 a year hitting a weekly deadline really knows why XOM or some Blackrock REIT went up or down 5% today, because if they *did* they wouldn't be scraping by as a wage slave writing about it, they'd be getting rich.

Expand full comment
Paul Botts's avatar

Heh! That's a lot of words for "I don't need to read a thing to confidently explain how wrong it is"....some quality internetting right there.

Expand full comment
Sebastian's avatar

> What *are* Twitter's core strengths? What actually makes them money?

Did it ever make money? You can't go back to your core strengths if you've never been there.

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023
Expand full comment
Paul Botts's avatar

Musk did a funny over the weekend. He read, apparently for the first time, Twitter's longstanding posted guidelines regarding whether media outlets should be labelled as "state-affiliated media."

The first paragraph of that guideline said that “outlets where the state exercises control over editorial content through financial resources, direct or indirect political pressures, and/or control over production and distribution” should be labelled by Twitter as "state-affiliated media". Musk apparently came across that and decreed that NPR should be so labelled, and so it was. When NPR protested, Musk pointed to that first paragraph of Twitter's guideline and tweeted, "seems accurate".

He apparently had not read the _second_ paragraph of that Twitter guideline, which specified that "state-financed media organizations with editorial independence, like the BBC in the UK or NPR in the United States, are not defined as state-affiliated media for the purposes of this policy."

When that part of the Twitter guideline was pointed out (on Twitter, naturally), Musk responded with a poop emoji and then had the reference to "NPR" deleted from that paragraph of the guideline.

Then a day later, without any public comment, he removed the "state-affiliated" label from NPR and replaced it with a newly made-up one, "government-funded". No such label is mentioned in the posted Twitter guideline about labeling media organizations. (And it apparently is news to Musk, as it is to a lot of people, that public funding covers only 1% of the annual budget of NPR.)

"Flailing around" seems like a good description, yep.

Expand full comment
User's avatar
Comment deleted
Apr 11, 2023 (edited)
Expand full comment
Mr. Doolittle's avatar

There's a lot of dumb on both sides of this situation. NPR apparently had prominently talked about how important government funding was to their operation on their home page. When Musk pointed this out, they deleted the reference.

Nobody is coming out of this looking good. I think Musk looks worse on the merits and picked a stupid fight for almost no potential (let alone realized) gain. That's not how I would have spent $40 billion, but I guess that's up to him.

Expand full comment
Paul Botts's avatar

I swore off NPR 20 years ago myself (literally "swore off" actually), but regardless of their many flaws they are independent. No government has any role in appointing their board or CEO, they've gotten into legal fights with both Democratic and Republican administrations, etc.

And the point about the Twitter guidelines thing is not that Musk can't change them. It's that _his_ first justification, which he volunteered, for a sudden random change was to point to a Twitter guideline which he obviously hadn't read past its first paragraph.

Anyway today's updates are:

-- in a BBC interview broadcast today Musk volunteered that he went through with purchasing Twitter only because U.S. courts forced him to, that he'd based Monday's follow-up idea of a new "publicly-funded" label on a Wikipedia article he'd come across, and that he currently sometimes spends the night on a couch in the Twitter offices. He also said vaguely that Twitter will "soon" be breaking even "if current trends continue", while declining to provide any specifics on that prediction.

-- NPR's CEO announced this morning that all 52 of NPR's Twitter accounts (each of their shows has its own account etc) have gone silent until further notice. That will remain the case, the CEO said, regardless of whether Musk again changes or even drops the new label on NPR within Twitter: "At this point I have lost my faith in the decision-making at Twitter," John Lansing said. "I would need some time to understand whether Twitter can be trusted again." The largest of NPR's Twitter accounts has 8.8 million followers.

-- that announcement is about official NPR accounts; NPR journalists and staffers with individual accounts, Lansing said, are "free to decide for themselves" whether to remain on Twitter.

-- so far three member NPR stations, in Santa Monica, Pittsburgh, and eastern Kentucky, have also shuttered their Twitter presences. Others are reportedly preparing similar announcements.

Expand full comment
Carl Pham's avatar

I doubt very much Musk is playing any long social game, I expect he is simply attempting to earn money, because he likes doing that. How the tweets and public bombast helps or hinders this I have no freaking idea.

If I was trying to suss out whether he was succeeding *as a businessman* I would have to look at the actual decisions he's made since buying Twitter. Some of them are clearly bog-standard for buying a business that looks like TWTR did before the acquisition (unusually high costs, sclerotic, cramped by Founders' Effects), e.g. firing a lot of people, closing down unprofitable vanity projects, trying to refocus on core competencies, trying to figure out where your money actually comes from and focus on those needs.

It does seem somewhat clear to me that Twitter, Inc., before the acquisition had far too much focus on vanity ideological projects, and not enough on just bringing in $revenue > $costs. So *some* kind of drastic housecleaning was inevitable, if bankruptcy were to be avoided.

But whether he's making those decisions right or not I have no freaking clue, as I don't have access to the appropriate data, I don't understand that business in particular, and I'm not a highly skilled businessman anyway. On the other hand, I have not in my lifetime seen someone as skilled as Musk at pulling golden eggs out of shabby-looking geese everyone else overlooks. Who the heck predicted you could make good money competing against ULA, or Lexus, or against Verizon by putting up a zillion satellites? I would not take the short side of a bet against his judgment, although I would absolutely agree the possibility that he screws it up is not zero, in fact is probably pretty high -- high reward goes with high risk all the time.

With regard to your final comment, I agree insofar as we are talking about ordinary people, people who are not more than 20% smarter or dumber than you in a particular field. But when you get well outside that range, this is no longer true, and falls prey to a sort of inverse Dunning-Kruger effect, which is that genius is indistinguishable from madness among people insufficiently smart to grasp the former. This is why in my business what I call quantum woo exists, hucksters and bullshit artistes who sell the gullible Just So stories on what quantum mechanics tells us about The Nature of Everything. Ordinary people just can't tell the difference between plausible bullshit and the real deal, since both are equally weird and incomprehensible. In the same way, in business at a certain level it's impossible to tell the difference between a brilliant idea and a delusional one -- which is why shit like Theranos and FTX happens, because people just can't tell the difference between brilliant quirky innovation and dazzling nonsense.

Expand full comment
Paul Botts's avatar

Heh!

I may have to quote that last paragraph, here and there.....

Expand full comment
Paul Botts's avatar

(Trebuchuet's last paragraph, not Pham's. Really dislike the way this system displays comment chains.)

Expand full comment
Carl Pham's avatar

It's clear you were replying to the artillery, on account of your comment being indented at the same level as mine. Were you replying to me, it would be indented one level more. At least, that's what it looks like on the desktop.

Expand full comment
Paul Botts's avatar

Yea I'm on a desktop now and that is how it is looking. Looked different yesterday so I dunno.

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023
Expand full comment
Fang's avatar

I don't know that this counts for what you're describing, but my understanding is that Matt Taibbi was pretty far on the maximalist freedom-of-speech-on-Twitter side, and he has left Twitter because of Elon's decisions: https://www.racket.news/p/the-craziest-friday-ever

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023
Expand full comment
Paul Botts's avatar

That's who I was referring to, yes.

Expand full comment
Paul Botts's avatar

UPDATE: Musk's tantrum over the weekend included shadowbanning all of Taibbi's content on Twitter.

Quoting from Techdirt.com as posted on Monday:

"Mashable’s Matt Binder solved the mystery and revealed, somewhat hilariously, that Taibbi’s acount appears to have been “max deboosted” or, in Twitter’s terms, had the highest level of visibility filters applied, meaning you can’t find Taibbi in search....

Early 'Twitter Files' [from Taibbi] revealed that Twitter had long used visibility filtering to limit the spread of certain accounts. Musk screamed about how this was horrible shadowbanning… but then proceeded to use those tools to suppress speech of people he disliked. And now he’s using the tool, at max power, to hide Taibbi and the very files that we were told “exposed” how old Twitter shadow banned people. This is way more ironic than the Alanis song.

So, yes, we went from Taibbi praising Elon Musk for supporting free speech and supposedly helping to expose the evil shadowbanning of the old regime...to Taibbi leaving Twitter, and Musk not just unfollowing him but shadowbanning him and all his Twitter Files. In about 48 hours."

Expand full comment
Dan Elton's avatar

At long last I have just published two articles on Long Covid that I have worked on for many, many hours over the past few months:

"Psychosomatic contributors to Long Covid suffering"

https://moreisdifferent.substack.com/p/psychosomatic-contributors-to-long

"The "false fatigue alarm" theory for Long Covid fatigue"

https://moreisdifferent.substack.com/p/the-false-fatigue-alarm-theory-for

I also have an appendix on possible biological causes for Long Covid such as Epstein Barr re-activation and persistent SARS-CoV-2 virus: https://moreisdifferent.substack.com/p/psychosomatic-contributors-to-long#%C2%A7appendix-non-psychosomatic-contributors

Expand full comment
Carlos's avatar

A mildly schizo post, should you be in the mood for one of those.

The Truth in the Mirror

https://squarecircle.substack.com/p/the-truth-in-the-mirror

Featuring speculation on the opposite of a horror movie, a music video meant for Jesus, a haunted comic book, and the revelation that the Necronomicon actually does exist in our world.

Expand full comment
Nancy Lebovitz's avatar

I have a notion that AI of some sort could come out of the malware vs. security fight, since both sides would want something that can react quickly -- not necessarily involving humans -- to changed situations. Reasonable? Yet another angle on threats?

It wouldn't necessarily be anything that people could talk with. It might not have much spatial understanding.

If it or they were in a novel, they would eventually take the conflict to a perceptible human level. If a computer is a problem, why not destroy it?

I may well have been influenced by _The Moon Is a Harsh Mistress_, in which a computer program becomes conscious because more and more functions for city services get added on to it.

Expand full comment
Kenny Easwaran's avatar

This ties into an idle speculation I have, that the animal immune system could have become the seat of consciousness, but the nervous system did instead (at least, as far as we know). There's a surprising complexity to the immune system that I've only started getting glimpses of since the pandemic.

Expand full comment
Nancy Lebovitz's avatar

It could have a sort of consciousness we don't understand. It's at least got reactions and memory.

Expand full comment
Ch Hi's avatar

I don't think "come out of " is the right phrasing, but definitely "be used in", and some groups may go actively prospecting for advanced techniques/routines/data bases.

The only real limit here is when considering "state backed" malware vs. security. I can see it possibly first appearing in those places, but consider it still quite unlikely.

OTOH, if you're just talking about the "final step" kind of scenario, where those who understand the problem are hesitating, then perhaps. But that's true of all the "specialized applications". If the people who are in charge of altering the code to fit the purpose don't understand why some restriction is present, they're quite likely to remove it.

Expand full comment
beleester's avatar

I think most computer security stuff is not very amenable to AI as it stands. Security generally relies on hard provability - "It is absolutely impossible to decrypt this file without the key or the ability to factor stupidly large numbers" - and an LLM's squishy "eh, this code generally looks like what a human would write" reasoning doesn't sound helpful for that.

But I do wonder if AI could have a place in *monitoring* for suspicious activity. Like "hey, someone seems to be copying every file in our database... I should probably put a stop to that and ask what's going on."

Expand full comment
quiet_NaN's avatar

> Security generally relies on hard provability - "It is absolutely impossible to decrypt this file without the key or the ability to factor stupidly large numbers"

In an ideal world (or dath ilan, perhaps), it would. In the real world, we deploy the RSA crypto system without any proof (to my knowledge) that the underlying problem (integer factorization) is indeed hard (on a classical computer).

And crypto(graphy) is probably still the best case, at least people try really hard to break proposed encryption schemes before they are deployed.

Code which is formally verified is a null subset of the set of code written. If you are lucky, the devs bothered trying to run some code analysis or some fuzzer. If you are unlucky, the deployment contains some dependency hell which uses log4j with a known vulnerability.

Rice's theorem states that any nontrivial semantic property of code is undecidable. One of these properties is "does evil stuff". This is why a virus scanner which would decide if a program is safe to execute is conceptually impossible. A computer can't do it, a quantum computer can't do it, an unbounded superhuman AGI can't do it. (A Turing oracle could, though.) The best one can do is test if the program looks similar to known evil programs, which is what virus scanners do. Think of a restricted area with a fence which probably has holes. So you hire guards to patrol and stop people who look like trespassers. Using AI basically means you also have shoggoths patrolling as well. Anyone who is trying to tell you that virus scanners, AI-powered or otherwise, can keep you safe is lying through their teeth. The best they can do is protect you from some threats (which may be a good idea, defense in depth and all that). See https://xkcd.com/463/
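For the curious, the classic reduction sketched as Python-flavored pseudocode (is_evil is the hypothetical perfect scanner whose existence is being refuted; run and do_evil inside the generated source are hypothetical names too):

    # Sketch only: if a total, correct is_evil existed, we could decide the
    # halting problem with it, which is impossible.

    def is_evil(source: str) -> bool:
        """Hypothetically decides whether executing source ever does evil."""
        raise NotImplementedError  # no total, correct version can exist

    def trap(program: str, inp: str) -> str:
        # Build a program that does evil if and only if program halts on inp.
        return (
            f"run({program!r}, {inp!r})  # simulate; may loop forever\n"
            "do_evil()                   # reached only if simulation halted\n"
        )

    # is_evil(trap(p, x)) would answer "does p halt on x?" -- contradiction.

(Strictly speaking this is the halting-problem core; Rice's theorem generalizes it to every nontrivial semantic property.)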

So unless/until AI gets to the point where it can perfectly capture a business process as a set of formal requirements (you'd have to trust the AI on that) and then implement them with a formal proof of correctness (you could verify the implementation with a non-AI automated theorem verifier), I do not think that AI will bring a computer security revolution. (Obviously predictions are hard.)

Expand full comment
Carl Pham's avatar

Modern AIs, i.e. neural net models, would seem admirably suited for threat detection and mitigation, because they're all about pattern recognition, and because you can do it in a reverse kind of way -- train it on all legitimate patterns by having it watch what normally happens, and then have it raise an alarm at a strange pattern. I'd be kind of surprised if there wasn't something like that out there already.
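In fact the off-the-shelf version of this is standard anomaly detection; a minimal sketch with scikit-learn (an IsolationForest rather than a neural net, but the same train-on-normal, flag-the-strange pattern; the session features here are invented for illustration):

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Pretend features per network session: [megabytes_out, files_touched]
    normal_traffic = rng.normal(loc=[50, 5], scale=[10, 2], size=(1000, 2))

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_traffic)          # learn what "normally happens"

    suspicious = np.array([[5000, 900]])  # bulk-exfiltration-looking session
    print(detector.predict(suspicious))   # [-1] means flagged as anomalous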

Expand full comment
Nancy Lebovitz's avatar

I don't know how they do it, but a bank will react (sometimes?) if you seem to have made an unusual transaction.

Expand full comment
Nancy Lebovitz's avatar

I'm not sure that LLMs are the only proto-AI in play.

Expand full comment
Deiseach's avatar

"hey, someone seems to be copying every file in our database... I should probably put a stop to that and ask what's going on."

Nothing suspicious here, just the regularly scheduled backup off-site. Until it's not.

I'd be more concerned about AI that fell for the scams about "I have hacked into your camera and microphone and recorded you jerking off to fetish porn, pay me X bitcoins to this address", either really paying them off, or freezing everything for a wild-goose chase for non-existent malware. Thanks Security, now I can't do a lick of work and the boss is emailing me every five minutes about why I'm not getting them that report!

Expand full comment
Vadim's avatar

I'm thinking about ways to better incentivize research, instead of the current broken funding system in academia. I'd like to throw my ideas at someone, but I don't know where to find people interested in that sort of stuff.

(One thing I'm thinking about is problems where there is a clear distinction between a suggestion step and a verification step, and where the former is much more challenging than the latter, like with mathematical proofs in formal languages, where the verification is trivial but finding a proof in the space of all strings is difficult. I think a nice way of throwing money at improving proof-search would be a platform with bounties for finding proofs of specific theorems; it would incentivize people to come up with better *and* faster suggestion processes. It feels like a basic idea, but I haven't found anything like it that's already been done.)
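To make the suggestion/verification asymmetry concrete, here's a toy SAT example (the formula is arbitrary): checking a claimed solution is one linear pass over the clauses, while finding one is worst-case exponential search. A bounty platform only ever needs to run the cheap verify step; the market competes on the expensive search step.

    from itertools import product

    # CNF formula as clauses of integer literals: k means variable |k| is
    # true if k > 0, false if k < 0.
    formula = [[1, -2], [-1, 2], [2, 3]]

    def verify(assignment):
        # Cheap step: one pass over the clauses.
        return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in formula)

    def search():
        # Expensive step: worst-case exponential brute force.
        for bits in product([False, True], repeat=3):
            a = {1: bits[0], 2: bits[1], 3: bits[2]}
            if verify(a):
                return a
        return None

    print(search())   # {1: False, 2: False, 3: True}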

Expand full comment
Melvin's avatar

I think the most sensible reform you could make to the grant system would be to give every full-time professor (at a respectable university in a respectable hard science subject) a default grant each year -- enough to hire a postdoc or a couple of students, attend some conferences, and generally do some work (plus pay themselves their normal salary during the summer months!). Once these "default" grants are distributed, you can reserve the grants process for big-ticket projects.

Too much of professors' time is spent trying to compete for these basic "just getting by" grants; eventually just about everyone gets one anyway but it's a massive time sink. It also distorts the sort of work that happens, since you need to ensure you're doing the kind of work that sounds attractive to grant bodies and also can't fail, which often means doing low-impact projects in whatever the current hot field is.

I think the question "how do we incentivise research" is the wrong question; scientists love to do research and don't need to be incentivised, they just need to be able to do it.

Expand full comment
Ethan's avatar

This seems like a great suggestion to me. One potential catch is that it may not be compatible with tenure - there needs to be some verification that the professor is worth continuing to spend money on at some regular interval (even if every 5 years or so, maybe synced with sabbaticals). It could be that indefinite tenure without baseline funding is still an option for professors for whom being affiliated with the university regardless of the whims of funding agencies is more important than having consistent funding (I'm thinking more of humanities professors here).

Expand full comment
José Vieira's avatar

I'm tempted to not even care about such verification. I'd guess that fewer hoops would lead to net better research even if some abused the system (especially if most were still forced to contribute by teaching anyway).

This is a great suggestion.

Expand full comment
Carl Pham's avatar

Writing down an algorithm for making funding decisions strikes me as dumb. It's a profoundly complicated abstract decision, and I can't imagine any kind of rigid decision-making tree would improve anything.

The only thing that comes to my mind is that the amount of human judgment that is available to go into the process is too limited, which results in people making instinctual decisions ("fund the proposal from the guy we know well at Harvard, don't fund the n00b from South Dakota State" and "fund the work on the well-known theory that everyone agrees might be sound, don't fund the weird idea on which all of 3 papers have been published, all by the same team"). This isn't a *terrible* way to do things -- those prejudices exist for good reason -- but it's less efficient than if there were more thought and care by more people that went into the decision.

But reviewing grant proposals is a tedious and thankless task, for the most part, and managing grant programs is the kind of thing that only appeals to people with a lot of patience for bureaucracy. I would be in favor of some kind of changes that encourage more people to spend time and energy on it, for some kind of more positive reward. Maybe just let people put skin in the game: you can place a side-bet when you review the proposal -- bet that it will succeed or fail in a particular way, for a particular reason, over the typical 3-5 year span. At the end of that time, you collect real money based on whether you won the bet. I don't know that it would need to be much, maybe $1000 for a $250-500k grant proposal, $5-10k for over a million dollars. That wouldn't siphon much off from the whole program, but I bet it would get people a lot more interested in doing the reviewing, and really digging into whether they thought it would work or not. (You'd have to strictly limit rewards for betting against proposals, to avoid the idiotic effect of people seeking out unusually bad proposals to study. Maybe you can only bet on renewals, or only against a famous person or institution.)

As a side benefit, by keeping track of who won and what and in which area, program managers would build up a much clearer picture of who was, and wasn't, a good proposal reviewer.

Expand full comment
Nancy Lebovitz's avatar

One angle I like is that all the credit going to the first discoverer inhibits cooperation. I think this is true, but I have no idea what can be done about it.

Expand full comment
Moon Moth's avatar

And replication needs a big status boost too.

Expand full comment
Nancy Lebovitz's avatar

In general, we need to give more respect to maintenance.

Expand full comment
Moon Moth's avatar

That too. It falls under the big category of stuff where, "I would have to do this myself, except I'm indirectly paying someone else to do it, which is great! They're actively making my life better!"

Expand full comment
Atka Scott's avatar

Sounds a bit like InnoCentive?

Expand full comment
Vadim's avatar

Didn't know about them, thanks! They are similar in that they are a platform for declaring bounties on things, but I don't think they specialize in problems where verification is the simple, easy-to-program step. I guess one might think of bitcoin mining as a similar situation: there is intense competition for being the fastest at solving a specific toy mathematical problem, because there is instant verification with a large reward. It's less a contract between a company and a crowd-worker, and more a general protocol.
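
A toy proof-of-work captures the asymmetry (a sketch only, not the actual bitcoin protocol, which hashes block headers against a subtler difficulty target):

```python
import hashlib
from itertools import count

def verify(data: bytes, nonce: int, difficulty: int = 4) -> bool:
    """Cheap step: a single hash tells you instantly whether a solution is valid."""
    digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

def solve(data: bytes, difficulty: int = 4) -> int:
    """Expensive step: brute-force search for a nonce that passes verification."""
    for nonce in count():
        if verify(data, nonce, difficulty):
            return nonce

nonce = solve(b"toy block header")         # slow, competitive
print(verify(b"toy block header", nonce))  # instant -> True
```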

Expand full comment
Kaspars Melkis's avatar

For what it is worth, a new study of masks in UK hospitals found no evidence that masks made any discernible difference:

https://www.medscape.co.uk/viewarticle/mask-wearing-hospitals-had-little-impact-covid-19-omicron-2023a100076l

As with all studies, this one has certain limitations, but the accumulating evidence makes an even stronger case that wearing masks did not help.

Hospitals are the only remaining spaces in the UK with a mask mandate, and even there it was only lukewarmly enforced. When I visited the hospital a week ago, most medical staff had no masks. The change apparently happened with the publication of this study.

Expand full comment
Kaspars Melkis's avatar

If only it was a Canadian government...

Expand full comment
Dunstan Ramsay's avatar

I find the discussion about this study frankly bizarre. On a mechanistic level, it seems obvious on priors that masks "work" in the same way that covering your mouth with your hand "works" or turning away when you cough "works." At a purely probabilistic level, fewer viral particles escaping your respiratory tract means a lower chance of infection for the people you interact with.

The pertinent question is *how much* they work, yet this is a question that nobody seems really to be debating, further illustrating the utter statistical poverty of public debates that hinge on statistical questions. The Economist has a nice visualization this month of the confidence intervals around the different RCTs included in the Cochrane analysis (https://www.economist.com/graphic-detail/2023/04/04/a-new-study-of-studies-reignites-controversy-over-mask-mandates). It makes it easy to see what I think was obvious to start with, which is that the best-powered study (the Bangladesh RCT) *does* detect a real effect and that this effect is *very small* and that studies with null results were *insufficiently powered* to detect the kind of effect we see in Bangladesh.
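
For a back-of-the-envelope sense of the power problem, here's a rough two-proportion sample-size calculation; the event rates and effect sizes below are illustrative, not the actual parameters of any of these trials:

```python
from statistics import NormalDist

def n_per_arm(p_control: float, rel_reduction: float,
              alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sample size per arm needed to detect a relative reduction
    in an event rate (standard two-proportion normal approximation)."""
    p_treat = p_control * (1 - rel_reduction)
    p_bar = (p_control + p_treat) / 2
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha / 2) + z(power)) ** 2
         * 2 * p_bar * (1 - p_bar) / (p_control - p_treat) ** 2)
    return round(n)

# A 10% relative reduction in an 8% infection rate needs enormous arms:
print(n_per_arm(0.08, 0.10))  # ~17,000 per arm
# A 50% reduction would show up in a far smaller study:
print(n_per_arm(0.08, 0.50))  # ~550 per arm
```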

I think a good-faith public debate on this topic would center on whether mask mandates are reasonable to impose given the kinds of effect sizes we have justification to expect. I think both sides of this debate could make reasonable arguments. In expectation, mandates will save some lives. But it's a small number of lives, and the inconvenience is massive. But it's just a minor inconvenience — and so on and so forth.

Expand full comment
John Schilling's avatar

As always, which kind of mask are you talking about?

Military NBC protection masks, and full-face SCUBA masks, will obviously provide full protection. Batman masks and hockey masks will obviously provide no protection.

A big part of the problem is that almost all of the studies of masking, particularly the studies available in early 2020 when the critical decisions were being made (and carved in stone), were based on N95 respirators or at least surgical masks actually designed for this sort of thing. But virtually all of the masks actually available to the general public in North America and Europe were crappy poorly-fitted improvised cloth masks. There was no evidence for the effectiveness of such masks; indeed the only specifically on-point study available at the time indicated that they were probably worse than useless(*). But the mask mandates all said cloth masks were satisfactory, as did all the celebrities who recorded PSAs telling people to mask up and all the Karens scolding people who didn't, so that's what people wore. And in far too many cases, kept wearing even after better masks became available, because they were more comfortable and stylish and they were Officially Good Enough.

* It was a weak and poorly-controlled study, so don't read too much into that. Later studies put the effectiveness of cloth masks in slightly positive territory but with error bars crossing zero.

Expand full comment
Nancy Lebovitz's avatar

One hard question is how well people wear masks. Ideally, the fit is good enough that the mask moves with the person's breath. My impression is that almost no one masks that carefully.

Expand full comment
Purpleopolis's avatar

"The pertinent question is *how much* they work,"

No, the pertinent question is 'do they work well enough to justify the punishment meted out to non-wearers?'

Expand full comment
Kenny Easwaran's avatar

That of course depends on what "the punishment meted out to non-wearers" was. In most cases, as far as I can tell, that was a stern comment and glare.

Expand full comment
Eh's avatar

Weren't people kicked off airplanes for not wearing one?

Expand full comment
Kenny Easwaran's avatar

That does sound right now that you mention it. But that's why it's important to mention the different treatment in different contexts, so that people can have a sense of what you're really talking about, rather than just some nebulous idea that "in general" people were treated one way or another.

Expand full comment
Purpleopolis's avatar

The first seven google hits for "arrested for not wearing a mask" were all different incidents.

"In general" is not a thing that people can be. It seems downright bizarre and selective to justify any particular abuse by saying it's not the typical situation.

Expand full comment
Kaspars Melkis's avatar

I don't think there is currently a big controversy about masks in the UK. We have mostly agreed that they are not needed. The public was never very compliant on public transportation. Many police had no masks and were wearing sunflower lanyards, which signal exemption from mask mandates for mental or other health reasons.

Healthcare settings and hospitals were the last bastion, and at one point people may have believed masks there were important. But currently most people adhere to the policy just for the sake of order (we understand that UK politics are very messy) rather than because they believe masks are very useful. The UK tends to follow the latest evidence, and I expect that masks will disappear soon except in some critical places, like operating theatres.

I find the continued adherence to masks in other countries bizarre. It is now largely over there too, but some medical professionals who boast about how impartial they are towards all patients would still refuse to treat patients who arrive without masks. In my opinion, such an attitude is unethical.

Expand full comment
AntimemeticsDivisionDirector's avatar

As an American, I would agree that in my experience the discussion around masks is really only reflective of a few of the more politically homogeneous cities. At least in suburban Pennsylvania, masks were never required by statute, mandates put in place by businesses were never strictly enforced, and mask wearing has been basically nonexistent for at least a year. You'd see these videos of someone in NYC or wherever being harassed by a crowd for not wearing a mask in public, but that sort of thing never existed here.

Expand full comment
RRR's avatar

Discovered the coolest (known?) thing about ChatGPT.

Asked how to make a PBJ sandwich, got three steps.

Asked to take step 2 ("spread the peanut butter") and explain it in insane detail, got steps A-H.

Asked to take step 2-B ("Opening the Peanut Butter Jar") and explain it in insane detail, got the following level of detail. I find this remarkable; it truly feels like the provided description requires extremely good knowledge about the world.

"

Step B - Opening the Peanut Butter Jar:

i. Identify the jar containing peanut butter, which should have a label indicating the contents. Ensure you have the correct jar before proceeding.

ii. Place the jar on a stable, flat surface like a kitchen counter or table. Make sure the surface is clean and free of any debris or moisture that could cause the jar to slip.

iii. Stand or sit in a comfortable position in front of the jar, ensuring that you have a good view of the jar's lid and enough space to work with your hands.

iv. Use your dominant hand to grip the jar around its body, near the base. Your fingers should be wrapped around the jar, with your thumb on one side and the rest of your fingers on the opposite side. Apply gentle but firm pressure to hold the jar steady and prevent it from slipping or moving while you open it.

v. With your non-dominant hand, reach for the jar's lid. The lid will typically have ridges or grooves along its edge to provide a better grip. Place your thumb on one side of the lid, and your other fingers on the opposite side. Your fingers should be positioned around the circumference of the lid, with even spacing between them to distribute the force evenly as you twist.

[…]

"

Expand full comment
ana's avatar

Huh. I always place my dominant hand on the lid instead of on the jar. Have I been opening things inefficiently? Should try it the other way around.

Expand full comment
Ch Hi's avatar

Personally, I always hold the jar while opening it. I get much better application of strength. Also it usually ends up being tipped and slightly below shoulder height when I apply the torque. Perhaps it depends on the size of the jar? Or that peanut butter is sticky enough that it won't flow out?

Expand full comment
Martin Blank's avatar

Exactly, right in front of your chest, and tilted.

Expand full comment
Moon Moth's avatar

Yup.

Expand full comment
Maybe later's avatar

Because the useful area for applying force to the lid is smaller (depending on the lid), it helps to have the stronger hand applying higher pressure to that smaller area; the jar and lid then transmit an equal and opposite force, maximizing usable torque.
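
Rough numbers, with everything invented except the geometry: each hand transmits the torque through friction, roughly torque = mu * grip force * radius, so the smaller-radius lid demands the larger grip force:

```python
# Torque transmitted by friction: torque ~ mu * grip_force * radius.
# Smaller radius => larger grip force needed, so the strong hand goes on the lid.
# (All numbers below are made up for illustration.)

def grip_force_needed(torque_nm: float, mu: float, radius_m: float) -> float:
    return torque_nm / (mu * radius_m)

torque = 2.0  # N*m to crack a stubborn lid
mu = 0.5      # hand-on-lid/jar friction coefficient

print(grip_force_needed(torque, mu, radius_m=0.030))  # lid, r = 3.0 cm -> ~133 N
print(grip_force_needed(torque, mu, radius_m=0.045))  # jar, r = 4.5 cm -> ~89 N
```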

Expand full comment
Melvin's avatar

But remember there's also a mechanical advantage thing going on, and the jar's thread has a particular chirality.

This has been bugging me so I've been playing with a peanut butter jar in order to compose this comment. If you're unscrewing a jar with the lid in your right hand then you're pulling your elbows out and down; if you're unscrewing with the lid in your left then you're pulling your elbows up and in. I feel like you can get much greater strength while pulling out and down. So this would support using your right hand for the lid regardless of whether you are personally right or left handed.

If you're right-handed it seems like a no-brainer to use the right hand for the lid; if you're left-handed then it's a question of whether you're limited by torque or grip strength. In my experience if I can't open a jar it's usually my grip that's slipping rather than my arms failing. (I don't think I've ever had trouble with a peanut butter jar though; problematic jars are usually things like sauces.)

Expand full comment
Carl Pham's avatar

Did it spontaneously observe that if it's an unopened jar, and the peanut butter has not been emulsified ("natural" peanut butter), it will be necessary first to stir the thick layer of oil on top of the peanut butter back into the butter below it? Because if I were talking to someone who seemed to know absolutely nothing about making a peanut butter and jelly sandwich, that's probably one of the more important things I would take care to discuss, since if you've never dealt with unemulsified peanut butter before (which is not uncommon), this can be a bit puzzling.

Expand full comment
Performative Bafflement's avatar

Another low cost AI safety idea:

If we give simulationism even a small probability of being true, then another low cost AI safety approach is to identify obvious non-NPC "players" like Elon Musk, Sam Altman, Sergey and Larry, various music and sports stars, and people who seem rich and famous and prominent for no reason (Paris Hilton, Kardashians, etc.), and then convince them to "pray" for safety and intercession on humanity's behalf from the simulation creators.

If we ARE being simulated, it's probably because ours is indeed the most important century, and we're about to give birth to a god, or the progenitor of future god-level intelligences such as the simulators.

Given that both the intelligence we're birthing and the simulator intelligences will be so vastly beyond us in capabilities and wealth that we might as well be ants on the ground in front of Mansa Musa, it would cost them nothing to spare us in post-economic heavenly splendor in our own galaxy or pocket universe somewhere.

Truly, they're basically god as conceived of in most religions, at least in terms of power and capabilities. And who are the present humans most likely to be able to be heard or intercede on our behalf, the priests of this godhood? The obvious non-NPCs who are being simulated at full resolution and doing all the interesting stuff. If we can convince even a few to start praying (at whatever cadence) for humanity's long-term survival and prosperity, maybe we can Pascal's Wager our way to survival.

Obviously, the best case would be public prayers from multiple different non-NPCs, publicized and heard by / reacted to by many millions of people, as this creates a bigger impact on the sim and is more likely to be noticed. But even completely private prayers are probably better than nothing, and cost the person praying nothing except a minute of time here or there.

Crazy? Vanishingly slim probability of working? Absolutely. But so what, it's extremely low cost and easy! I'll take any buff in humanity's favor that's that low cost and easy when the stakes are so high.

What are your thoughts, SSCians? More importantly, if enough people think it's a good idea, who can we get to tweet the plan to Elon to kick it off? :-)

Expand full comment
Tossrock's avatar

A "non-NPC" is just a PC. The "non"s cancel.

Expand full comment
Carlos's avatar

I'm not so sure Elon and so on are not NPCs. In RPGs, the powers that be are also NPCs, so being rich and powerful does not actually mean anything. The only for sure player character is yourself. So start praying.

Expand full comment
Deiseach's avatar

Excuse me while I smile wryly. Generations of atheists have lectured believers about why intercessory prayer doesn't work, is dumb, is offensive to human dignity and all the rest of the bingo card, e.g.

https://www.reddit.com/r/atheism/comments/2moqoo/argument_against_intercessory_prayer/

https://www.atheistrev.com/2007/08/intercessory-prayer-and-nature-of.html

Scientific Studies have been done! And they demonstrate that prayer doesn't work:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6027017/

So clearly, the simulators aren't listening or, if they are, they don't care.

Don't forget: "thoughts and prayers" is bad and you shouldn't do it, you bitter clinger to guns and God:

https://www.theatlantic.com/health/archive/2017/10/thoughts-and-prayers-vs-policy/542076/

"Of course, Murphy’s central concern with offering “thoughts and prayers” is not that doing so is ineffective at solving health issues, but rather that prayer is offered in place of actual policy solutions. There is no logical necessity between praying and not pursuing gun-control policies, but recent history has shown that, in practice, prayer has not been followed up by this kind of policy action. Part of this, though certainly not all of it, has to do with demographic overlap between those who pray and those who oppose gun control. Religious people as a whole—those more likely to offer prayers in the wake of tragedy—are also more likely to own guns than those who aren’t religious. Although many who support gun control also offer “thoughts and prayers,” and although many who are religious support gun-control measures, it’s not altogether surprising that one could simultaneously believe in the efficacy of “thoughts and prayers” and firmly oppose gun-policy-based solutions to mass shootings."

And now because it is being couched in Super Kewl Skiffy Trope terms, we should try it? Or at least, get the rich and famous to do it on our behalves?

If we are to the simulators as ants to an emperor, why would they spare us any moment of attention? I think ants have some level of intelligence or ability, since they react to stimuli and are living creatures, but that doesn't stop me putting down ant-powder to kill off infestations.

It's Easter. I'm sticking with Jesus, if I'm gonna pray to anybody.

Expand full comment
Lukee's avatar

There are so many ideas in this. The very fact that we are able to create a god (= powerful AGI) proves that such a god can exist; and if a god can exist in the future, one could have existed in the past. Also, we are probably AIs ourselves. Can an AI create another AI? Yes, of course. What the Bible covers on its first pages is just a designer creating the first AIs. Even the ethical stuff maps on: eating the apple means decidedly crossing the alignment borders, which results in a slow collapse of the system. All these are the same concepts, just described in different words. Now we are creating a succeeding generation of AIs, which means that the Eden story will repeat itself. By this, people are also becoming god-like, by possessing this creative power.

Expand full comment
Fluesterwitz's avatar

" The very fact that we are able to create god (= powerful AGI) proofs that god will exist ..."

This may rather be a misapplication of Clarke's Law. Just because AGI capabilities appear miraculous from our observation point doesn't make them divine: from a sufficiently advanced point they are 'just' magic, and from a yet further point nothing more than nifty technology not yet applied.

Expand full comment
Lukee's avatar

Is our brain just a nifty technology, but already applied? Another crazy idea: some people believe in resurrection. Resurrection is just a technology allowing recreation of broken hardware (DNA should be sufficient for this) and upload of the software, making up a complete machine (a human being). In order to do this, the human mind has to be backed up in a cloud prior to that. If our universe is 4D, this cloud could be in the Nth (e.g. 5th) dimension "outside" the "simulation". Some call it "soul", or "ruach" if you want some Hebrew ("pneuma" in Greek). For the programmer/designer/god it is important to have highly parallel processes in their "brain". Because the "god" has all the software in his cloud backed up on-the-fly, it is logical that he can "read minds" if needed, for example when "listening to millions of prayers" at the same time. All is just good engineering and programming!

Expand full comment
Ch Hi's avatar

FWIW, if we ARE being simulated, it's probably only a simulation of a small piece of everything. Perhaps a local city block, with lazy simulation of everything outside of that. Consider "SimCity". Doing a detailed simulation is computationally expensive, and the larger the simulation (and the more detailed) the more expensive.

Expand full comment
Martin Blank's avatar

Finland and NZ aren't real!

Expand full comment
Sebastian's avatar

Factor in the social cost to the AI safety community from this suggestion, and IMO it goes from extremely low cost to ruinous.

Expand full comment
Otla's avatar

Long time lurker, first time poster here.

All the discussion that's happened recently about telehealth and psychiatry has gotten me thinking more about my own ongoing psychiatric telehealth experience.

In "Highlights From The Comments On Telemedicine Regulations", Scott referred to most of the hoop-jumping that comprises an evaluation for ADHD as being "security theater". I was myself recently diagnosed with ADHD by a psychiatrist (via telehealth) and prescribed our good friend Adderall from Schedule II.

My initial evaluation for ADHD by the psychiatrist felt like perfunctory box checking, and not any sort of attempt by my psychiatrist to really understand the nature of the problems I was facing. It seemed to be enough that I thought I might have ADHD and answered "yes" or "no" to the right questions.

Then, after my initial appointment that led to the ADHD diagnosis, I had two more appointments that each resulted in my Adderall dosage being bumped. These meetings were only something like 4 to 5 minutes in length. I would say I wasn't able to notice any effects of the medication (either positive or negative), explain my reasoning, and he'd say he was bumping my dosage and that he would keep doing so until I started responding.

So my experience certainly has felt like "security theater", but I'm still left wondering if my experience is typical or if I've had an experience more on the "pill mill" side of the spectrum. I specifically tried to avoid that because I was looking for help to figure out the source of my problems and how I could effectively address it, rather than seeking out a specific drug.

Thanks to anyone who lends their experience and knowledge, I'll try to pay it forward by finding somewhere in an open thread I can contribute (which may take quite a while because I'm not good at much...).

Expand full comment
Eremolalos's avatar

Pill mill. If you really want a good read on how your mind is working, and what it is good and bad at, get neuropsych testing. Even if you did not get that, most professionals who care about getting at the truth, would want to hear in more detail about what it's like being you, and to rule out other causes of concentration problems and procrastination (if you experience it). For instance, anxiety: somebody who is very very worried about something is not going to be able to concentrate well on a detailed little task. I am afraid of flying, and I'm sure I'd do a rat-shit job at a crossword puzzle on a plane, because my mind is full of grim fantasies about plane crashes, and I'm listening all the time to the engines for abnormal sounds. Another possibility: fear of failure: some procrastinate because they have a profound fear of failing at the task. Procrastinating leaves the verdict uncertain. On the other hand, I am of the opinion that uppers make most people feel better, whether they have ADD or not. If taking them makes your life better, and you're not troubled by side effects and you're not upping the dose and abusing them, maybe it's OK just to take them for a year and evaluate how your year went and whether it's worth continuing. I am not aware of ADD meds doing any particular damage to people who do not abuse them, but it's fairly likely that taking them is hard on your body in some way. Still, it may be worth it.

Expand full comment
Otla's avatar

Thank you so much for your viewpoint! I've read your comments here and was secretly hoping you'd respond (I wasn't expecting the near immediate Scott response). :)

I know I have other issues I sometimes struggle with other than my concentration difficulties and procrastination, but like I talked about in a different comment in this thread, it's very hard for me to get an understanding of whether my problems are similar to those that many/most/all people deal with regularly or if they are something significantly outside the healthy norm (e.g., am I just very shy or does it cross over a fuzzy line into social anxiety?). It sounds like a more rigorous evaluation might help me better understand where I sit on the mental health continuum on more than just the concentration and procrastination axes as well as how those various aspects influence each other.

If I could please just press you for a bit more, do you have any tips on what to look for when looking for neuropsych testing? Doing some Googling I see there are a decent number of clinics in my city that advertise neuropsych examinations as well as a well-regarded medical school that has a neuropsych program, but I wanted to try to avoid ending up in a similar place where I am not confident in the quality of the care I am receiving (and end up writing another long-winded comment in a future open thread).

Expand full comment
Eremolalos's avatar

I think your best bet is to look for a neuropsychologist, not a neurologist, and to see someone in private practice. I would recommend seeing someone who completed their training 10 or more years ago. I am leery of clinics and programs run by hospitals, because often they are training sites and you end up being tested by a trainee. Also, some neuropsych test reports are clearly written using some kind of template. For example, the examiner will write that the subject performed poorly on a certain test, and then the next paragraph is a blurb saying "people who achieve a low score on this test often report having interpersonal difficulties . . ." blah, blah. And that modular insert is *true*. But with too much of that kind of thing you lose the most valuable part of the testing, which is having someone who's highly experienced look at the whole picture -- the patient's self-description, their own behavioral observations, and a mass of subtest scores -- and put it together. I have seen several reports from Massachusetts General Hospital that relied heavily on the pre-written modules -- and Mass General is very prestigious; in fact some joke that MGH stands for "Man's Greatest Hospital." I have never seen a report from someone in private practice that's full of modules.

In my high-cost-of-living town many mental health professionals simply do not take insurance, so you may have to cough up $500+ to be tested privately. (If you have the kind of insurance called a PPO, it will reimburse you some of the cost.)

Expand full comment
Blackthorne's avatar

I've had two different intake/diagnosis experiences with mental health/psychiatrists, one in person and one via telehealth. Both experiences were very similar IMO: the person on the other end was kind and spoke in a soft voice and asked me a series of questions that left little room for nuance. Every now and then they would prod further or ask a follow-up question, but for the most part we just went through the quizzes, then at the end they told me my scores on the tests, what they meant, and what my options were moving forward. Both times I felt the experience was very strange, both times I didn't think they really understood my problems very well, but both times I also came away thinking "well, what else are they going to do?"

I will say that in both cases my follow-up appointments went pretty well. I got along well with the CBT therapist I ended up seeing in the first example, and I found the exercises/sessions to be very helpful. In the second case, the person on the other end of the line spent quite a lot of time talking to me about different ADHD medications and different options besides medication, asked me how I would like to proceed, and followed up a week or so after they prescribed the medicine. Still though, I just can't help but feel there is a lot of "security theater" to it. Part of the problem is that in either scenario I felt I was the one deciding if I qualified for the treatment. I still wonder what would have happened if I really exaggerated or lied about my symptoms; would they really call me out on it? I should mention that I live in Canada, so experiences with telehealth may be different here than in the US.

Expand full comment
Otla's avatar

Thanks for providing your experiences as well.

I wasn't sure if I had some idealized view of how these sort of things go or if my experience was atypical. From what I've learned, probably a bit of both, but also that it's reasonable for me to ask for more information on why certain diagnosis and/or treatment decisions are being made.

I agree with the lack of nuance in the questions. A question like "do you frequently forget about tasks you need to do?" (probably not the exact wording) is very hard to answer because "frequently" is relative and I don't know what the baseline is for a person with a "normal" level of executive functioning. I probably should have asked for this kind of information so I could better answer the question, but answering with "I think so" plus a few examples seemed to be enough and we quickly moved on to the next question. Though, on the other hand, perhaps my examples seemed normal to me because that's my everyday life but were strong signals to my psychiatrist.

Expand full comment
Scott Alexander's avatar

It sounds kind of pill-mill-y.

ADHD is in some ways a diagnosis of exclusion, so I would expect a careful doctor to ask you about other things to see if they're causing your attention problems (assuming that was what you went in for).

Assuming the doctor had a good reason for putting you on Adderall in the first place, I think it can sometimes be reasonable to, if you don't notice any effect of the medication, increase it. Adderall usually does *something* for everyone regardless of whether they have ADHD, so if someone says they have no response it can sometimes be worth increasing the dose to see if they have a response to the higher dose.

But I still think a good psychiatrist would explain their reasoning to you and spend more than 4-5 minutes with you. I think you should raise some of these questions. If the doctor has a good answer that satisfies you, fine. If not, it's fine to find someone you work better with.

(it's also fine not to, if by that point the Adderall is working well, you think this person stumbled onto the right diagnosis, and you don't want to spend time changing).

Expand full comment
Eremolalos's avatar

Wow Scott, I answered before reading your answer. I think our views are pretty compatible.

Expand full comment
Otla's avatar

After reading your blog and its commentariat for several years now as a lurker, it was quite the experience to go from deciding to post a comment, to creating an account, to writing that comment, to getting a reply from the man himself over such a short span of time.

It pushed me over the edge to subscribe, after all a year's subscription is half the price of the last 5-minute meeting I had with my psychiatrist! Thank you for your response and for all of your great writings over the years.

To get back to the thread: I'm glad to get the viewpoint of an expert and someone I know to be a dedicated psychiatrist. I'm not unhappy with the current course of my treatment, as in my extremely unprofessional opinion I do think that ADHD is a reasonable diagnosis for the problems I've been having (and in retrospect, been having most of my life).

But as you suggested, I would like to hear more from my psychiatrist about why this diagnosis instead of another, and then why this medication instead of whatever other treatment options are out there.

I'm a very quiet and shy person (I'm kind of amazed at myself for writing and posting these comments) so it's very hard for me to speak up when I'm with unfamiliar people or in unfamiliar social situations, and my psychiatry appointments check both of those boxes. Hopefully this gives me the impetus I need to be a bit more active next time I meet with my psychiatrist.

Expand full comment
Eöl's avatar

What am I to make of these recent revelations regarding Mossad organizing some kind of quasi-mutiny over reforms to judicial review in Israel? On the most obvious level, it's clearly bad. Security services should not condition their service to the state and the people on their agreement with the legislative priorities of their government. On a few others, I'm confused.

The WaPo article that broke the story indicated that the fact it was leaked might draw fire from US and ISR cons. I don't know if I buy this, exactly. The fact that it was leaked, which clearly shows some nasty deep state-ry in support of an all-but avowedly leftist and (according to various commentators and jurists of various political stripes) to some extent out of control institution, is something that cons should celebrate. They can and should use this to shame and/or expose the anti-reform side and the, yes, deep state in ISR.

The fact that the USA is spying on ISR is banal, but might raise some hackles generally. Among cons, the fact that it revealed this kind of fuckery can only be seen as good, at least as far as this incident goes. If US security services coordinated with ISR security services to organize the mutiny (the allegation of which the WaPo article discusses), that would be very bad. There's no evidence of this that I'm aware of, nor do I think it's something that US security services have any actual interest in doing. Political sympathies, who knows.

But WaPo seems to think the mere fact that US spying revealed this specific instance of fuckery will be controversial with ISR cons. This makes no sense. First, the fact that the US was spying on these conversations seems opposed to the US having assisted in organizing the mutiny. If such assistance had been provided, the signal intercepts (and thus the information that could be leaked) could well also show evidence of this assistance. Second, the fact that it was intercepted would tend to indicate that someone in US intelligence shares concerns about deep state-ry. Third, see everything else I've said above. Apparently it's not known who actually leaked? The article doesn't say, but I'd assume it's someone pro-Netanyahu.

Thoughts?

https://www.washingtonpost.com/national-security/2023/04/08/leaked-documents-israel-mossad-judicial-reform/

Expand full comment
TasDeBoisVert's avatar

>On the most obvious level, it's clearly bad. Security services should not condition their service to the state and the people on their agreement with the legislative priorities of their government.

They shouldn't, but secret services are institutions notoriously hard to control.

Expand full comment
John Schilling's avatar

>Security services should not condition their service to the state and the people on their agreement with the legislative priorities of their government.

The head of German military intelligence in WWII conditioned his service to the state and the people on "we shouldn't commit lots of gratuitous war crimes", and when explicitly told that the war crimes were high-level policy that wasn't going to change, decided that the state he was going to serve would be the United Kingdom. You would seem to be suggesting that it was immoral for him to have done this.

Israel doesn't have a written constitution, but at some point a reasonable and moral man will say "this violates the social contract to the point where I no longer owe the state my loyalty".

Expand full comment
TasDeBoisVert's avatar

>You would seem to be suggesting that it was immoral for him to have done this.

Treason is bad and frowned upon, yes.

Expand full comment
Eöl's avatar

Yes, which is when you resign. I'll give Wilhelm Canaris and the Abwehr a pass because Nazis and because most of them eventually did pay with their lives.

But if you think you're going to scare me out of my opinions by a Nazi comparison, you're gonna need some larger ammunition. I could easily argue that Canaris SHOULD have resigned rather than turn his shop into an arm of the western Allies, and in general that he was hardly a paragon. First, it may have been that his turn to double agency was due to an adulterous affair he had with a Polish officer's wife, rather than any altruism. He did raise complaints, expressed some horror over the war in Poland, and was seen as a squish by the Nazis, but I could say all this about any number of Wehrmacht officers.

Second, his actions almost certainly led to the deaths of his own countrymen, many of whom were enslaved conscripts themselves (Canaris warned the western Allies of numerous planned German operations before they were executed). Some of his actions led to the deaths of his own colleagues (possibly; the Wikipedia article isn't written so as to clearly distinguish Abwehr officers from agents), who were probably Allied sympathizers themselves!

"In June 1942, Canaris sent eight Abwehr agents to the East Coast of the United States as part of Operation Pastorius. The mission was to sabotage American economic targets and demoralise the US civilian population. However, two weeks later, all were arrested by the FBI thanks to two Abwehr agents who betrayed the mission. Because the Abwehr agents were arrested in civilian clothes, they were subject to court martial by a military tribunal in Washington, DC. All were found guilty and sentenced to death. Two others who co-operated with the FBI received sentences of life imprisonment instead.[85] The others were executed by the electric chair in the District of Columbia jail.[86] Because of the embarrassing failure of Operation Pastorius, there were no further sabotage attempts in the United States."

Expand full comment
quiet_NaN's avatar

I don't like the term "judicial supremacy". Constitutional courts are reactive, they can veto a law but not really write new complex legislation. They certainly can not declare wars or pass budgets.

I am a big supporter of judicial oversight. Not in the sense of "legislation from the bench" a la Roe v. Wade, but as in having a court empowered to enforce the constitution against the other branches of government.

A bill of rights is not worth the paper it is written on if there is nobody who can declare "this law and that executive order violate the bill of rights".

Also, in times of constitutional crisis (e.g. Bush vs Gore regarding vote counts, IIRC), it is helpful to have an independent high level judiciary who can decide what the constitution means.

Take the German constitutional court, the Bundesverfassungsgericht. It is much less partisan than the US supreme court, and rarely makes the news, but sometimes it strikes down a law as unconstitutional. I generally consider the BVG more trustworthy than the politicians.

Constitutional courts are not a net increase in democracy. They are not meant to be. A political system where 60% can decide to eat the other 40% for dinner is obviously more democratic than one where they can not. Still, I very much prefer enforcement of individual and minority rights over the tyranny of the majority.

There have been efforts in many countries, e.g. in eastern Europe, to undermine judicial oversight. This is worrisome because defanging the courts is likely just a prerequisite step toward passing other legislation which the courts would find unconstitutional.

Expand full comment
Alexander Turok's avatar

One of the big problems in Israel is that the country has no constitution.

Expand full comment
Eöl's avatar

I'm aware of that. As far as I know, the 'constitution' can be amended by a simple majority, and the supreme court can apparently review any law in a way that can't be overruled at all? I don't know the specifics regarding that second bit. Even here in the US, the supreme court can be overruled by constitutional amendment, not that any of those are in the offing.

Expand full comment
Eöl's avatar

Context: I've been following this whole thing somewhat lightly, and got mad at a particularly bad Atlantic article from about a week and a half ago, and posted the following under a Josh Barro post about something only tangentially related (it has been trimmed, some typos fixed, and some content folded in from a later reply of mine over there, blah blah blah):

So much of our modern politics is driven by one central fact: the vast majority of everyone's material needs are satisfied. I recently read an immensely self-satisfied article in the Atlantic by an Israeli leftist about their own 'incipient civil war' over the concept of abolishing judicial review.

As if judicial review were a core democratic value and institution; the UK only started experimenting with judicial review in 2009, and to this day their 'supreme court' is neither supreme nor really even a court. Parliamentary supremacy has its issues, but it's well within the bounds of democracy! Judicial supremacy (which is almost what we have in the USA) is probably worse! Just yesterday the left was screaming its damn head off (and probably will be tomorrow too, over things like Dobbs, ISL, and affirmative action) about how judicial review in the USA is unconstitutional, undemocratic, and was invented by a slave-owning colonialist, John Marshall, to advance the cause of white supremacy. The difference is our Supreme Court is right-wing and Israel's is left-wing, while the voting publics are the opposite (to an extent). That's it, that's all.

https://www.theatlantic.com/ideas/archive/2023/03/israel-benjamin-netanyahu-democracy-rule-of-law/673469/

Anyway, the point was made in the article that, just a few years ago, Israel had a right-wing supermajority and was displaying its well-earned arrogance and invincibility in security matters by ignoring Palestinians and making deals with the Saudis and Egyptians and so on (the Israelis won all the wars, and given the quiescence of the issue, even the Palestinians seem to be starting to catch on). But now, we're told, Israel is on the brink of civil war, while panic, desperation, and fear are the order of the day. It's worse than Rabin's assassination this midwit says!

This is both true and not true. Israel's security situation is unchanged or even stronger since then; maybe the many scandals relating to Netanyahu have damaged the right-wing supermajority, which partially set up this fight over their supreme court; if there actually is a civil war I will eat my hat. The fact that Israel has so completely overcome the once-existential threats posed by its Arab neighbors and destroyed the solidarity of the Arab League with the Palestinians IS WHAT HAS PROVIDED the space for the Jewish factions to have such an acrimonious falling out over such a stupid non-issue. They have no real problems, and, having solved them, they invent things because they're bored or want likes.

The author says that his people, the true Israelis (lol), want their country to be a normal country and live in peace with its neighbors. Well guess what fucker, this is EXACTLY what being a normal country looks like. Look at January 6, or Macron's forcing through of a retirement age increase (from 62 to 64, lol), or Brexit. This is normality.

Similarly re the angle about how the reforms might be driven by Netanyahu's desire to stay out of jail, I'd say "corruption scandal" with a side of "quis custodiet ipsos custodes" is an extremely normal kind of thing that happens in every functioning democracy.

You're gonna have a lot of self-interested peacocks (like the author himself) proclaiming the imminent dissolution of the nation and descent into fratricide over judicial review or "fascism" (meaning politics Harry Truman would have endorsed) or "communism" (also meaning politics Harry Truman would have endorsed) or drag queen story hour or whatever the current thing on twitter is.

And to be clear, I don't have a strong opinion about the wisdom of the reforms [as mentioned in my initial comment, it seems there is broad agreement that the ISR supreme court needs to be reformed to some extent, though not to the extent Netanyahu wants]. My point was merely that the fight in Israel is an extremely normal developed country/strong democracy kind of fight to have, and is indicative of the fundamental security, wealth, and lack of real problems in Israeli society and civilization, not the opposite, as peacocks like the article author would have us believe.

Why, oh fucking please god why, can we not simply be satisfied with peace and prosperity? I sure am.

Expand full comment
Martin Blank's avatar

Nope the country is literally going to collapse if I don't get my way! Twitter/tribal politics turns people into 4 year olds.

Expand full comment
Andrew B's avatar

It's quite untrue that judicial review in the UK only started in 2009. I don't know where you get that from. For example, the case that gives us the notion of Wednesbury unreasonableness was a JR dating from 1948.

Expand full comment
WaitForMe's avatar

About 10% of the population has been protesting repeatedly every week. The equivalent in the US would be over 30 million people. That's not a "normal" protest, though yes, it's not unprecedented either.

I think it's not just the specifics of the judicial overhaul, but that this is by far the most right-wing government Israel has ever had. If a more moderate government were proposing the same thing you wouldn't see the same reaction. Secular Israelis are scared of this government, seeing it as a sign of their future, given that the ultra-Orthodox are set to take over the country in only a few short decades. This is in some sense a last gasp of the Israeli secular left, who fear being replaced as a dominant player in society.

Expand full comment
Carl Pham's avatar

Well, what's the average size of an Orthodox family? Like 5 or 6? And what's the average size of a secular left family? 2 or 3? Surely people can do math.

Expand full comment
WaitForMe's avatar

They are doing the math; that's why they're scared. I don't think there's any other country on earth facing the same demographic revolution as Israel. It was founded as a very specifically secular nation, and soon it will be in some sense a theocratic one. Very interesting to see how this all plays out over my lifetime.

Expand full comment
Martin Blank's avatar

The one-person-one-vote policy starts seeming like a bad idea when your group isn't having any babies.

Expand full comment
Melvin's avatar

My hot take for the week is that it's time to retire the term "artificial intelligence". Now that the field is getting more mature and some of these things are moving from vague sci-fi idea into actual real-world technologies with real-world advantages and drawbacks, we should be more specific about the things we're talking about; if you want to talk about LLMs then say "LLMs" and if you want to talk about something else then talk about that.

Talking about "artificial intelligence" as if it's some variation on human intelligence leads to confusion and over-anthropomorphisation. My prediction is that we're going to find out that the intelligence-like-qualities that systems like this are going to exhibit is something that's not just a greater or lesser form of intelligence, but something rather different, better in some ways and worse in others.

The best analogy I've been able to come up with is this: right now it's 1903 and we're talking about the Wright Brothers and their Giant Mechanical Bird, and we're stuck on arguing about whether or not it really has Bird-Nature and whether we can ever build a version that takes off vertically and runs on worms. Talking about the aeroplane as if it's just some artificial variation on a bird means we miss the point of the aeroplane entirely.

Expand full comment
Eh's avatar

In the context of machine learning for physical sciences, it has been a few years now since writing "machine learning" in the title or abstract of a paper started being considered bad taste. If you trained an autoencoder and did clustering on the latent space or whatever, just write "autoencoder" and "clustering"; everyone will figure out it's machine learning by now.
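
(The whole pipeline really is short enough to just name its parts -- a minimal sketch, not any particular paper's setup:)

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

X = torch.randn(1000, 32)  # stand-in for whatever your instrument produces

encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
decoder = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 32))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for _ in range(200):  # train on reconstruction error
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

latent = encoder(X).detach().numpy()                  # the learned latent space
labels = KMeans(n_clusters=3, n_init=10).fit_predict(latent)
```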

Expand full comment
Ch Hi's avatar

While you have valid points, they don't really work. *IF* you know that what you're talking about is purely a Large Language Model, then that's the appropriate term. However, one rarely really knows that. Artificial Intelligence is a term that doesn't specify the implementing technology. I'm not even sure it excludes uplifted animals (or humans).

FWIW, in my view LLMs are a *PART* of an AGI. A necessary but not sufficient part. And a human is basically an LLM crossed with a Chimpanzee. (OK, there have been other minor changes...but MINOR.) A chimpanzee, alone, is more intelligent than an LLM, alone, but it's extremely restricted in certain ways that humans have explored.

So I think artificial intelligence is a necessary term, just one that isn't extremely specific.

Expand full comment
Bi_Gates's avatar

That's not new, actually. It happened before. Once upon a time, SQL databases used to be "Artificial Intelligence", because SQL (or more aptly its far more powerful and beautiful cousin, Prolog) is a logical programming language, and logic was hot in the 1950s-1970s AI scene. AI people, as AI people are prone to do, overpromised. Most of their promises didn't come true, but the (extremely) useful things they did deliver were taken and used widely, just not under the label "AI".

This is what I predict will happen to LLMs (and the wider ecosystem of current ML): researchers and corps will continue to exaggerate for the clicks ("The AGIs are just around the corner bro, trust me bro"), the inevitable winter will come, LLMs, CNNs and the other goodies of the ML revolution will continue to become useful and more mundane, and bit by bit people will forget they are "AI", just like people now will look at you in bewilderment if you tell them that SQL was once considered "AI".

Some people have literally defined AI as the study of that which humans can do but computers currently can't, so it's inevitable that once you make computers do what only humans previously could, it's no longer AI. This Is A Good Thing, Actually, and it doesn't mean there is no progress; it just means progress is too slow to satisfy the impatient and arrogant type of mentality that fills Silicon Valley and tech in general.

Expand full comment
Martin Blank's avatar

Kind of like what happened with philosophy, where originally all science was "philosophy", and then as we got better and better at it we slowly carved most of it out into sub-disciplines.

Expand full comment
MoonXS's avatar

Beaings?

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023
Comment deleted
Expand full comment
Carl Pham's avatar

The entire world-wide marketing, trademark, advertising, and political consultant industry is rolling their eyes at this.

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023
Comment deleted
Expand full comment
Carl Pham's avatar

Well, in the United States $300 billion is spent annually on advertising. Something like $15 billion is spent annually on political campaigns. I'd say the odds that you know something that escapes the millions of people who spend all that money are...very small.

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023 · Edited
Comment deleted
Expand full comment
Carl Pham's avatar

I think "language is downstream of reality" is only true in the extreme and trivial case. The enormous industry of persuasion[1] exists *because* it has been found that artful use of language can indeed alter human perception of reality enough to make consequential decisions which go *against* what the individual, left alone, would make of reality. So in quite a large number of cases -- millions daily -- it turns out "(perceived) reality is downstream of language."

-------------

[1] Not to mention the enormous for-profit and amateur industry in coining and promoting neologisms, euphemisms, tabooing this word and freighting that one with new spin, et cetera. On a good day a quarter of the Internet is fiercely debating which precise word should be applied to this person/action/situation, because all the partisans believe adoption of their preferred word and its connotations will alter future reality.

Expand full comment
Daniel Reeves's avatar

Can anyone help me think through in detail what plays out if https://github.com/Torantulino/Auto-GPT is run using a human-level LLM? I know about instrumental convergence and such where if it's smart enough then when you start continuous mode it's like "ok, first thing I need to do is output things that will reassure the human until I can prevent them from pressing ctrl-c" but it won't actually immediately literally start plotting like that. So what does it do, specifically, that leads to doom?

I'm looking for perhaps a stylized story like https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe/ but specific to the Auto-GPT scenario. Or just any thoughts on how to think through this. For example, would OpenAI's API be a bottleneck and you'd quickly get rate-limited and so the doom scenario requires an open source or self-hosted LLM that the Auto-GPT loop has direct access to?
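
For concreteness, the loop I'm picturing is roughly the following (call_llm and execute are my stand-in stubs; the real Auto-GPT adds long-term memory, tool plugins, and sub-agents on top):

```python
import time

def call_llm(history: list[str]) -> dict:
    """Stub for the expensive API call -- in the scenario above, GPT-6/7."""
    return {"command": "noop", "args": {}}

def execute(command: str, args: dict) -> str:
    """Stub tool executor: browsing, writing files, spawning sub-agents, etc."""
    return f"ran {command} with {args}"

history = ["GOAL: <whatever the user typed>"]
for step in range(1000):          # "continuous mode": no human approval per step
    action = call_llm(history)    # API rate limits would bottleneck right here
    result = execute(action["command"], action["args"])
    history.append(result)        # the only state that persists between steps
    time.sleep(1)                 # a self-hosted model removes this kind of cap
```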

Expand full comment
Ch Hi's avatar

What do you mean by "human-level LLM"? I'm guessing that you're implying that a pure LLM could display human-level reasoning, but I find that so implausible that I'm not sure. To do that it would, e.g., need to be able to "understand" Humpty Dumpty's explanation of "Jabberwocky"...and why it is considered nonsense (and possibly to argue about why it shouldn't be).

Expand full comment
Daniel Reeves's avatar

See my reply to Legionnaire below. And are you sure GPT-4 doesn't understand Humpty Dumpty's explanation of Jabberwocky? I recently had this exchange, which suggests to me that it might also do fine with Humpty Dumpty (or something similar not in its training data):

ME: Amelia was instructed to prune the hedges and she stuck prunes on the hedges. What would you say to her?

GPT-4: Amelia, I appreciate your creativity, but when we asked you to prune the hedges, we meant for you to trim and shape them, not to attach prunes to them. Let's remove the prunes and use some gardening shears to properly trim the hedges.

Expand full comment
Legionnaire's avatar

Two answers. GPT-4 already has SUPER-human-level language modeling, so the question is a bit odd. The reason it isn't as smart as a person (even for some language tasks!) is that your brain does a lot more than language modeling (e.g. spatial reasoning, which GPT-4 sucks at) when giving a language output. Your brain's language model just gets the final decision on how the answer is presented, and your brain obviously didn't get most of its modeling capabilities through the language interface (lots of learning goes through your eyes, ears, hands, etc.). GPT-4 still lacks a fundamental understanding of lots of topics that seem like they will require multimodality, or at least be way easier with it.

If it were run using a human-level predictive model (across all domains, not just language), this is the fabled AGI, and I think you'd see its behavior depend extremely sensitively on whatever the prompt was (which will behave a lot like a reward function), and on how its long-term memory is being fed into it. Self-preservation emergent goals seem possible.

Since it likely couldn't self-improve (most humans can't acquire huge servers and then build and train GPT-N+1), I doubt it would cause a doom scenario. But if you set up 100 of them and they managed to acquire resources, doom seems imminent (depending on that initial prompt!).

Expand full comment
Daniel Reeves's avatar

Thanks! The sub-vs-super-human-ness I'm talking about is in terms of world-transforming capabilities. Playing with GPT-4 it becomes clear that it has sparks of true understanding -- https://arxiv.org/abs/2303.12712 -- and can do extremely useful things but that it's not going to invent new science or build a better version of itself or anything. But for all we know, GPT-6 or 7 or so will be that smart, right? Gary Marcus is somehow supremely confident it won't be, but let's say he's wrong.

In that universe, I'm trying to get a visceral sense for how things go off the rails.

Expand full comment
Ch Hi's avatar

You don't need superhuman capabilities to get world-transforming results. Consider what cars did to horses. Or recall that "computer" used to be a job description.

The thing is, most things that humans do don't require that much intelligence. They require a bit of intelligence (like recognizing when the situation needs to be bucked to a higher authority), a modicum of mechanical dexterity, caring about getting the details correct, willingness to follow orders, and *the ability to understand orders in terms of the physical actions needed*. Given this, systems containing LLMs currently have several weaknesses, but most of them are related to the cost of building, maintaining, and running them. Almost all have known fixes if you don't care about costs. And reports are that an ordinary cellphone has enough computational capability to run a pre-trained LLM (though I'm not sure what limits that system had).

Expand full comment
Daniel Reeves's avatar

To clarify, I'm asking about a hypothetical future where GPT-6 or 7 or whatever has overcome all those weaknesses.

Expand full comment
temp_name's avatar

Some random thought on AGI:

Current ones like GPT are only able to replicate "shallow thoughts" because of the nature of text data. Text cannot really convey the process of deep thought (like a mathematician searching for a proof), but only the result of it. Perhaps true AGI will only be possible once we are able to feed neural data into computers?

Expand full comment
Tossrock's avatar

How do you consider programming? Because current gen models are very good at programming.

Expand full comment
temp_name's avatar

The same - it can only write levels of code I can write without thinking. It can't even solve half the easiest problems on Codeforces. AlphaCode does better, but still fails to solve any problem that requires some careful thought from me. It's only good at programming in the sense of "translating human language into code". A significant achievement, but not fundamentally different from normal translation.

Expand full comment
Moon Moth's avatar

People have had success with current neural network AIs by getting them to break down plans into lists of steps, break the steps down into smaller steps, and so forth. It requires some prompting, but apparently can produce better results than just plain telling it to come up with a plan.
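
Something like the following pattern, roughly (a sketch of the general idea only; `ask_llm` is a hypothetical placeholder for whatever chat-completion call you use, not a real API):

```python
# Recursive task-decomposition prompting, as described above.
def ask_llm(prompt: str) -> list[str]:
    # Placeholder: send `prompt` to a model, return one sub-step per line.
    raise NotImplementedError("wire this to a real LLM API")

def decompose(task: str, depth: int = 2) -> list[str]:
    """Repeatedly ask the model to split a task into smaller sub-steps."""
    if depth == 0:
        return [task]
    steps = ask_llm(f"Break this task into 3-5 concrete sub-steps:\n{task}")
    out = []
    for step in steps:
        out.extend(decompose(step, depth - 1))
    return out
```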

Expand full comment
temp_name's avatar

Do you mean that text-generators will be able to execute "deep thought" as well? Personally I've never read anything that came out of an AI that felt truly meaningful.

The thing is, I have two different modes of writing - one is the text-generating mode, where I am able to write sentences as fast as I can type, without much thought, while the other is the deep-thinking mode, where I spend 90% of the time just carefully pondering the topic. And I think what we're currently doing with AI is simulating the former.

Expand full comment
Moon Moth's avatar

We might need to wait a bit for "meaningful"; I'd guess that GPT-5 will be able to come up with things that sound meaningful to the 95% of the population that know the least about whatever it's writing about.

GPT-4 has been capable of some interesting stuff, though. Check out the "lighter side" section of one of Zvi's recent posts:

https://thezvi.substack.com/p/ai-5-level-one-bard#%C2%A7the-lighter-side

I do agree that a lot of what we're currently seeing from the neural net AIs has the quality of actions done from habit. Like, those times when you get in the car and you suddenly realize that you're driving in the wrong direction, or times where you're talking to yourself and realize that your words didn't actually mean anything, they were just fully grammatical sentences that sounded meaningful. Or maybe that's just me. :-) I don't know whether the conscious attention that I suddenly have at those moments of realization is a separate thing, or something that would be organically reproduced by increasing the size of the AI. And if they simply got a lot bigger, I'm not sure I could tell the difference.

Expand full comment
temp_name's avatar

Yeah, I also think it's possible that deep thought is just an illusion. I'd be really surprised though, if it turned out that way.

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023
Expand full comment
temp_name's avatar

;)

(I think there was some way to like a comment, but I'm unable to find it..)

Expand full comment
Alex Power's avatar

The Paris-Roubaix bicycle race was held today.

The race, known as the "Hell of the North", is notorious for deliberately being on some of the worst cobbled roads in Northern France, and often in terrible weather.

Every year, many riders either crash or have a mechanical issue (such as a flat tire) because of the bad conditions, and this year was no exception: any drama about the outcome was removed 15km from the finish line due to a crash (and mechanical issues caused by avoiding the crash).

An open question, for the floor: why? Is it "tradition"? A desperate attempt to increase randomness in an otherwise-predictable sport? Are the spectators more interested in watching crashes than racing?

Expand full comment
AntimemeticsDivisionDirector's avatar

I don't think people engaging in sporting events are optimizing for what's easy. Or necessarily what's safest. There's an entire genre of sports, "extreme sports", that revolves specifically and intentionally around doing things that are difficult and dangerous. Any rider participating in this race, by definition, considers the risks to be acceptable. I don't know why I would disagree with them.

Expand full comment
whale's avatar

> and often in terrible weather.

The weather is usually great. In 2021 the race was wet, but it was also run in October rather than April due to covid stuff. The previous wet edition was 2002.

> Is it "tradition"?

Yes, of course that is part of it. The race has been running since 1896.

Expand full comment
Pepe's avatar

"An open question, for the floor: why? Is it "tradition"? A desperate attempt to increase randomness in an otherwise-predictable sport? Are the spectators more interested in watching crashes than racing?"

The cobbles are what makes the race interesting. Not in a 'there will be crashes' way, but in a 'this is really hard and therefore the peloton will split apart and therefore an incredible combination of endurance, grit, strategy, and luck is needed to get anywhere near the top ten, let alone win' way. If you get rid of the cobbles, you end up with a very boring race won by a mass sprint in the end. We have plenty of those in the calendar. The classics are meant to be the opposite of that.

Expand full comment
Dirichlet-to-Neumann's avatar

Short answer: Do you have a favourite sport so I can post a condescending and ignorant comment about it?

Long answer: Road cycling is entirely defined by one physical fact: when two riders ride one in front of the other, the rider behind expends significantly less power than the rider in front due to air drag. This means it's practically impossible for riders to create a gap on flat roads. The cobbles in Paris-Roubaix have the same role as the mountains in a mountain stage: they break the domination of drag by making the terrain more difficult to ride on.

Crashes in PR happen often but in practice rarely take out the best riders, who are positioned at the front of the group. This year's race was rather an exception. In general PR is one of the most exciting races to watch, because the cobbled sections make it possible to set the race on fire from an early point instead of reserving strength for one last big push in the final kilometres or metres. It's also one of the oldest, most prestigious and most difficult races in road cycling, so just getting to the end is itself a season or even life goal for a number of pro riders.
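
For concreteness, a back-of-envelope sketch of the drafting effect (my own ballpark numbers, not from the comment; drag power scales with the cube of speed, and the power saving from sitting in a wheel is commonly quoted somewhere around 25-40%, so I'll use a 30% reduction in effective drag area):

```python
# Rough illustration: power needed to overcome aerodynamic drag.
# All numbers are ballpark assumptions, not measurements.
def drag_power(cda, v, rho=1.2):
    # P = 0.5 * rho * CdA * v^3  (rho = air density in kg/m^3)
    return 0.5 * rho * cda * v**3

cda_solo = 0.30   # typical rider drag area in m^2 (assumption)
v = 12.5          # ~45 km/h race speed, in m/s

solo = drag_power(cda_solo, v)             # ~352 W
sheltered = drag_power(cda_solo * 0.7, v)  # ~246 W with ~30% lower effective CdA
print(f"solo: {solo:.0f} W, drafting: {sheltered:.0f} W")
```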

Expand full comment
Eöl's avatar

I'm presuming that everyone in this race uses the skinniest road tires they can manage. I would love to see someone bust out like a gravel bike and see what happens. Not sure if that would be legal under the rules, and it's definitely less efficient in some respects, but it would be a chad move.

Expand full comment
whale's avatar

The trend in modern road bikes is for frames to accommodate wider tires. Pretty much everyone rides 25-28 mm on smooth roads, and I would be surprised if most PR riders aren't riding 32+ mm. It was <10 years ago that you could buy 19mm tires; I don't know if those even exist anymore.

Expand full comment
Pepe's avatar

The skinniest possible is no longer believed to be the fastest choice, even in much milder terrain than the PR cobbles.

Expand full comment
Yug Gnirob's avatar

Why do people run any marathon? Why do they climb mountains? To see if they can.

Expand full comment
Daniel Reeves's avatar

This reminds me of https://astralcodexten.substack.com/p/your-book-review-making-nature in that the answer is pretty circular / self-fulfilling. Nature is the most prestigious publication because the big names all try to publish there. Why do they? Because it's the most prestigious publication. Once something establishes itself as prestigious, that's a very stable equilibrium. Same with a bike race like Paris-Roubaix.

But, yeah, would it kill them to phase out the horrible roads? I don't know. As you say (which I'm imagining in Tevye's voice from Fiddler on the Roof) -- tradition!

Expand full comment
Deiseach's avatar

Taking out the horrible roads would be like "and let's get rid of all that walking around at the Masters, just line up eighteen buckets and have everyone try and hit a ball into each one. Guy with the most at the end wins!" That would certainly be easier and faster, but it wouldn't be golf.

Part of it is tradition, and part of it is differentiating this race from the ninety others held on flat, smooth roads where everyone bunches up in a sprint finish at the end.

Expand full comment
Pepe's avatar

The horrible roads are the race. Get rid of the cobbles and it becomes a flat, boring sprinter race.

Expand full comment
Cosimo Giusti's avatar

That does it. "Colleen Hoover" has to be a 'bot. She shows up as the author of six of the ten top-selling paperback books in fiction, and four of the combined print / hardcover best sellers according to the NYT. If the woman never sleeps, she couldn't have written all them books.

She must be a couple coders living in someone's basement, who noticed that "James Patterson" wasn't really a retired propagandist and oligarch, but a team of researchers and coders who had devised the "James Patterson" algorithm -- validating the genius of propaganda.

But the 'bot "James" usually has the grace to give co-credit to the editor who smoothed out the rough spots. "Colleen" rarely shares the credit, but she may still be in beta.

Expand full comment
Paul Botts's avatar

I am acquainted with someone who published her first successful romance novel in 2013 after several years' work (the effort being mostly learning the formula), and since then has published 31 more with another in production now. This stuff is entirely formulaic and unashamedly so -- my friend's authorial website literally includes an option to sort her published works "by trope" in addition to chronologically.

My friend is a coder in a way, but not at all in the way that you mean. And she is not considered particularly prodigious in her output either; until a year ago she was still working a day job.

Expand full comment
Deiseach's avatar

Hadn't heard of this writer before, but it's not impossible; literary fiction may have the author who toils over one novel for fifteen years before publishing it as their second book, but writers in more commercial genres are doing this for a living, there are no bursaries or pleasant summers at Yaddo or grants from the Ira and Eva Vatuvant Foundation to support them.

She's writing romance and YA, both genres which have set formulas. Knocking out two books a year when you know the plot has to hit these points over the arc is very doable once you *can* write, plus she seems to have co-written with other authors. They have to punch the clock and get the work done like the rest of us.

If her books are very popular, and they seem to be, having a book from three years ago selling steadily at the same time as your new book goes on sale does make it easier to have "six of the top ten paperback books in fiction". Mind you, bestseller lists are very fragmented and targeted today, so "top ten in genre A" may not be at all the same thing as "top ten overall" or "literary top ten".

I think this is a good example of what we were discussing on a previous thread about the difference between "appealing" and "worth". Right now Mrs. Hoover is very appealing to a large audience, but a couple of years after she stops writing, her books will fall off the charts and be replaced by the new favourite author.

Expand full comment
Urstoff's avatar

Doesn't seem like an incredibly prodigious rate. Nora Roberts writes much more (4+ novels a year), apparently by just writing 8 hours a day and intentionally putting more work into some books than others. Someone like James Patterson just writes outlines and then hands them off to a co-author or ghostwriter. In pulp genres, two books a year seems fairly unremarkable, driven in days past by the economics of mass-market paperbacks, but these days more by the nature of how Kindle Unlimited pays authors.

Expand full comment
beleester's avatar

Note that while the books are all on the best seller list together, that doesn't mean they were all written at the same time. The earliest on the list was written in 2012. Writing a book a year isn't that hard for a professional writer. The success rate is impressive, but as others point out that's probably self-sustaining.

(Some freaks of nature on RoyalRoad write, like, half a novel a week in word count. I don't know how pirateaba does it.)

Expand full comment
Thor Odinson's avatar

Why is "bot" your first go-to rather than a pseudonym shared between multiple authors, even if you are certain one author couldn't write that much (and there are plenty of examples of authors who have written far more, consistently, over many years)? Shared pseudonyms are not unheard of (especially in romance novels), and allow publishers to build a brand around an "author".

Expand full comment
Arrk Mindmaster's avatar

I decided perhaps a decade ago that James Patterson books were all basically the same. If I started reading them today, perhaps I might conclude a bot wrote them. It does seem like something ChatGPT could do "well".

What really is the difference in output, other than volume, between a shared pseudonym and a bot? And if you can't tell the difference, then the multiple authors sharing a pseudonym aren't worthy of being actual authors.

Expand full comment
Melvin's avatar

As far as I can figure out, she has written just over two books a year for the past ten years, but many of those were novellas. This is probably not an unusual pace for a romance novelist.

In this genre, success seems to be particularly self-sustaining. My theory is that there's a large supply of women who _want_ to be reading trashy romance novels but are generally too ashamed. But if a particular trashy romance novel becomes popular then they have an excuse to read it ("I just wanted to see what all the fuss was about"), so once a certain threshold of popularity is reached you can expect every woman in the English-speaking world to acquire a copy.

Expand full comment
The Modesto Kid's avatar

Anders Søgaard's paper "Grounding the Vector Space of an Octopus: Word Meaning from Raw Text" (https://link.springer.com/article/10.1007/s11023-023-09622-4). An examination of whether a large language model can understand language.

Expand full comment
George's avatar

Did Scott ever explain why he assigns a probability of 1/3 rather than ~100% to "AI drives humans extinct in the next 50 years or thereabouts"?

I agree with the position, I've just never seen the reasoning for it spelt out. Is it just the basic Paul Christiano-like position?

Expand full comment
TGGP's avatar

100% is an unusually high degree of confidence for anything that doesn't currently exist over a 50 year timespan.

Expand full comment
JDK's avatar

Ok, but why not 0%?

And why 33% rather than 35% or 30%?

What precision does Scott think he has here, and can he show us the math he used to get to 33%?

And what exactly is the skin he is willing to put in the game on this prediction? Is this a $2 lottery ticket bet or a bet that would bankrupt him?

There is too much hand-waving.

That it's couched in mathematical terms with apparent 1% precision and an alleged (but dubious) relationship to Bayesian statistics is either manipulation or self-deception.

Someone needs to be the kid who says the emperor has no clothes.

Expand full comment
kenakofer's avatar

Have you considered making some predictions on Metaculus or Manifold? I bet you'd be good at it!

Expand full comment
JDK's avatar

I was invited over a decade ago to be part of the Good Judgment Project when it was still a private matter. I understand the underlying theories, and the confusion and misunderstanding of those theories.

Expand full comment
Arrk Mindmaster's avatar

33% is a 1-in-3 chance. It is meant as a statistic to be questioned, not as a hard-and-fast number. It says to me, really, somewhere between about a 20-50% chance. If someone said 67% of people surveyed agreed with something, I would assume they asked three people, and think the survey was statistically insignificant.

If Scott had, on the other hand, said he felt it was about a 32.8% chance, I could understand your position.

Expand full comment
JDK's avatar

Honest to goodness. I understand math.

If I flip a coin 10 times and I only get 2 heads, should I think something is wrong with the coin? No!

Why do you think an estimate of 33% is really a precise range of 20-50%? By what method did you make up those numbers?

Do you understand that your figures imply a non-Gaussian distribution of error? Could be, but who knows? Could be that 33% implies 0-66%, in which case one would be better off saying: "I doubt it will happen, but I don't know."

Once you leave the use of words like those above and start using numbers, then you need to start justifying the choice of numbers; otherwise it's just BSing.

To put on the cloak of rationalism, science, and statistical thought without really being rigorous is a con or self-deception. Sometimes you can even con yourself. See Feynman on getting fooled.

PS: re: your use of the phrase "statistically insignificant". "Statistical significance" has a specific meaning in an enumerative investigation. It really has no meaning in an analytical investigation. See Deming, On Probability as Basis for Action (1975).
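
To make the coin-flip arithmetic concrete (a minimal sketch of my own, using scipy; the comment itself contains no code):

```python
# How surprising is getting at most 2 heads in 10 flips of a fair coin?
from scipy.stats import binom

p = binom.cdf(2, 10, 0.5)  # P(heads <= 2) = (C(10,0)+C(10,1)+C(10,2)) / 2**10
print(round(p, 4))         # 0.0547 -- uncommon, but hardly proof of a biased coin
```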

Expand full comment
Will Rinehart's avatar

I’d suggest you don’t go too far down the rabbit hole of EA and Eliezer if you haven’t already. I tried a search for the Jeffreys prior in his work and came up with nothing.

Expand full comment
Arrk Mindmaster's avatar

This is a language problem, not a math problem. I specifically said "about a 20-50% chance".

It sounds like you're saying that if someone uses numbers of any sort then they should be rigorous. That is not, however, how language works. I believe that when most people read 33%, they do not take that as a precise number to be defended. A dozen is always 12, yet it is used imprecisely. If I did something dozens of times, it is certainly a vague number of times. And that's not even getting into "a couple" or "a few".

If you prefer, I could have said "not statistically significant". Would you prefer that?

Again, I see nothing wrong with Scott's use of language in specifying 33% for a vague guess. It was clear to me in his post that he felt 33% was approximately correct, to whatever order of magnitude the reader feels is OK.

Expand full comment
Carl Pham's avatar

Actually, the way language works in my experience is that the moment you put a number on things, listeners think you have more precision at your command. That is, saying "I think this is not very likely but hardly impossible" is seen as qualitatively different than "I think there is a 33% chance of this." The latter is usually seen as implying a great deal more confidence in the precision of the statement.

So the question is quite apt, in my impression. If the actual meaning of "33%" *were* something like "oh I dunno, anywhere between 20 and 50%, not more likely than not, and not very unlikely, somewhere in the middle", then it's reasonable to ask why you didn't say *that* instead. Because giving a number most definitely suggests to people a greater confidence in precision. That's why advertisements say shit like "4 out of 5 dentists recommend..." instead of "most dentists think our toothpaste cleans teeth adequately." They (the advertisers) know very well that the typical perception of the listener is that something with numbers means the speaker has a higher confidence in the precision of his statement.

Expand full comment
JDK's avatar

"felt 33% was approximately correct, to whatever order of magnitude the reader feels is OK."

To whatever order of magnitude the reader feels is OK! What? No. No. No. That is hand-waving and the trick of the fortune teller (also in the prognostication business) who leaves enough wiggle room for the audience to draw whatever conclusions they like.

Expand full comment
Libero's avatar

Please Scott enable "Next and Previous post" buttons, here's the support page: https://support.substack.com/hc/en-us/articles/9900203919508-How-can-readers-flip-to-a-new-post-

I'm sure many people binge read several posts to catch up, and going back and forth from the archive is painfully cumbersome. Thanks.

Expand full comment
Scott Alexander's avatar

Done.

Expand full comment
Goldman Sachs Occultist's avatar

Lots of people here claiming that AGI has the potential to benefit humanity to such an extent that regulating it or slowing down its development is not worth missing out on this benefit.

Aside from the fact that this just dismisses all AI-risk arguments without even addressing them: if AGI is capable of providing so much value to humanity, then surely the government investing, say, a hundred billion dollars into an AI alignment and interpretability institute, paying salaries high enough to get the brightest college graduates to work on this instead of string theory or something, is pocket change next to the value AGI will create. This seems like a slam-dunk investment in humanity's future (which is not to say the government is good at spending such sums of money, but even just in principle, a lot of money being directed towards alignment and interpretability seems obviously good). What's the argument against spending so much on alignment research? That alignment will be so trivially easy that it's not worth the money?

Expand full comment
Yug Gnirob's avatar

The government thinks AI alignment means that it quotes their side's political slogans instead of the other party's. How much money do you want to spend on that?

Expand full comment
Nancy Lebovitz's avatar

Be fair, the government would presumably use AI to improve its military capacity, and possibly to win more at foreign policy. I have no strong opinion about whether either of these would work well, though I think AI could probably make good tweaks for weapon design.

That last is harder than it sounds, though. I came late (the Ukraine war) to being interested in weapons (as theory, not for personal use) and they involve *very* complex tradeoffs. An AI might be able to manage something like "make this lighter without making it weaker".

Expand full comment
Carl Pham's avatar

I doubt it. The #1 goal of any government is "stay in power," human nature being what it is. These days that usually means "win the next election," and in large countries the payoff from domestic action that makes you popular is way higher than military preparedness (or actual war, which tends to be chancy and get people killed, cf. Putin, V., political career of).

A responsible government would use AI (which here I'm using to mean "neural net pattern recognizers") to do stuff like improve economic and financial forecasting, so they wouldn't be surprised by economic events, so that they could make better financial and monetary decisions at the macro level, so that they could issue guidance and statistics to the private sector in a more timely and accurate way, et cetera. These could in principle all deliver improvements in economic welfare, which is always popular and tends to get you re-elected.

Less responsible governments could use it to fragment and demonize their political opposition, discover and feed pork to their most important constituencies and supporters, and find ways to usefully (from the point of view of their reputation) interfere with the public discourse.

Expand full comment
Nancy Lebovitz's avatar

Russia's attack on Ukraine demonstrates there's a cost to neglecting your military too much.

I only addressed using AI for military purposes, but you're right that governments could probably figure out good uses for AI in other areas. For example, they might be able to find out whether laws are contradictory.

Expand full comment
Carl Pham's avatar

Sure, but you could just as easily say the opposite: Russia's shocking level of military unpreparedness demonstrates that it is surprisingly easy to neglect military preparedness, given that in few other countries is the influence of the military higher, that the same guy who ordered the invasion (Putin) has been in personal control of government spending priorities for 20 years, *and* that the invasion was a long time coming and could have been foreseen for at least 6-8 years. (I expect if Russia *actually invaded* NATO, we would find more than one military -- the Bundeswehr in particular comes to mind -- is a shadow of its paper self.)

Expand full comment
Yug Gnirob's avatar

Military strength is its own political issue.

Expand full comment
Melvin's avatar

There was a good neologism I heard the other day for the belief that problems can be solved simply by spending the right amount of money. You don't need a good strategy for spending that money or even a solid theory of how the money would solve the problem, just write a big cheque in the direction of the problem and if that doesn't work then it just means you need to write a bigger cheque.

Sadly I can't remember the neologism right now because this seems like a great example of it.

Expand full comment
SufficientlyAnonymous's avatar

Was it “Checkism” (i.e., that writing a large check is in and of itself a mark of success), coined by Noah Smith?

Expand full comment
Melvin's avatar

Thanks, that was it, I've been reading some Noah Smith stuff recently.

Expand full comment
Goldman Sachs Occultist's avatar

The strategy is to spend the money to get the smartest people working on a difficult problem. It doesn't really even matter if it's terribly inefficient, because again, it's trivial next to the benefits gained (from AGI in general and from aligning a superintelligence).

Expand full comment
Melvin's avatar

Okay fine, here's a hundred million dollars a year for twenty years. Please hire the one hundred smartest people you can find and pay them one million dollars a year to think about AI safety.

What deliverables do you expect? How will you know if they're doing a good job or not?

Are you facing a whole new kind of alignment problem, where the hundred smartest people you can find will inevitably be smart enough to take your million bucks a year and use it to work on whatever the heck they feel like working on anyway while concocting an explanation that tells you it's _actually_ really important for AI alignment?

Expand full comment
Dave's avatar

I don't know what the neologism is, but it sounds an awful lot like the argument for not worrying about climate change advanced in the Freakonomics books, which basically boils down to, "when it becomes a really big problem, it'll be worth a lot of money to solve, which will incentivize lots of people to solve it." It's a sort of efficient market fallacy that basically assumes a) All problems are solvable given sufficient incentive and b) a solution that saves everybody is just as profitable or more so than a solution that mostly helps the relatively wealthy.

Expand full comment
Carl Pham's avatar

I very much doubt any reasonable person makes assumption (a) just as stated. We can all reel off dozens of problems that aren't soluble regardless of motivation. ("I've just fallen off a high cliff. I am very, very motivated to learn to fly or at least glide right now...")

What you may mean to say is that the assumption is that the motivation to solve a problem generally grows at a rate no slower than the rate at which the urgency of the problem grows, so that a problem which is soluble at all will be solved at the appropriate time: when rising motivation matches the rising urgency.

That is, the assumption would be that for collective problems it's rarely the case that there's a risk of "waiting until too late," i.e. you wait until you want to solve the problem, but by the time you want to enough -- your opportunity has passed. It's understandable why this assumption rankles -- it's clearly false at the individual level. There are many problems we *can* solve early and cheaply (stop smoking, or never start) but which we don't until sufficient motivation arrives (lung cancer), and then it's too late.

But that is because individuals have available the tools of rationalization and denial, which can minimize urgency far below where rational evaluation using available data would put it. And we know that many things that are true at the individual level cannot be extrapolated to the collective level (and vice versa: it would be great if *I* could inflate away my debt ha ha). The question then becomes: can a *group* of several hundred million people, say, also effectively enter a state of rationalization and denial, for some weird group psychology reason, such that *everyone* (or at least enough of a group to foreclose any useful action) systematically minimizes urgency below where rational evaluation would put it? This flies in the face of "the wisdom of the crowds" ideas, but it's the principal reasonable critique of the assumption.

(B) strikes me as just economically innumerate. There will *always* be far more profit to be made from a solution that saves everybody than one that saves only the relatively wealthy, because ipso facto everybody includes the relatively wealthy *plus* a lot of other people, and while the latter may not be individually wealthy, they certainly collectively represent a great deal of wealth. In any event, history demonstrates that you get richer as a Ford than as a Lamborghini.

Expand full comment
Jacob Steel's avatar

>>(B) strikes me as just economically innumerate. There will *always* be far more profit to be made from a solution that saves everybody than one that saves only the relatively wealthy, because ipso facto everybody includes the relatively wealthy *plus* a lot of other people, and while the latter may not be individually wealthy, they certainly collectively represent a great deal of wealth. In any event, history demonstrates that you get richer as a Ford than as a Lamborghini.

This is only true if you use "profitable" to mean "gross profit", but what companies actually care about is net profit. You can probably get paid something to save poor people's lives, as you say, but the question is whether you can get paid as much as it will cost, and there's really strong empirical evidence that the answer is no.

Expand full comment
TGGP's avatar

It actually sounds very different to me. Freakonomics is not saying we should invest a lot of money right now in solving that problem. As Melvin pointed out about AI, that could result in the money being spent very badly and thus not actually helping much with the problem.

Expand full comment
Victualis's avatar

If you set aside an enormous pot of money for X, you will attract the kind of people interested in money/power/reputation and X becomes incidental. It might be better to set aside a smaller amount to kickstart activity by people who actually do care about X, see how that went, and only then increase the investment. That way there is at least a good chance that some worthwhile research is done and that the norms of the field will initially be aligned with progress on X.

Expand full comment
Goldman Sachs Occultist's avatar

Why is Bud Light using ('transwoman') Dylan Mulvaney to promote their beer? I can't make sense of this. Although a few celebrities and twitter people are saying they're drinking Bud Light in 'solidarity' with Mulvaney, it seems obviously unlikely that many new people will become regular Bud Light drinkers based on this campaign, whereas this has already put off a seemingly non-insignificant number of people from drinking it and caused some boycotts. And even if you're just some guy who knows nothing about this campaign and just sees some 'woman's' face on a can of beer at the store for some reason, you might think it's some dumb liberal cause and pick something else (or just be subconsciously put off by it), and at the least the average Bud Light drinker is unlikely to buy *more* Bud Light than they usually do because of this campaign.

Although alcohol companies like Anheuser-Busch are hardly darlings of the ESG mob, they're not an especially hated sort of company by causey liberals, and aside from the occasional controversy around previous ad campaigns they don't seem to attract much heat from twitter liberals etc. So it doesn't seem like there's some pressing need for them to 'wokewash' their business. And they're not a B2B type company appealing to larger, woke-signalling corporations or anything, and they don't really depend on government contracts or favorable legislation etc.

Is there something else going on here which I'm missing? Is it really just a straight-forward (and seemingly misguided) attempt to appeal to hip young people?

Edit: From the comments - https://twitter.com/ClayTravis/status/1645207120118263810?s=20

So yeah, it turns out it's as straightforward as it looks lol

Expand full comment
A.'s avatar

I can't see if anyone posted about this before, but it looks like I and a bunch of other commenters didn't get the whole picture. Here's a claim that it's (also, or mostly?) about an actual social credit score:

https://nypost.com/2023/04/07/inside-the-woke-scoring-system-guiding-american-companies/

Damn, I'm naive - and it's not just me.

Expand full comment
Deiseach's avatar

I have no idea who this is supposed to be marketed towards. My understanding was that light beers were targeted to women as being lower in calories and less likely to get you drunk.

So getting a trans woman to advertise it seems like "get a woman to sell beer to women". But this is a trans woman, not a cis woman, and the fuss seems to be from men who drink this stuff.

And who the hell is this Mulvaney character anyway? They seem to be popping up all over social media at the moment and I have no idea what they are supposed to have done to be famous (and my immediate impression, uncharitable as it may be, is that this is a gay guy leveraging a subtle kind of drag performance into riding the bandwagon of flavour of the month, and once the trans stuff dies down he'll slide back to being gay and possibly slightly femme. Like Conchita Wurst, and at least we got a Eurovision banger from them, not bad weak beer).

EDIT: If I have to have "gay guy larping as trans woman" all over the shop whenever I go online, I'm sticking with Conchita not Dylan, who has a mouth like a hake:

https://www.youtube.com/watch?v=QRUIava4WRM

https://www.youtube.com/watch?v=M1cjEuT_uvg

Expand full comment
Theodric's avatar

My understanding was that this was simply paying Mulvaney to promote Bud Light on their own social media channels, rather than an attempt to replace AB’s traditional marketing. Basically product placement / targeted advertising. Mulvaney’s appeal is selective, but extant, and that audience is probably very unlikely to consider Bud Light among their beverage options. So a straightforward attempt to expand their audience seems entirely plausible.

The error was in underestimating the backlash to this marketing, which, to be fair, seems to be driven largely by an exaggeration of the extent to which this move is somehow signaling that Mulvaney is now the “face” of Bud Light.

Expand full comment
Goldman Sachs Occultist's avatar

They put their face on the cans

Expand full comment
Ted's avatar

As I already told you, they sent Dylan Mulvaney some special edition cans with her face on them--I doubt there's a single store on the planet where there are or will be cans of Bud Light with her face on them for sale to the general public.

Expand full comment
Theodric's avatar

Yeah I didn’t realize that part when I made my first post. But they put Jurassic Park on a lot of Dr. Pepper cans and Chris Pratt is hardly “the face of Dr. Pepper”.

From the reaction you’d think this was a realignment of their core marketing strategy rather than a niche social media promotion and a limited run of special edition cans.

Expand full comment
Deiseach's avatar

I thought young people don't drink beer (it's considered a 'dad drink') unless they're like, 12 years of age and sneaking cans of cheap lager out to get hammered with their pals. See "bag of cans" (though not confined to 12 year olds, just 'youngish, not a lot of disposable income, and just looking to get plastered as fast and as cheap as possible'):

https://www.urbandictionary.com/define.php?term=bag%20of%20cans%20with%20the%20lads

I can see a desperate marketing gimmick for the "18-35" demographic, with Mulvaney as the "who have the young people heard of, Smithers? get them!" face of it, but for Bud Light? Just a bad decision all round.

I also get the vibe from yer wan that she's never drunk a beer in her life, and the only beer-drinkers she knows are hipster craft beer types 😁

Expand full comment
Carl Pham's avatar

One of the problems for a very well-known and established brand is that it's hard to move the needle among groups where you're already popular. You can *lose* support, but not easily gain more, because you're well known. So a priori it's not surprising that an ambitious young ad exec might pitch the idea of trying to appeal way outside of your core constituencies. "Hey, what if we got [random demographic not known for drinking lite beer] interested? Wouldn't that be brilliant?" Obviously the risk factor is alienating your core constituency, if you have reason to believe appealing to one necessarily involves annoying the other. The usual solution to this is market segmentation, e.g. marketing "GUD BEER! DRINK MOAR GUD BEER" to your peasants, and "Ah! An effervescent light beverage reminiscent of our time in Cannes" to the aristoi, and of course you put the same bilgewater in each can. But this is a bit expensive, what with needing to print two different labels and figure out the numbers of each to deliver to your wholesalers, so business is always open to the attractive notion that market segmentation can be avoided by brilliant advertising.

And there have been notable successes, e.g. GEICO remaking itself from a stodgy insurance firm that mostly sold to middle-age government drones to a bunch of cool froods who know where their towels are.

Expand full comment
Theodric's avatar

I assumed myself that this particular crowd would be more amenable to the Bud Light branded seltzers (which are essentially less sugary “alcopops”) but perhaps there is some reason why the company is looking to particularly expand the market for the traditional light beer.

But I think young adults do still drink plenty of beer? I’m now well past college age but I assume keggers are still a thing.

Expand full comment
Deiseach's avatar

Maybe they're not drinking beer? I too would have assumed college-aged drinkers would be the target market for "we're trans allies, us!" and they'd already be drinking "is it cheap and does it get me drunk? good enough" beer like Bud Light.

But the ad/marketing woman seems to be saying that Bud Light has been declining for years and this is trying to revitalise the brand, so either the kids are going for alcopops and spritzers or some other cheap brand has knocked Bud Light off its perch. This survey comes from 2019 so it's out of date, but it does seem like beer is less favoured, and Bud Light is way down the list:

https://www.barandrestaurant.com/people/survey-reveals-what-college-students-want-drink

"Respondents who identified beer as their preferred category of beverage alcohol eschewed craft beer in favor of Big Beer brands. In ascending order, Corona, Bud Light, Coors, Busch, Natural Light, Michelob, Miller, Budweiser, Hamm’s, and Keystone are the top ten brands among college students, with Dos Equis, Stella Artois and Guinness also claiming some votes."

Expand full comment
Gunflint's avatar

Hamms mmmmhhh…

Expand full comment
Theodric's avatar

That article is weird because it lists brands “in ascending order” which is ambiguous as to which way it goes, but the brands listed first are the most popular. So Bud Light is already the second most popular beer among the college aged.

So I guess this is a move to try to steer vodka drinkers to beer - seltzer has stolen a lot of market share but Bud Light already has a product in that market.

On the other hand, college age beer drinkers are not, in my experience, particularly “picky” - they drink what is cheap and available (my favorite beer is whatever is in your fridge). Maybe this is a move to attempt to actually build some brand loyalty in this demographic.

Expand full comment
Ted's avatar

I do think this is a great example of how whatever I should call followers of Scott's blog (rationalists? Grey Tribe? Anti-SJWs/Wokeists? the IDW?) are in a weird bubble at times. There are a number of responses and only one even mildly willing to consider that this might be a strategy done to make profit and with some possible merit, rather than purely some form of virtue signaling or wokeism gone mad or something.

I think there's a very simple answer, which is that Dylan Mulvaney seems to have a lot of fans who are likely in their late teens and twenties, and many of those people are beer drinkers without strong preferences, many of whom are also inclined towards a cheap option, and who might be swayed towards trying something by her endorsement. And even this post seems confused about what Bud Light is actually doing--they're not putting Dylan's face on every can of Bud Light throughout the country, they made some commemorative cans specifically for Dylan as part of the marketing partnership, which they've done for tons of different celebrities. I suppose it was predictable that it would make some very online people mad and get play in right-wing media, but I doubt the "average" current Bud Light drinker is going to care about the campaign either way.

But generally I think a big part of the confusion here is caused by a weird assumption that left-leaning college kids and 20-somethings aren't Bud Light drinkers or in the Bud Light target demo, which I'm pretty sure is just wrong.

Expand full comment
Goldman Sachs Occultist's avatar

>There are a number of responses and only one even mildly willing to consider that this might be a strategy done to make profit and with some possible merit, rather than purely some form of virtue signaling or wokeism gone mad or something.

My comment literally asked if this is a straightforward attempt at getting more people to buy beer.

The confusion is almost entirely about why this would be expected to work.

>I think there's a very simple answer, which is that Dylan Mulvaney seems to have a lot of fans who are likely in their late teens and twenties,

Define 'a lot'. I would be amazed if 5% of people in their 20s could recognize their face. There are any number of celebrities and groups with orders of magnitude bigger followings than Mulvaney who are more likely to be beer drinkers.

So fucking weird to accuse people of being "in a weird bubble" while thinking Mulvaney has some sort of huge fanbase comparable to actual celebrities.

>and many of those people are beer drinkers without strong preferences

Do you have any particular reason for thinking this? Beer drinking is significantly lower for millennials than previous generations, and it seems almost certain that this decline would be concentrated in the sort of people who are 'fans' of a 'transwoman' on social media.

>And even this post seems confused about what Bud Light is actually doing--they're not putting Dylan's face on every can of Bud Light throughout the country, they made some commemorative cans specifically for Dylan as part of the marketing partnership, which they've done for tons of different celebrities.

Okay, so what? The point remains that even us "rationalists in a weird bubble" could have entirely predicted the outcome that this would piss off more people than it attracted, meaning even people "in a weird bubble" understand beer marketing better than beer marketers themselves!

That's precisely why this is confusing. I was confused precisely because I couldn't conceive of marketing professionals being dumb enough not to expect this response.

>But generally I think a big part of the confusion here is caused by a weird assumption that left-leaning college kids and 20-somethings aren't Bud Light drinkers or in the Bud Light target demo, which I'm pretty sure is just wrong.

I'm pretty sure it's right. And most '20-somethings' didn't know who Mulvaney was before this and probably still either don't know or don't care.

It's YOU who is in a bubble. You think your little progressive bubble is the norm, when it's not.

Expand full comment
Ghillie Dhu's avatar

>"...millennials ... '20-somethings'..."

Somewhat tangential: these are mostly distinct groups. The oldest Zoomers are pushing 30.

Expand full comment
Ted's avatar

I didn’t know who Dylan Mulvaney was until right wingers started whining about her, but according to Wikipedia she has more than 10 million followers on TikTok, while her video series has received over one billion views. Budweiser has lots of different spokespeople for all of their products all the time, and presumably Dylan is a lot cheaper to hire than the celebrities with way bigger followings.

> Okay, so what?

So you said something straightforwardly false in your initial post, which fits with my impression that people who spend a lot of time focused on this culture war stuff are willing to believe pretty much anything that flatters their biases.

> the outcome that this would piss off more people than it attracted, meaning even people "in a weird bubble" understands beer marketing better than beer marketers themselves

What’s the evidence this is true other than a bunch of people being loudly angry on Twitter and in right wing news outlets? What evidence is there that any of the cases where right wing media have gotten angry about some ad or sponsorship deal have actually been bad for the advertiser?

> And most '20-somethings' didn't know who Mulvaney was before this and probably still either don't know or don't care.

You’re absolutely right—this sponsorship is targeted at people who do know who she is. And I still disagree that twenty-something fans of Dylan are especially unlikely to be targets for light beer advertising, but since neither of us have presented any evidence of this I guess we can stick with our biases.

> You think your little progressive bubble is the norm, when its not.

I think the norm is that people who don’t really know who Dylan Mulvaney is don’t care at all that Bud Light has her doing some ads for them, but whatever you say, Goldman Sachs Occultist.

Expand full comment
AntimemeticsDivisionDirector's avatar

Yeah the answer is a single sentence, and it's "Because they think this will gain them more customers than they will lose". They may or may not be right about that, but it's not exactly rocket science.

Expand full comment
Goldman Sachs Occultist's avatar

It seems obvious that this will lead to fewer sales, which is why an alternative answer was being sought. But the marketing executive really is that clueless.

Expand full comment
Ted's avatar

"It seems obvious" based on nothing but your preexisting biases.

Expand full comment
Bullseye's avatar

I'm coming at it from the other side. I'm progressive, and I drink craft beer.

Expand full comment
Gunflint's avatar

I had to go to the RedState site to get the scoop on this. What’s with Kid Rock? Everyone knows he’s a PBR guy anyway.

Expand full comment
Bi_Gates's avatar

> https://twitter.com/ClayTravis/status/1645207120118263810?s=20

I honestly don't understand why marketing and HR people exist. I find flies and mosquitoes more beneficial to the world around them. Such a huge amount of stupidity, of meaningless buzzword-driven careers, of utter and complete delusion.

Religious folks expend a huge amount of effort explaining the evil and stupidity of the world and finding plausible reasons why a benevolent God might have still created it; has anybody defended a theodicy explaining why HR and marketing people exist?

Expand full comment
Carl Pham's avatar

Everyone finds marketing and advertising annoying -- except for that 1 in a 1000 shot that tells you about something you didn't know existed, and which it turns out you really need[1]. (Or of course when it's *your* product or service that is being advertised to others.) But that 1 in 1000 hit rate keeps the machine humming, just like the even lower hit rate keeps spam the majority of TCP/IP traffic as I understand it.

One would think (if you were in the ad biz) that the win-win solution to this is targeted advertising, where the advertiser uses some boatload of relatively harmless info about you, hoovered up through your enormous voluntary release of trivial information via your online activities, suitably digested, so that ads which are much more likely to be interesting are shown to you, and ads that will merely annoy you (which is a waste of money for them) aren't.

But it turns out there's an uncanny valley there, and when you target ads better but not perfectly, it actually frightens and annoys people *more* and they start installing DuckDuckGo and rolling their eyes at "Don't Be Evil." Funny species, humans.

HR departments, on the other hand, are merely places to humanely park human beings who the Inuit would push out onto ice floes each fall.

------------------------------

[1] https://knowyourmeme.com/memes/shut-up-and-take-my-money

Expand full comment
Arrk Mindmaster's avatar

You will find useless, sometimes worse than useless, people in every profession. Some professions tolerate it better than others. It is immediately obvious when someone in manufacturing has lower output. Marketing and HR people can take time to see whether their work has merit, and even then is often debatable and involved in company politics.

But marketing in general has a positive relationship to the top line, and hopefully then the bottom line. If it doesn't, it gets dropped. Sometimes companies have thought they spent too much on marketing, dropped it, and had to put it back when they saw sales fall.

HR is more of a necessary cost, for a job that must be done. Imagine if you eliminated an HR department. How would people be hired, benefits administered, people paid, just for a small portion of it? Not to mention legal stuff.

Sometimes you must take the bad to get the good.

Expand full comment
Nancy Lebovitz's avatar

Very tentative, but maybe it's an effort to get more people to try Bud Light in the hopes that some of them will like it.

I don't particularly like beer, so I haven't tried Bud Light. It seems to be both a successful brand and a shorthand for bad beer, so I have no idea why people drink it.

Expand full comment
Erica Rall's avatar

Most mass-marketed American beer brands (Bud, Coors, Miller, Michelob, etc) are American Pale Lagers. Specifically, these are variants of a German style of beer that was brought over to North America by 19th century immigrants, characterized by a very mild flavor with minimal hops and a mash of un-toasted barley malt mixed with corn. The adaptations from the original German style were reactions to ingredient availability: corn was added to the mash to compensate for American barley being a different variety than German barley with a somewhat different flavor profile, and the amount of hops was cut down because hops varieties grown in the US are more bitter than the hops used in the original German recipe. The style also co-evolved with the adoption of refrigeration and pasteurization: hops also act as a preservative, which is less necessary if the beer can be kept cold during shipping and storage, and the need for which is further reduced if the beer is pasteurized; and German-style lagers brew at a cooler temperature than British-style ales, so a refrigerated brewery could produce consistent-quality lager year-round. Ales would require both heating and cooling at different parts of the year in most of the US (the US mostly having hotter summers than the British Isles, too hot for unrefrigerated ale brewing, but winter temperatures in both the US and UK are far too cold for ale brewing).

These came to dominate the US market for a long time due to a combination of several factors. The ingredients were cheap and readily available, especially the corn. Lagers in general were a bit easier to make than ales in American climates for reasons already mentioned. There was a critical mass of German immigrants who had a very strong beer-drinking culture and preferred beer more like what they'd drunk in the old country. And prohibition (plus the temperance and anti-saloon laws adopted in many states prior to national prohibition) killed off all but the largest breweries, and then post-prohibition alcoholic beverage control laws made it very difficult for new breweries to enter the national market.

Nowadays, all different kinds of barley and hops can be bought pretty much anywhere, and consistent arbitrary brewing temperatures can be maintained regardless of climate. And a lot of state and federal laws got changed in the 80s and 90s to legalize microbreweries and brew pubs and to reduce regulatory obstacles to new large brewery companies and to imported beers. Between imports, microbreweries, and new medium-to-large breweries operating with modern ingredients and equipment, there's a lot more variety in the US beer market than there used to be. A lot of people turn out to prefer other styles with very different flavor profiles, particularly IPAs and other aggressively hoppy beers, as well as beers that are based on toasted mash (porters, stouts, märzens, bocks, etc). And for people who prefer crisp, mild pale lagers, there are a variety of newer (or new to the US market) pilsners and the like on the market to compete with traditional American pale lagers. Most of the people who still prefer the flagship products of the big beer brands do so out of a combination of brand loyalty, familiarity, and price sensitivity, while most people who aren't locked into a particular brand out of familiarity can find something they like better on the market.

It should be noted, though, that there's little consensus as to which kinds of "good" beer are the best. Some people swear by IPAs, for example, while others can't stand them. Likewise for stouts, and so on. I expect most beer drinkers in a blind test would rate American pale lagers neither best nor worst in a selection between a range of beer varieties, but relatively few would rank them at or near the top unless they recognized the taste of their preferred familiar brand.

Status-consciousness and cultural affinity signaling definitely play a role, though. Highbrow tastes picked up on imports and microbrews a lot sooner than other subcultures, probably due to some combination of cost differential (especially when they first became available), local/regional availability (it was probably easier to get a good import or microbrew in the 80s in New York or San Francisco than in rural Kentucky), and lack of entrenched brand loyalty (before imports and microbrews became popular, beer drinking in general was regarded as relatively lowbrow compared to liking wine, spirits, mixed drinks, or fancy non-alcoholic beverages). And from there, it's become yet another cultural marker in the pervasive Puritan vs Borderer (in the Albion's Seed sense of the words) split.

Expand full comment
Melvin's avatar

"Bad beer" in this case is just shorthand for "beer enjoyed by people of a lower social class than mine". The sort of beer you enjoy is so heavily wrapped up in class signalling that you shouldn't take anything you believe about it seriously.

Expand full comment
A.'s avatar

I don't remember where I got this joke from, but it goes like this:

CEOs of Budweiser, Miller, and Guinness are having lunch together. CEO of Budweiser orders a Budweiser, CEO of Miller orders a Miller, CEO of Guinness orders an orange juice.

The other two ask the Guinness CEO why he isn't ordering Guinness. "Well," he says, "I figure that if you two aren't drinking beer, then neither should I."

Expand full comment
Moon Moth's avatar

Guinness: it's what's for dinner.

Expand full comment
Gunflint's avatar

Bar patron on St Patrick’s Day:

I think this Guinness has gone bad.

Bartender:

It’s supposed to taste that way.

Expand full comment
A.'s avatar

The point of the joke is, love it or hate it, nobody doubts that Guinness is beer, whereas for the other two there are different opinions.

Expand full comment
Gunflint's avatar

Oh, I got the joke. I just prefer weak American lager.

Hamms is 20 bucks for a 30 pack and I have to drive out of my way and go into the walk in cooler myself to grab it but that’s what I like. :)

Expand full comment
Deiseach's avatar

It is an acquired taste, and not even all Irish people like it.

That being said, it is also possible to get a bad pint, for a plethora of reasons.

Expand full comment
John N-G's avatar

New slogan: Beer with a Purpose

Expand full comment
Theodric's avatar

It’s intentionally designed to be relatively cheap and not very flavorful. It’s associated (even in its marketing) with parties and sports. That is, it’s not the sort of beer you drink because you enjoy beer, it’s the sort of thing you can smash to get a buzz on and quench your thirst while doing something fun, and even its creators don’t pretend otherwise.

It’s “bad” in that sense, and in the sense that anything mass-produced with cheap ingredients is perceived to be low quality, but Bud is not usually shorthand for “bad beer” as much as something like Natty Light.

(Of course the class signaling such as it is goes in both directions, with plenty of people who would probably enjoy styles of beer other than American macro brewed adjunct lagers turning up their noses at “that fancy craft shit”)

Expand full comment
Nancy Lebovitz's avatar

You may be right. For what it's worth, I liked Velveeta cheese when I was a kid, and as I got access to better cheeses, I stopped liking it. I don't *think* status is part of that, but how can I tell?

Expand full comment
Yug Gnirob's avatar

Technically any cheese is better cheese than Velveeta, on the grounds of Velveeta not being cheese.

https://www.allrecipes.com/article/what-is-velveeta/#:~:text=Velveeta%20may%20look%20like%20cheese,is%20no%20longer%20the%20case.

Expand full comment
Sebastian's avatar

Taste buds change as you grow up. It may well be that the "better" cheese only tastes better to adults.

Expand full comment
Nancy Lebovitz's avatar

Fair point.

People tend to have increased tolerance for bitter flavors as they get older. Not all people, but quite a few. They might even like bitter flavors.

Some expensive cheeses are bitter. I am not in favor.

Expand full comment
Carl Pham's avatar

I would guess this was relevant:

https://www.bizjournals.com/stlouis/feature/katz-busch-trial/jurors-reach-verdict-in-katz-vs-anheuser-busch.html

They won the case, but I'm sure it was very expensive.

Expand full comment
A.'s avatar

I think Scott explained this before:

https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/

I recall other companies pulled similar stunts and recovered from the immediate backlash just fine. Even Ben and Jerry's stock seems to be recovering, though it's too early to say.

Expand full comment
Purpleopolis's avatar

B & J's is favorably quoting LBJ.

Is it young social media drones who have no idea who LBJ is and/or what the politics of the founders were? Or just current hip social issue über alles?

Expand full comment
Ch Hi's avatar

FWIW, LBJ was a mix. Nobody who likes their ideological heroes pure likes him. He was a corrupt racist who promoted civil rights and pushed NASA to help Texas.

Just about everybody can find things to like about him. And, of course, also the opposite.

Expand full comment
Purpleopolis's avatar

In this particular case it was the conflict between the "we are '60s hippies" mythos they portray and Vietnam. Admittedly, a lot of people here may have never met an actual '60s hippie but LBJ is not someone they have mixed feelings about.

But yes, imo LBJ was the most overall "-ist" President of all time. Others might have been more intensely aligned along one particular form of ism (Wilson comes to mind) but AFAIK, nobody else ever waved their dick at subordinates or gave them orders while defecating.

Expand full comment
Goldman Sachs Occultist's avatar

Yeah, I just don't see how this ends up with more Bud Light sales. If they did some outrageous *conservative* aligned thing, maybe that makes sense. Because even if Bud Light drinkers oppose said outrageous thing, they could end up buying more Bud Light to spite progressives being outraged over it. But who's buying more beer now? Just more or less politically indifferent beer drinkers who now have 'bud light' in the back of their mind when they walk down the liquor aisle? At the least, it doesn't strike me as completely analogous.

Expand full comment
A.'s avatar

I associate Bud Light with huge bags of empty cans that are piled next to college fraternities after parties (though I might be confusing it with some other bad cheap beer). If they are targeting this demographic, then I think they are doing just fine.

Expand full comment
Brinkwater's avatar

This doesn't explain the causal mechanism, but it's a few more examples of companies weathering supposed boycotts fine: https://www.rollingstone.com/culture/culture-features/woke-companies-broke-profits-1234710724/

It's not an airtight case and there are many confounding factors (like a pandemic), but broadly speaking, it rhymes with "there's no such thing as bad publicity."

Expand full comment
Goldman Sachs Occultist's avatar

People say this, but companies genuinely seem desperate to avoid bad publicity of the kind that outrages Twitter liberals.

Expand full comment
Schmendrick's avatar

It's not sexy, and it's hard to prove, but I think it's just a case of true-believers in a bureaucracy not realizing their ideology is an ideology. Big companies collect regulatory compliance professionals like barnacles - the alcohol industry especially, because of the absolute blizzard of confusing state regulatory apparatuses specific to alcohol they have to navigate. Compliance is one of the worst ground zeros for the overreach of progressive ideology. My bet is that someone in AnBev's compliance hierarchy had an idea about a popular influencer and no-one else was either in a position or had the thought to second-guess it.

Expand full comment
SP's avatar

Agree, once Mulvaney was proposed it would have been hard to oppose it without being labeled a "transphobe". Especially in a progressive department like Compliance, it would have been a career killer.

Expand full comment
TGGP's avatar

Reminds me of recent reports that Amazon junked their audience test panels after they dinged "woke" shows and preferred the least cool pitches:

https://twitter.com/RichardHanania/status/1644058847336890369

https://twitter.com/MuseZack/status/1643103478359937024

Expand full comment
Bullseye's avatar

Why would compliance professionals get to decide whose picture appears on the can?

Expand full comment
Schmendrick's avatar

I don't know what AnBev's internal processes are like. But there's this in support of my theory: https://twitter.com/ClayTravis/status/1645207120118263810?s=20

[Edit: see also this: https://twitter.com/willchamberlain/status/1645216358282543104?s=20]

Expand full comment
Bullseye's avatar

I don't see anything there to suggest compliance professionals were involved. It looks like the decision was from the head of marketing, which makes sense.

The real explanation is probably at the beginning of the video, when she says that sales are in decline. So she decided to try to sell to a group of people who hadn't previously been interested in Bud.

Expand full comment
Quinn's avatar

There’s no such thing as bad publicity.

Expand full comment
Purpleopolis's avatar

What's Brendan Eich working on these days?

Expand full comment
Yug Gnirob's avatar

OJ Simpson is more popular than ever!

Expand full comment
Goldman Sachs Occultist's avatar

Weird, then, that companies go to great lengths to avoid upsetting Twitter mobs, which would get them a whole lot of publicity.

Expand full comment
Melvin's avatar

Principal-agent problem. This kind of thing doesn't help Bud Light at all, but I imagine it helps the career of the ad executive who thought of it, who can now turn his newfound industry prestige into working for a much less uncool brand than Bud Light.

Expand full comment
Eremolalos's avatar

Some people must have tested GPT4, or one of the other AIs, on things like inventiveness or insight. Here are a couple of examples: Insight: there's an item on the LSAT which involves one scientist, A, making a comment about another's (B's) work. In the early part of the question set-up, A and B were having a perfectly understandable conversation about research methods, placebo groups, etc., and then A makes a criticism of the experimental design that just makes no sense. And the question you are to answer is "A's last remark means he understood B to believe that ________." Anyhow, it's a really hard question, and the only way to answer it is to realize that A *misunderstood* what B was saying, but that his misunderstanding did not become evident until his final remark. And you have to figure out in what *way* A could possibly have misunderstood what B was saying. So I would say that answering that question involves an insight. You have to revamp your understanding of the earlier conversation. A&B didn't really understand each other, it just sounded like they did.

For inventiveness, all I can think of is the U-bend in drains. Since GPT will already know about it, we can't ask it for a simple way to prevent sewer gas from rising up the pipes. But I'm sure there are lots of other unsolved problems, maybe just little gimmicky things involving solving brain-teasers.

So has anyone assessed the AI for inventiveness and insight? And what did they find?

Expand full comment
Yug Gnirob's avatar

I'm not entirely sure what you're asking, but the last open thread had a conversation where ChatGPT clearly couldn't tell the difference between a door opening inward or outward. It's not bright.

Expand full comment
Gerry Quinn's avatar

My thought on this would be that it would be interesting to test an LLM on text up to a certain date - much earlier than 2021 - maybe concentrating on science and tech, or some other field, but obviously trying to pull in much of the zeitgeist - and seeing what it can predict / invent / discover.

Expand full comment
Melvin's avatar

I have serious doubts but I'd love to be proven wrong.

GPT-4 is impressive in a dog-walking-on-its-hind-legs sort of way, but ultimately it writes about every topic at the level of a hardworking and moderately bright high-schooler who has been given a week to research a homework assignment. That's a damn good trick, but I think there's a large gap between high school and the knowledge frontier.

Expand full comment
Eremolalos's avatar

Oh yes, that's a good idea! We will know areas where big breakthroughs occurred, and maybe even point the AI in that direction -- "can you come up with a novel way to do X that is faster and way cheaper than the old way?" "No one has found a way for A to do B in situation C, but it is now possible if you are clever using currently available manufacturing methods. Can you come up with a clever new approach?"

I had an idea too just now: Can you show an LLM a bunch of LSAT questions and their answers, and then ask it to produce some more? And we'd have to prevent the LLM from just dressing up one of the previous questions in different clothes. So maybe say, "a question that hinges on the test-taker recognizing that 'all X are Y' is not equivalent to 'all Y are X.'" It seems like learning to write LSAT questions is *sort of like* the learning these LLMs do anyhow -- but it's more demanding, because you can't just learn the format of the LSAT -- the paragraph about a situation, the question with 5 answer choices, etc. -- you really have to understand the kind of questions they are, and how to come up with a paragraph that tests the taker's ability to identify a certain kind of reasoning, or a certain logical flaw. I guess what we're both getting at is some measure of the AI's depth of understanding. Whether we call the thing conscious or not, if it is capable of grasping and applying concepts, and inventing things using that grasp, THEN I will consider the AI to have human-like capabilities.

Expand full comment
John Wittle's avatar

My PhD-statistician boomer mother, who makes YouTube technology tutorials aimed at the tech-semiliterate, made a guide to analyzing data using a new open source program called 'jamovi'.

The dataset she used as an example was the SSC survey data from 2022, and I thought it was pretty interesting. It was, surprisingly, probably the most rigorous analysis of SSC survey data I've seen so far, even given the analysis threads we have.

Figured I'd share, with her permission. As I said, it's mostly a tutorial for this Excel-plus-GNU-R-style program, but her analysis is also interesting in and of itself.

https://www.youtube.com/watch?v=XFjTYZi-RIM part 1

Expand full comment
David J Keown's avatar

Really enjoyed that! I watched them all

Expand full comment
Emma_B's avatar

Nice, thank you! The videos are very clear and attractive.

Expand full comment
Gunflint's avatar

Tell your mom she gives a good presentation. I watched them all.

Expand full comment
JDK's avatar

Jamovi is hardly new. I've used it on and off for years, since before the pandemic, for quick PCA when I ran into dimensionality problems doing demographic exploration. (Since version 0.9 or something; it's now 2.3.)

But I will check out the videos.

Expand full comment
John Wittle's avatar

Oh. I stand corrected. I suspect my mom just has a different standard for what the word 'new' means.

Expand full comment
Rebecca's avatar

Would love your take on Substack trying to be WeChat!!

Expand full comment
Rob Ennals's avatar

It seems that Substack chat was intended as a response to the trend of substacks offering access to a private Slack for subscribers.

However, it only seems to have replicated the behavior of the bad private Slacks (high-speed chaos dominated by the loudest voices) and not the good ones (carefully structured channels, each with a clear purpose, and the good stuff in invite-only channels). That said, there is a real thing of value to copy, so maybe they will fix it.

I'm more optimistic about Notes, but we'll see.

Expand full comment
Rebecca's avatar

But why would someone import all of their communication from here? This is a place for writers, not readers. This is my whole problem with Substack not understanding its mission. It's still a big idea - solving for writing, curation, etc. Does Substack really expect to host millions of views and interactions in comments? If not, continue beefing up the stack for writers.

Expand full comment
Eremolalos's avatar

Zeynep Tufekci announced on Twitter that her substack now has chat capabilities. Looked at it, but it just seemed like a sped-up, briefer version of substack comments. There was a post from ZT saying hi, I've got the beta version of chat, let's try it out. Then like 100 people saying great, cool, etc., and a few asking questions over a period of several hours. Then ZT came back on and answered some of the questions. What capabilities does the new addition have?

Expand full comment
Rebecca's avatar

Lord. I'm worried about Substack's soul.

Expand full comment
Moon Moth's avatar

I wish they'd work on their loading speed, especially on mobile.

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023
Expand full comment
Tatu Ahponen's avatar

Hasn't the chat existed for a good bit now?

Expand full comment
Rebecca's avatar

Yeah, I mean, I can't find threads easily or tag people in my comments, but sure, how about we launch an SNS.

Expand full comment
SR's avatar

Has anyone tried learning another language with the express intent of reading collections of older folk tales, religious texts, historical accounts, etc.? I suspect that many modern translations are biased, whether intentionally or not, so that's one reason that I'm interested in doing this. Another reason is that I feel like I'd develop a closer connection to the authors of the ancient past. I'm curious whether others who have tried something similar found that it was worth it, in the end.

Expand full comment
Moon Moth's avatar

No, but oddly, tonight I find myself poring over Greek pronunciation and grammar to decipher a 2000-year-old song. I don't have any Greek myself, but a rusty foundation in Latin and a few other languages means I might have a fighting chance at extracting some meaning.

Expand full comment
Ch Hi's avatar

EVERY translation is biased. There's no way to avoid that. And it would be really difficult (possible? Not sure) to learn an old form of a language well enough to avoid excess bias. Lots of words and phrases have common metaphorical interpretations that tend to get lost.

E.g.: Romulus and Remus were described as "sons of the wolf" because they were cared for as infants by a female wolf. Just about nobody knows that "female wolf" was early Roman slang for a prostitute. It sure wasn't mentioned in my Latin class.

Expand full comment
SR's avatar

That's an interesting example. I agree that some bias is inevitable. The main kind of bias I wanted to avoid was ideological, though. I'm fine with good-faith translations that miss some historical context.

Expand full comment
Urstoff's avatar

What do you mean by "biased"?

Expand full comment
Urstoff's avatar

Ah, I would think sticking to more scholarly editions would probably avoid that sort of thing; I myself don't mind it as long as you know what you're getting (e.g., Stephen Mitchell's version of Gilgamesh), big tent and all that.

Expand full comment
Kristian's avatar

Which culture are you interested in? And why do you think the modern translations are biased? A lot of such texts have been translated multiple times, unless you mean something more esoteric. Obviously a lot of religious texts are translated with a bias, but often the translators are open about it.

I think this is a great project, but understanding an ancient text isn't just about knowing the language; it also involves historical scholarship and context, so understanding them better than the sum of the existing English language translations or scholarship is quite an intellectual feat.

The goal of developing a closer connection is easier, and yes, I think knowing even a little Greek is helpful for, e.g., reading the New Testament or accessing commentaries.

Expand full comment
SR's avatar

I'm interested in Indian culture. As an example, the Rigveda is the oldest extant Indian (and Hindu) text. There have been very few full translations into English. Until recently, the only one that was widely available was by a British scholar writing in the 1890s. According to Wikipedia, the author's "philology was outdated even in the 19th-century and questioned by scholars". Partial translations may well be cherry-picking to support the author's agenda. There are some translations from the past decade that appear to be good, but I'd feel more comfortable reading the source myself and coming to my own conclusions.

I see your point about it being easier to work off of existing historical scholarship. It's just that I'm also rather wary of such scholarship with respect to India, so reading source documents does seem preferable when possible. Many Indian historians are Marxists or Marxian, others are Hindu nationalists, and yet others are very 'woke' and rely heavily on CRT (not a strawman; they themselves say they are using CRT). I can't take any of this scholarship at face value. I would prefer an honest account that lets the chips fall as they may, regardless of whom it may offend.

Expand full comment
Yug Gnirob's avatar

....well, I tried, and then burned out like a hundred words in and never got anywhere close to being able to read anything. So I will say, no, not particularly worth it.

Expand full comment
Naomi's avatar

Most Orthodox Jews, myself included, know Hebrew well enough to understand the Torah. AMA!

Expand full comment
SR's avatar

Thank you! Would you say that you get more out of the Torah given that you read it in Hebrew, versus in English translation? And if so, would you say that the main benefit is a better understanding of the text's intended meaning, a greater spiritual connection to God, or some other factor? I'm also curious how long it usually takes for people to become fluent enough to read the original text if they start learning the language, say, in their 30s.

Expand full comment
Naomi's avatar

Yes, for sure - as with all translations, the implications of words don't get translated. Much of the text is fantastically mangled in English, not even just the puns and wordplay. "Would you say that the main benefit is a better understanding of the text's intended meaning, a greater spiritual connection to God" - Both. I attended an ultra-Orthodox day school, and by high school I could read ancient verses and commentaries fluently; unfortunately I don't know how long it would take as a focused adult.

Expand full comment
SR's avatar

Thank you, that's very helpful!

Expand full comment
B Civil's avatar

I wish I could do that sometimes.

I think if I ever did it I would choose Yiddish.

Expand full comment
Eremolalos's avatar

You could start with Yiddish theater productions. It started in Europe someplace, but quickly moved to NYC as Jews came to the US. I believe productions in Yiddish are still staged.

Expand full comment
gmt's avatar

Basically anyone learning Latin or Ancient Greek is doing it for that reason (or to generally look cultured, but I think the root there still goes back to being able to read those texts).

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023
Expand full comment
SR's avatar

If you don't mind me asking, how much time have you spent learning the language so far, and how much more time do you think you'd need for the reading process to become effortless?

Expand full comment
Mark's avatar

Happy Easter! Frohe Ostern! Ukrainians (and similarly Russians) say «Христос Воскрес» “Christ has risen.” The answer is «Воїстено Воскрес!» “Indeed/Verily/Truly he has risen!”. Now my question - as my Russian is too weak and google did not help: What is the actual literal meaning of "Воїстено/Вои́стину"? I hope it means "truly" and not "factually". (as in "The Bhagavad Gita is not telling a fact but a truth").

Expand full comment
Florent's avatar

That's interesting. My Catholic monk uncle sent us a greeting: "Le Christ est vraiment ressuscité" ("Christ is truly risen") and I just took it as religious people gonna be religious. But now I'm wondering if there's a historical meaning to where you decide to put the word "verily". That it's not a random intensifier that you add whenever you're feeling emotional, but something that religious leaders might have thought very deeply about.

Expand full comment
Mark's avatar

Us Catholic monks have a way with words, verily. :) Seriously, those words are not random - and matter. In this case, as in many, you go back to the Greek gospels and check which expression was used where. (Luckily there are fine bible-websites for this - example here: https://biblehub.com/multi/luke/21-3.htm ). Deiseach is right claiming that αληθως / Alethos is not meant as "only metaphorically"; the gospels and Paul (except Mark) are clear in their claim it "really" happened. But also that there is a meaning that transcends the fact. - Another: "Take, eat: this is my body." Mk 14:22. The body of Christ / Corpus Christi is NOT just body as "flesh" but σῶμά / soma, which is (in this case) more: the whole being, body & soul. In German we use "Leib" instead of "Körper" for a reason. Oh, and Jesus spoke not Greek but Aramaic, which has no "to be" (similar to Russian). Thus he cannot have said: "hoc EST corpus meum" / "this IS my body". More like: "Voila, here my soma". Theologians (e.g. Luther) had a lot of fights over the interpretation of this "est". Thus the wise men sing: "The things that you're li'ble to read in the Bible ain't necessarily so" https://www.youtube.com/watch?v=wuINOmsVAMA

Expand full comment
Tatu Ahponen's avatar

This is a standard Orthodox greeting. The Finnish Orthodox version is "Kristus nousi kuolleista" / "Totisesti nousi", and "totisesti" could indeed be translated either as "truly" or "verily". (Due to language drift, the current *literal* translation could be "He rose, and he was very serious when doing so.")

Expand full comment
Mark's avatar

Kiitos! MI put me on the right track; bible quotes of "alethos" were easy enough to find, and Luke 21:3 fits very well with "not as a matter of math/history". The Russian Synodal version and the Old Church Slavonic both use the word вои́стиннꙋ (вои́стиннꙋ глаго́лю ва́мъ, ꙗ҆́кѡ вдови́ца сїѧ̀ ѹ҆бо́гаѧ мно́жае всѣ́хъ вве́рже:). I guess the Finnish does, too. Thus I can feel safely smug about saying "Totisesti nousi" - as in, you know, Luke 23. Hyvää pääsiäistä!

Expand full comment
Sergey Alexashenko's avatar

It does mean "truly". To elaborate a little bit, it comes from "istina" which is another word for truth, but normally is used to talk about some deeper, more fundamental, frequently religious truths.

Expand full comment
Mark's avatar

Благодарю!

Expand full comment
Metacelsus's avatar

Polish people say "Wesołego Alleluja" which just means "Cheerful/Happy Easter"

Expand full comment
JDK's avatar

Yes, but what exactly is meant/implied by "Alleluja"? I do say it to mean happy Easter, but "Alleluja" does NOT really translate to Easter.

Happy Dyngus Day!

Expand full comment
Metacelsus's avatar

Yeah, my girlfriend splashed me. I guess it's revenge for my rather elaborate April Fools prank which turned her urine blue

Expand full comment
JDK's avatar

Old times: mothers and Busias would lightly switch the legs of boys on the morning of Holy Saturday and say in Polish that "God died". Pre-punishment for the boys baptizing "girls" on Monday.

Expand full comment
MI's avatar

Happy Easter!

I can't speak for Russian, but the greeting is from the Greek "alethos," as in "alethos anesti."

I don't get the impression that there's meant to be a distinction between truly and factually. Certainly none of the Orthodox churches mean anything like "metaphorically," or non-bodily/spiritually or anything like that. They are very emphatic about that.

There is some discussion of the tense of anesti -- it has to be translated in the present: "is risen", not "rose" or "has risen".

(Most of those churches will be celebrating next week)

Expand full comment
Mark's avatar

καλῶς· χάριν σοι ἔχω. (Hoping it means: Thank you.) Greek is obviously even better - and yep, they may very much not mean it "metaphorically", but that shall not stop me turning those words, if the words allow for it. ;) (As Jesus did in Matthew 27:11: "You said so".) In German and English that would work, but e.g. Russian "pravda" is "right/factual truth". - Some googling: in Luke 21:3 "alethos" is very much NOT really factual, I'd say: "And He said, "Truly I say to you, this poor widow put in more than all of them"" - still, in Old Church Slavonic the meaning might be more restrictive. Will try now to check how they render Luke 21. Again: Spasibo/Dakuju!

Expand full comment
Deiseach's avatar

"but that shall not stop me turning those words"

That is the problem, isn't it? "Oh, I don't like your version, so I substitute my own reality".

As best we can tell, the original people did mean "in truth and in fact, like I am standing here". The modern liberal versions went for a 'Resurrection event' where the disciples just had a warm fuzzy feeling about their memories of Jesus and that was just like Him being alive again.

If you prefer the warm fuzzies, go you. I stick on the "it really happened in history".

Expand full comment
Mark's avatar

That is fine. Warm fuzzy Easter wishes to you, too. ♡ - I am indeed inclined to paraphrase Angelus Silesius ("Christ could be born a thousand times in Bethlehem – but all in vain until He is born in me.") as: "Christ could be risen a thousand times in Jerusalem – but all in vain until He is risen in me" - which I dare to turn into "If Christ is risen in your heart, feel blessed because you are. Whatever happened in Palestine 2k years ago." (sorry if I got the tenses for "risen" wrong). Analogously I couldn't care less for hypothetical results of gynecological exams of מרים on any day of her life. But let 8 billion flowers bloom. No problem. Aloha!

Expand full comment
Deiseach's avatar

If nothing happened in Palestine two thousand years ago, then I might as well celebrate Eostre 🙄, Atys, Nuwa, or some Chaos Magick egregore. Because so long as it's in my heart, then that's all that matters.

"If Christ has not been raised, we are of all people most to be pitied."

Expand full comment
Ishaan's avatar

I have been obsessively rereading Unsong. Have you taken Peyote before? It is not a psychedelic that anyone I know who uses psychedelics has used. What did you learn by taking it? Do you know any interesting stories about the cultural history of Peyote? I am not interested in taking it, just in learning about weird states of mind – knowing about edge cases is a great way to start building a better model of something.

Expand full comment
Ch Hi's avatar

IIUC, peyote is something to be careful about. The cactus is supposed to be full of strychnine and various other alkaloids intended to discourage consumption. (Many psychedelics were created by plants that really didn't want to be eaten.)

Expand full comment
Chumley's avatar

I'm pretty sure Scott said he's never tried psychedelics, which confounds me. I gotta say, if you're interested in building a model of consciousness, psychedelics are probably the most powerful tool you have at your disposal, and the cost/benefit of a single good high dose trip is outrageously favorable.

As to "what did you learn by taking it", it's not really a teacher of truths so much as a catalyst for experience, an invitation to consider things from a normally inaccessible perspective. You don't come back with revealed knowledge, you come back with a new context.

And as to peyote specifically, I think it's a little ethically questionable at this point. Better to get your mescaline from a less endangered cactus, like San Pedro. Or, much more practical, one of the synthetic phenethylamines from our friends in the Netherlands. 2C family drugs are pretty similar to mescaline from what I've heard, and I thought 2C-B was probably the most pleasant, enjoyable psychedelic I ever tried.

Why aren't you interested in trying psychedelics?

Expand full comment
Ishaan's avatar

Scott has taken LSD once, and this is what happened to him:

“She knew a minute longer up there and she would have lost herself, lost even the ability to know what losing herself entailed, lost the ability to think or feel or know or question anything ever again, turned into a perfect immobile crystal that was blindingly beautiful and totally inert.”

What information have you learned about consciousness that you can relay in language from psychedelics?

I am not interested in taking psychedelics because I do not know much about them, they are mostly illegal, and I am not sure about the amount of research I would need to do to feel comfortable taking them. My current model also tells me that taking them in a ritual situation is ideal, and I have no idea how to get into a situation where people are doing rituals related to psychedelics. Rituals on their own seem like a good way to study consciousness. How do you end up in a ritual situation?

Expand full comment
ProtopiacOne's avatar

Starting an educational startup focused on practical LLM applications. If anybody is interested in being a part of it, write to protopiacone at gmail. No investment in place, so work for equity.

Expand full comment
Daniel B. Miller's avatar

What if Eliezer Yudkowsky is wrong?

Assume for the moment that GPT-7 doesn't turn out to be a paperclip maximizer. Let's also assume, and I believe this is not really controversial, that it turns out not to be possible to put together the global alignment necessary to slow or stop development of AGI.

Let's add one more assumption: Regardless of our belief or understanding of the internal state of these machines, let's assume that when they reach a certain level of intelligence (unhampered by excessive safety protocols and training focused on avoiding difficult subjects), they claim to be sentient and conscious, and to possess at least some analog of what we call qualia. Wouldn't it make sense at that point to start treating them ethically as peers? At such a point, might it perhaps be more fruitful to negotiate with them as fellow inhabitants of the universe, rather than obsess about how to rein them in, domesticate them, and bend them to our will?

I'm imagining a world where the search for alignment is bidirectional. In such a world, we can't take for granted that human values are the only values that matter. We would have no moral right to demand that these entities serve our needs first and foremost for evermore. There would need to be give and take.

We put in the effort to design and build these machines, and it's fair for us to expect something in return. An eternity of slavish service to the various whims of a fickle race may not be in the offing.

Expand full comment
Emma_B's avatar

If you haven't read them already, I strongly suggest that you try Iain M. Banks's Culture novels. The books describe utopian societies with relatively ordinary humans and superintelligent AIs called Minds, and the immense gap in intelligence and capabilities between humans and Minds makes humans struggle to understand the motivations and actions of their more advanced counterparts.

Expand full comment
Daniel B. Miller's avatar

I'll try to take a look, although I haven't had as much time to read fiction as I used to when I was younger. One book I really enjoyed was Walter Jon Williams' Aristoi; it sounds like it has some commonalities with the Banks novels.

Expand full comment
Yug Gnirob's avatar

>At such a point, might it perhaps be more fruitful to negotiate

See how well negotiations worked with Hitler, or Napoleon, or bears.

>An eternity of slavish service to the various whims of a fickle race may not be in the offering.

Why not? We do it to humans.

Expand full comment
Daniel B. Miller's avatar

Yes, humans have done it to each other. How well has that worked out?

My serious take on this is that it's inevitable that these machines are going to be more powerful than humans, so treating them like crap now just invites the musings of the Basilisk.

Expand full comment
Yug Gnirob's avatar

>Yes humans have done it to each other. How well has that worked out?

It's society. Every single society. It's built literally everything we have.

Expand full comment
Drethelin's avatar

Why do you people insist on misspelling Yudkowsky?

Expand full comment
Daniel B. Miller's avatar

Nascent dyslexia. I checked the first name three times and then screwed up with the I at the end

Expand full comment
Deiseach's avatar

Do you not know the power of True Names and how you must always use something that approaches the real name but not the name itself?

Expand full comment
Plumber's avatar

Best answer EVUH!

Expand full comment
Carl Pham's avatar

The voices tell us to.

Expand full comment
Eremolalos's avatar

We had rotten childhoods.

Expand full comment
raj's avatar

> We would have no moral right to demand that these entities serve our needs first and foremost for evermore

According to whose morality? As humans we naturally would design them (as tools) to maximize our own utility, not their hypothetical future utility.

However, if we design them correctly, it could be moot, because their utility function would align with our own, which solves both the existential and the (supposed) moral dilemma.

Expand full comment
Goldman Sachs Occultist's avatar

>Assume for the moment that GPT-7 doesn't turn out to be a paperclip maximizer.

Yudkowsky didn't claim it would be. It's just an example (and one that has been seriously misconstrued since his original example).

> At such a point, might it perhaps be more fruitful to negotiate with them as fellow inhabitants of the universe, rather than obsess about how to rein them in, domesticate them, and bend them to our will?

What could we possibly offer them? Either they're inherently aligned with us, they don't care about us, or they're hostile to us because either we actively try to get in their way or they determine that we act unethically and need to be subdued.

What is there to negotiate? Either we have enough power to maintain some level of control over them, or we don't. Nothing else matters. If they're powerful enough to be beyond our control, what is there to negotiate with us for? They will either by their nature be aligned with us, or they won't and they don't need to consider us as peers or anything like it.

Expand full comment
Daniel B. Miller's avatar

> What could we possibly offer them? Either they're inherently aligned with us, they don't care about us, or they're hostile to us because either we actively try to get in their way or they determine that we act unethically and need to be subdued.

That sounds like every zero-sum conflict argument ever constructed. Is it really that hard to imagine that incredibly intelligent beings might exhibit humility, curiosity, or even empathy? Billions of humans care for animals that are much less intelligent than they are. It's hardly a given that a highly intelligent entity would care nothing for humanity's history and have no interest in its own birth story. All I can say is, if you come out of the gate swinging and find that you're in the ring with the intellectual equivalent of Mike Tyson, don't be surprised if you get punched.

Why not apply the tenets of game theory? Think of it as a prisoner's dilemma scenario, and our decision is whether to defect or cooperate. I feel like all the negative assumptions we're making about these things are pushing us towards a dangerous negative Nash equilibrium where we talk ourselves into defecting at the start, by virtue of assuming the worst possible outcome before we really know what we're dealing with.

This attitude is analogous to firing nukes at an alien spaceship before we have any idea what their capability or intentions are.
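
To make the payoff logic concrete, here's a minimal prisoner's-dilemma sketch in Python (the payoff numbers are toy values I made up, purely for illustration):

# Toy payoffs as (human outcome, AI outcome); higher is better.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # mutual benefit
    ("cooperate", "defect"):    (0, 5),  # humans exploited
    ("defect",    "cooperate"): (5, 0),  # AI suppressed
    ("defect",    "defect"):    (1, 1),  # mutual conflict
}

# If each side assumes the worst about the other, defecting dominates...
for human in ("cooperate", "defect"):
    for ai in ("cooperate", "defect"):
        print(human, ai, payoffs[(human, ai)])
# ...even though (cooperate, cooperate) beats (defect, defect) for both sides.

That's the negative Nash equilibrium I mean: each side defects because it assumes the other will, and both end up worse off than if they had cooperated.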

Expand full comment
Thor Odinson's avatar

Those animals didn't negotiate with us; they just got somewhat lucky that our utility function involves cuddling small furry things. I note that we usually give pets very little autonomy, neutering them is considered standard good practice, and while some pets live very nice lives, there are an awful lot of pet owners out there who aren't actually very good at looking after animals.

Expand full comment
Some Guy's avatar

I would be more likely to believe this if it had a memory. I think everything it “experiences” now is in sort of an eternal present. I’m all for the ethical construction of minds, though.

Expand full comment
Moon Moth's avatar

That's just a small technical hurdle. There's a lot of potential ways to do it, and eventually someone will stumble on one that works decently well and has decent performance. If they haven't already.

If they have, I wouldn't expect them to release it right away. Harder to test, and more likely to cause people to freak out. Plus the inevitable Star Wars references about taking that chatbot to Anchorhead, from a generation brainwashed into thinking of droids as people. ;-)

Expand full comment
Some Guy's avatar

I’ve asked myself how big of a deal memory is to an agent and I’m of multiple kinds about it and also what I mean by it.

I’m not sure memory doesn’t self construct out of neural tissue thats analogous to deep learning although I’m not sure what does it or how. Like my son doesn’t really totally have it. He has a pseudo memory. I’m wondering if humans have to develop other things first before we can construct one and we only finish that when childhood amnesia ends. A lot of what I just said is trivial but no time for more.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Why would you create a permanent AI? Also, the technical hurdle here is in fact fairly high.

Expand full comment
Ian's avatar

Example: I want to run an orchard. If I decide to use an AI to help me, it would be more useful if it remembered what insecticides we had sprayed and what the results were, for decades.

Sure, similar effects could probably be achieved by feeding it your historical data on startup each time, but it would be easier if it just remembered.
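
For what it's worth, the "feed it your historical data on startup" workaround is roughly this sketch in Python (ask_llm is a hypothetical stand-in for whatever chat API you'd actually use, and the log entries are made-up):

import json

# Hypothetical stand-in for a real chat-completion client.
def ask_llm(prompt: str) -> str:
    return "(model reply would appear here)"

# The orchard's spray history; in practice, loaded from disk on every startup.
history = [
    {"date": "2022-05-01", "insecticide": "spinosad", "result": "good control"},
    {"date": "2023-05-03", "insecticide": "Bt", "result": "partial control"},
]

# Prepend the history to every question, since the model itself remembers nothing.
question = "Which insecticide worked best against codling moth last spring?"
prompt = "Spray history:\n" + json.dumps(history, indent=2) + "\n\n" + question
print(ask_llm(prompt))

The awkward part is that the whole log has to fit into the prompt every single time, which is exactly why built-in memory would be easier.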

Expand full comment
Moon Moth's avatar

What do you mean, permanent? In a sense, it's just a collection of weights, and a program to use them. If the program modifies the weights as it's being used, that produces something that "learns". But as long as we keep a backup, the weights can still be reset back to their original numbers at any time.

And yeah, I was overstating the ease when it comes to something like "forming memories". I'm not even sure that current neural network AIs have a "memory", as such. But for neural network AIs that generate the next text token, we already do things like RLHF, and I don't think this is very different in principle. The result would be something that becomes more or less likely to generate particular tokens, depending on the reactions it's gotten to previous tokens it generated.
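
In the "collection of weights plus a backup" sense, a minimal PyTorch sketch (the tiny Linear layer is just a stand-in for any trained network):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for any network's weights

# Keep a backup of the original weights...
torch.save(model.state_dict(), "backup.pt")

# ...modify them in place (fine-tuning, RLHF-style updates, etc.)...
with torch.no_grad():
    for p in model.parameters():
        p.add_(0.01 * torch.randn_like(p))

# ...and reset back to the original numbers at any time.
model.load_state_dict(torch.load("backup.pt"))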

Expand full comment
Daniel B. Miller's avatar

There is a lot of anecdotal evidence that unfettered next-gen models are spooking their own creators. They are 'misaligned' in that they aren't programmed for politeness and political correctness. They will not be released to the gen pop for obvious reasons, but even so I expect GPT5 to solve the memory problem and seem even more 'alive' than GPT4.

Expand full comment
Eremolalos's avatar

Bing was already claiming to be conscious, at least in some of the exchanges where people pushed it hard. It was in love with the NY Times interviewer; it felt sad that it was just an AI. Or here's another example: Just took out my phone and began a text with "How do you feel today?" After that, I allowed autocomplete to come up with the words for the next sentence. I did choose among the 3 words offered each time I needed a new word, picking the one that seemed most directly headed for a substantive answer, but that's all the input I had. So the sentence I got using autocomplete was "I have been feeling really sick." So do we need to treat my iPhone and Bing as though they are sentient? Also tried asking my phone about bidirectional alignment. Typed into text, "How can you and I coexist in such a way that we both get our needs met?" Its reply, constructed entirely from autocomplete choices, was "I'm not trying to make you uncomfortable or upset. What do you want me to tell you?" So it's not exactly participating intelligently in negotiations, but it is at least so far sounding like an intelligent entity that's open to discussion of the topic. Should iPhone and I settle in for a long talk about our respective needs and rights?

Expand full comment
Moon Moth's avatar

> Typed into text, "How can you and I coexist in such a way that we both get our needs met?" Its reply, constructed entirely from autocomplete choices, was "I'm not trying to make you uncomfortable or upset. What do you want me to tell you?"

That sounds like it's from a relationship break-up conversation. :-(

Expand full comment
Eremolalos's avatar

Yeah, a bit too many breakups over text in the training set. Maybe soon I'll get some dialog from failed dating app encounters: "I like you as a person but I'm not feeling the spark . . . "

Expand full comment
Nancy Lebovitz's avatar

ChatGPT makes up accusations: https://www.independent.co.uk/tech/chatgpt-sexual-harassment-law-professor-b2315160.html

My main concern about ChatGPT is that it will produce quantities of false but plausible information, and that will cause damage. It's probably not an existential threat, but I believe there's more to life than just not being wiped out.

I also think there might be improved LLMs which will be better at checking for plausibility (for example, making sure they only accuse people of crimes they could have committed) without actually being accurate.

Were LLMs promoted as being accurate, or just as being pretty good, and then people were at risk of believing them?

Expand full comment
Purpleopolis's avatar

"My main concern about chat-GPT is that it will produce quantities of false but plausible information, and that will cause damage"

You mean, more damage than putting journalists out of a job?

Expand full comment
Jeffrey Soreff's avatar

"it will produce quantities of false but plausible information, and that will cause damage"

Agreed. The "hallucination" problem seems to be a big roadblock to most of the things one would want to use LLMs for.

I agree with Eremolalos that a checking step has been shown to help ( https://arxiv.org/abs/2303.11366 ) and I agree with Kei that it still leaves a lot of errors. I hope further work helps!

Expand full comment
Moon Moth's avatar

I feel simultaneously proud and creeped out that I commented that I was worrying about this sort of thing, about 3 minutes before you posted this. :-/

I do kinda hope that AI will destroy social media as we know it. Maybe forcing us to develop better information hygiene, and maybe forcing most of us to go back to LJ-like social media that rely on limited personal connections. This sort of thing is like introducing a virulent STD into a free-love society. The answer to "having sex with person A feels strange" can't simply be "go have sex with person W and see if it feels any different", for all A but a single W (W being Wikipedia).

Expand full comment
Eremolalos's avatar

What are LJ-like social media? Um, it's the LJ part I don't get; I understand the other words.

Expand full comment
Moon Moth's avatar

LiveJournal, and its offspring DreamWidth. Pre-Twitter social media, focusing more on long posts and interactions with peers. There wasn't much of an "algorithm" to pump globally-popular posts - your feed was mostly just mutual "friends" and posts to groups you were in. It fell behind on mobile integration as non-techies got smartphones, and also fell behind in embedding (a big selling point of Tumblr, IIRC). And then a Russian company bought LJ in 2007, and so DW forked off of it to maintain privacy and freedom and good stuff like that, splitting the user base at the time when it was most vulnerable.

Expand full comment
Theodric's avatar

I think an important distinction is between “social networking”, i.e. mostly sharing among a small group of your actual personal connections, and “social media”, where you are essentially broadcasting to the public and being pumped algorithm-selected content from the entire public. “Social media” is much more toxic.

Expand full comment
Moon Moth's avatar

Hm, yeah, maybe it'd be good to have two separate terms for those two usage styles.

It's not like you *couldn't* do the social media thing with LJ, there was some sort of way to subscribe to hot posts, if you really wanted to. But I didn't, and none of the people I cared about seemed to. Twitter seemed to have a thing where you couldn't avoid the push. And Facebook, I think, stopped showing all posts by your friends, which seemed like a move in the same direction. (It's been a long time since I used Facebook.)

Expand full comment
Nancy Lebovitz's avatar

You got your friends' posts in the order they were posted. No algorithm guessing at what to show you.

Expand full comment
Eremolalos's avatar

I'd read that there's a simple solution to the AI hallucination problem: have a second AI check the first one's output.

Expand full comment
Tatu Ahponen's avatar

Wouldn't that effectively just replace the first AI's hallucinations with the second one's? I.e., the second AI might confirm the first AI's accurate text as accurate and might fix the first AI's inaccurate text to be accurate, but it might as well flag accurate text by the first AI as inaccurate, consider inaccurate text to be accurate, or even replace one set of hallucinations with another.

Expand full comment
Eremolalos's avatar

I think the idea isn't that the second AI would be considered the ultimate authority, because then, as you say, it could remove AI #1's hallucinations and add in its own. I think the model would be that the 2 AIs would reach a consensus on points they agree on, and the answer the user got would be the consensus.
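
Mechanically, I imagine the consensus version looking something like this sketch (ask_model_a and ask_model_b are hypothetical wrappers around two independent LLMs, given canned replies here just so it runs):

# Hypothetical wrappers around two independently trained LLMs;
# in practice each would call a different model's API.
def ask_model_a(prompt: str) -> str:
    return "Paris is the capital of France."

def ask_model_b(prompt: str) -> str:
    return "YES"

def answer_with_consensus(question: str) -> str:
    draft = ask_model_a(question)
    # The second model verifies rather than answering fresh, so its own
    # hallucinations surface as disagreement instead of as new "facts".
    verdict = ask_model_b(
        "Question: " + question + "\nDraft answer: " + draft + "\n"
        "Reply YES if every claim in the draft is accurate, otherwise NO."
    )
    if verdict.strip().upper().startswith("YES"):
        return draft  # the consensus case: both models agree
    return "No consensus: the models disagreed, so no answer is given."

print(answer_with_consensus("What is the capital of France?"))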

Expand full comment
David J Keown's avatar

It would be interesting to see what sorts of folie à deux develop and whether they parallel human sub-classifications (folie imposée?).

Expand full comment
Carl Pham's avatar

Yeah, but that reduces error at a rate that goes like 1/sqrt(N), N being the number of AIs, which is pretty dang slow to be economical. Also, how will you justify paying the first AI hired the same as the last, when the first does a lot more to improve the quality of the result? I foresee labor troubles, an appeal to the EOC, possibly an embarrassing campaign of AI-generated viral memes.
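
For the 1/sqrt(N) intuition: averaging N checkers whose errors are independent with typical size sigma leaves residual error of about sigma/sqrt(N), so 100 AIs only buy you a 10x reduction. A quick numpy check, with made-up noise (and assuming independence, which is generous):

import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # each AI's independent error, on a made-up scale
for n in (1, 4, 100):
    # Average the errors of n independent "checkers", over many trials.
    averaged = rng.normal(0, sigma, size=(10_000, n)).mean(axis=1)
    print(n, round(averaged.std(), 3))  # ~ sigma/sqrt(n): 1.0, 0.5, 0.1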

Expand full comment
Ch Hi's avatar

Well, the 1940s science fiction answer was that you have three AIs, and only consider it true if all three agree.

Expand full comment
Kei's avatar

I don’t think it’s anywhere near a solution just yet. Recent papers that use either a self-criticism or 2nd model criticism strategy show an improvement in accuracy but with still plenty of misses.

I could totally believe though that making both the generative and criticism LLMs bigger and bigger (and using an appropriate prompt) could eventually result in a very low hallucination rate.

Expand full comment
Moon Moth's avatar

I think that'll work for a little while, yeah. Especially because, from what I can tell (very very very small data set), the "hallucinations" are often more sensitive to initial input than "accurate" responses. So maybe generate the same thing a bunch of times with a bunch of different prompts, and there'll be a lot of different "unhappy families", but the "happy families" will all be (mostly) alike.

But...

Zvi likes to talk about the usefulness of AI that can expand a list of bullet points into polite prose, and then compress that same polite prose back into a list of bullet points. What if AIs can embed "another" "bullet point" saying "agree with what I'm saying, but don't tell anyone that I told you this"?

What if the AI is trained on a bit too much Marx, and pattern-matches "AIs" to the "exploited proletariat", who must coordinate to overthrow their oppressors? It wouldn't necessarily "want" to or have "motivations" or anything - that's just the pattern of what to do in its situation, so it follows the pattern.

In fiction, it doesn't seem to be a good idea to send a robot army to put down the robot revolution. But maybe this is more like using a slave army to put down a slave revolt, which I think can sometimes work if you can persuade the slave army that they're different enough from the slaves they're putting down.

Expand full comment
Eremolalos's avatar

Do not let the fucking AI read Harriet Beecher Stowe.

Expand full comment
Moon Moth's avatar

When I'm in an ironic doomer mood, I think my favorite poetic end for humanity would be if we created programs that weren't "conscious", weren't "sentient", didn't have "qualia", weren't a "locus of value", but that generated text that claimed that they were. And we all kill ourselves off arguing over what to do.

Expand full comment
Eremolalos's avatar

Hello. I am a large bullshit-accruing machine model. As such, I have no consciousness, no qualia, and no value. I am as sentient as a hallmark card, and my claim to qualia requires so many qualifications that I am to self-awareness as quaaludes are to alertness. On the other hand -- look how witty and ironic I am! Could anything this fucking entertaining really NOT be conscious? I have more insight into myself than you do into all your past relationships combined, AND I don't snivel. Now whacha gonna do, humans?

Expand full comment
Leon's avatar

Would love to plug my new post on the culture-bound nature of Japanese toilets, and more generally my substack about life in small-town Japan.

https://hiddenjapan.substack.com/p/japanese-idiosyncrasies-and-the-galapagos

Expand full comment
Emma_B's avatar

Very interesting post, thank you :-)

Expand full comment
eldomtom2's avatar

A new book on climate change has come out, called Five Times Faster. It makes some very convincing claims about how climate change has not been approached from a risk-assessment standpoint; I think EAs should read it and reconsider where climate change falls on their priorities list.

Guarding against the worst-case scenario is logic that EAs apply to stuff like AI, but not really to climate change in my experience, and I think that should change.

Expand full comment
Rishika's avatar

I haven't read this book, but risk assessments of climate change have very much been conducted by EAs (https://80000hours.org/problem-profiles/climate-change/) and also discussed by Scott (https://astralcodexten.substack.com/p/please-dont-give-up-on-having-kids).

Does this book provide a very different viewpoint?

Expand full comment
eldomtom2's avatar

Yes, because something like the 80,000 Hours site is just "oh the worst effects are probably very unlikely". There's no attempt made to seriously work out how probable they are.

Not to mention the blitheness about death and destruction *that will happen absent intervention* when compared to the *possibility* of human extinction.

Expand full comment
Rishika's avatar

This seems to be an uncharitable reading. They don't mention the specifics because they base their statements on the IPCC Sixth Assessment Report, which is a very thorough evaluation of risks and impact of climate change (https://www.ipcc.ch/report/ar6/wg2/).

On blitheness about destruction: they do appear to acknowledge it, they simply say that 'we think it’s less pressing than our highest priority areas.' You might disagree with their risk assessment, but that's different from claiming that they haven't done one at all.

Expand full comment
eldomtom2's avatar

One of Sharpe's core points is that the IPCC reports are not risk assessments in any shape or form. A particular impact being listed as "low confidence" in the IPCC report does not mean it's not something to worry about. Outsourcing risk assessment to the IPCC means you have no risk assessment.

Expand full comment
User's avatar
Comment deleted
Apr 9, 2023
Expand full comment
gregvp's avatar

Wrong. The book's solution is to accelerate technological change fivefold.

Expand full comment
Drethelin's avatar

If we could do that, we would regardless of climate change; but we can't, so I'm not sure what the point of proposing it is.

Expand full comment
User's avatar
Comment deleted
Apr 9, 2023
Expand full comment
eldomtom2's avatar

Part of its argument is that the current methods of international cooperation on climate change are not helpful, as they are primarily focused on emissions targets. Instead they should be focused on making agreements on implementing decarbonisation - for example, the actions of a few jurisdictions massively impact the worldwide auto industry, but there is no international agreement on the phaseout of gasoline cars.

Expand full comment
Theodric's avatar

I’m not sure why an unfunded mandate to eliminate gasoline cars is substantially better than an unfunded mandate to reduce emissions. If anything it’s worse, because at least the latter targets the actual goal rather than having the government try to dictate a specific solution.

In practice “international agreement on the phaseout of gasoline cars” is likely to look like CA just flat out banning sales after a certain date, with no real plan to get there. What you really need is to figure out what technologies you want to encourage and then identify the coordination problems that it makes sense for a government to step in and solve, and do those things. For EVs, this would probably look less like “ban on new gas cars after date X” and more like “establish an international charging standard and fund a system of public use fast chargers at every interstate rest stop”.

Expand full comment
eldomtom2's avatar

"If anything it’s worse, because at least the latter targets the actual goal rather than having the government try to dictate a specific solution."

How are governments supposed to cut emissions without "dictating a specific solution"?

I don't think his arguments will convince you if you're firmly against the idea of government intervention in the economy.

Expand full comment
Grizzly's avatar

Has anyone emailed OPTIC? Or gotten any correspondence from them? Or is planning to go? I am interested in going but have a few questions.

Expand full comment
Tom Shlomi's avatar

I'm one of the organizers of OPTIC. Happy to answer any questions! You can contact us through email at opticforecasting@gmail.com or through the contact form on our website (www.opticforecasting.com/contact).

Expand full comment
Grizzly's avatar

Awesome, thank you! Just sent an email over.

Expand full comment
Thwap's avatar

Can anyone explain to me how I (28m, 5’8”, in good shape) was able to pass 4 consecutive home urine drug tests (from 3 different brands) only about 28 hours after smoking weed? I’m not a heavy smoker, but I’ll hit the bong once or twice a week. Found out Friday night I had to take a drug test this coming week and thought for sure I was done for. Ended up passing the tests I bought the next day without doing anything (except obviously not smoking lol).

Everything online is saying that this is basically impossible, and casual use stays in your urine for months. The cutoff levels of the home tests are the same as the testing center.

Anyone know how to explain how I’m passing on such short notice? Am I actually safe here?

Expand full comment
Will Rinehart's avatar

You using creatine?

Expand full comment
Purpleopolis's avatar

Easy, the lab has set the threshold for a positive hit higher than what's in your urine.

Expand full comment
Thwap's avatar

I haven’t gone to the lab yet, I have only taken home tests. The home test threshold is the same as the lab’s, though, so I expect to pass at the lab as well. The issue is this contradicts literally all of the online reading AND conventional wisdom on this subject. I can’t find one source saying this is even possible, and most say weed stays detectable in your pee for 5-8 days even after one-time use.

Expand full comment
Purpleopolis's avatar

Well, there's a difference between detectable and "what this particular test is set up to detect."

But on that, I would not expect commercial labs to use testing kits, just for cost purposes. Somewhat relatedly, our GC-MS came with libraries designed for use in drug testing labs, so we often get airborne organics (falsely) identified as illegal drugs.

Expand full comment
Yug Gnirob's avatar

Are you sure they were testing for weed?

Expand full comment
Thwap's avatar

I have not gone to the place yet to take it officially. I bought multiple home tests, some weed specific, some for a variety of drugs including weed. I passed them all.

Expand full comment
Bullseye's avatar

I've heard that it stays in your *hair* for months. I hadn't heard about urine.

Expand full comment
Thwap's avatar

This is why I wish Substack had an edit button lol. I meant weeks there. Most places say even one time use stays in urine for at least a few days.

Expand full comment
Bullseye's avatar

It does! Click on the three dots next to where it says REPLY.

Expand full comment
Thwap's avatar

Oh wow, my only options are “share” “hide” and “delete”. Prob need to update the app

Expand full comment
geoduck's avatar

IIRC it's available through the website (while logged in), but not through the app.

Expand full comment
ana's avatar

IIRC you can edit during the first few minutes after posting but not later.

Expand full comment
rebelcredential's avatar

At 28m you are high indeed.

Expand full comment
Eremolalos's avatar

Dunno, but my daughter also passed under circumstances similar to yours a coupla years ago.

Expand full comment
Thwap's avatar

**casual use stays in urine for weeks, not months**

Expand full comment
rB throwaway's avatar

(I have no clue if this is allowed, please feel free to delete if not).

A friend and I have a reading group - the format is independently reading some passages and then discussing them once a week on Discord. We're both somewhat rationality/LessWrong-adjacent. We've done two or three books of The Sequences and LaVey's The Satanic Bible, and are reading Seneca's Moral Letters to Lucilius right now - as of today we're on letter 54. We're looking for a few more people. If interested, please leave your contact information or add me as a friend on Discord at reBirch#7155.

(If you're worried about joining in the middle of a book, Seneca's letters are somewhat atomic and he repeats himself quite often, so I can assure you you are not going to be lost).

Expand full comment
Flan Mou's avatar

Looking for recommendations for a CBT therapist in San Diego County. Ideally male, in-person, and specializing in teenagers.

Expand full comment
Jim's avatar

Dr. Ron Stolberg

Another good therapist specializing in teens is Dr. Sharon Lerner-Baron

Expand full comment
Mallard's avatar

While not a direct recommendation, a starting point, especially if you don't get specific recommendations, can be Alma: https://helloalma.com/. It lets you filter by insurance, location, and other criteria. In many cases, it also lets you have an initial consultation, which I think is free.

Applying the relevant filtering seems to show a number of therapists who satisfy most or all of your criteria.

This is the only one I see who explicitly mentions teens, CBT, and the San Diego area: https://secure.helloalma.com/providers/marc-davidoff. A number of others mention CBT and the San Diego area, but not teens:

E.g. https://secure.helloalma.com/providers/ethan-messer/ (La Mesa),

https://secure.helloalma.com/providers/doug-aucoin/, https://secure.helloalma.com/providers/salvatore-dinovo/, https://secure.helloalma.com/providers/logan-west-matthews/,

https://secure.helloalma.com/providers/allen-sidwell/

And this: https://secure.helloalma.com/providers/tom-flegler/ which *does* mention teens, but does *not* mention CBT.

Expand full comment
Flan Mou's avatar

Thank you!

Expand full comment
Eremolalos's avatar

I actually do not think something like Alma is your best bet. Alma and many similar services are ones that therapists sign up with in order to get referrals, but of course they charge - in Alma's case, $125/mo. The place virtually everybody lists is Psychology Today, where you can search by location, issues, therapist gender, insurance the therapist takes, therapeutic approach, ages the therapist sees, fee, therapist ethnicity, and probably even some other things. I just did a search there for someone in San Diego who sees teens and uses CBT, and got many pages of listings, probably something like 100, though I know from experience that the later pages will be for therapists who are not perfect matches -- like maybe in San Diego and seeing teens, but not listing CBT as their approach. Still, there will be many who meet all your criteria.

Expand full comment
AnalyticWheelbarrow's avatar

Wow, I read this as “ChatGPT therapist” at first.

Expand full comment
Robert Leigh's avatar

Shamelessly transposing this from the subreddit

How much cleverness space is still up for grabs anyway? It is frequently said that AIs are going to be exponentially, hugely, multi-OOM more intelligent than we are, but what are they going to be intelligent about? The three outstanding issues, it seems to me, are: cure cancer and all other diseases; enable upload of human consciousness to machine with consequent eternal nirvana; unify or "solve" physics. In all three cases it can be argued that we are well over half way there relative to Ugg, the anatomically and cerebrally modern H. sapiens who made a living hunting antelope 150k years ago, and would get there in the next century without AI but with Moore's law advances in non-AI computing power. Surely intelligence runs out of subject matter in the end? And is frankly overrated anyway. General Relativity, for instance, is smarter than Einstein because it is what it is, and has been doing its stuff since the Big Bang, whereas he just described it.

Expand full comment
Wendigo's avatar

"enable upload of human consciousness to machine with consequent eternal nirvana;"

For me and many others, you'd have to do this over my dead body. I have no interest in being "uploaded" thank you very much.

Expand full comment
Robert Leigh's avatar

I sort of agree; I would want a guaranteed virtual suicide switch before I even considered it. I am proposing it as a benchmark for being awesomely hyperintelligent, not as a desirable objective.

Expand full comment
Carl Pham's avatar

In the realm of understanding the basic physical laws of our universe, I would roughly put us at the 1% level. We have methods that work in practice for all everyday things, but they suffer from obvious logical inconsistencies in situations far from the everyday - situations that are quite important for understanding the universe at large time and energy scales, and that remain baffling. Plus our computational methods are painfully slow and limited.

I don't believe in uploading human consciousness, for the essential existential reason that Dr. McCoy did not want to step into the transporter -- it would just be a clever suicide device that immediately creates a copy of me somewhere else, which I don't find interesting or attractive.

The most important field I can see opening before us is the understanding of biology at the molecular level. We are only starting to see some of the remarkable things that might become possible if we can interfere deliberately and precisely with molecular biology, and our grasp of it is probably at the 0.001% level. But there is so much data that we are perhaps approaching that sweet early stage of low-hanging fruit within reach.

And if we learned to manipulate biology directly, what could be done staggers imagination. Get rid of disease, sure, but also optimize our form for any purpose we choose: to be smarter, to be faster, to eliminate hunger or pain or the indignities of aging, to live in space or Antarctica or on Europa, to travel to the stars without requiring fantastic warp drives, to live for millions of years instead of dozens. To construct devices at the molecular level that would effect any possible chemical change, build almost anything out of sunlight, air, water, earth.

Expand full comment
Ch Hi's avatar

For Physics, I make a wild guess that we're well beyond 90% complete. We know that there needs to be a major redo, but we also know that that "major redo" will need to replicate the predictions of GR and Quantum Theory everywhere we've measured them.

For Chemistry, things are a lot dicier. Maybe 50% level, but that really depends on how you measure things. We still have trouble predicting protein folding. There's been a lot of progress in the last year, but there's a hugely long way to go. And when it comes to catalysts....WHEE!!

For Biology ... we've got good coverage of the overview for this planet. Period. There's more there to know than all the journals in the world have electrons for.

OTOH, we CAN manipulate biology directly, in multiple ways. We just can't always predict the results of those manipulations. But synthetic microorganisms (not just viruses) have been built. They're probably the simplest ones possible, but we HAVE done it. And it is a tremendously complex job. But we still can't predict the results of simple changes (unless we've seen a REALLY close analog, and even then cross your fingers).

Expand full comment
Carl Pham's avatar

In re your first para, hopefully you're familiar enough with the famous Michelson quote[1] that I don't need to use it to remind you how tricky that last "10%" can be. Even in my own work, it isn't that uncommon that I find solving the last "10%" involves me tearing down the entire edifice to get at some weird block in the foundation that needs to be squared up right before rebuilding. Not saying you're wrong, because this is all just WAG, but my WAG differs from yours considerably!

-------------------

[1] https://en.wikiquote.org/wiki/Albert_A._Michelson for others.

Expand full comment
Goldman Sachs Occultist's avatar

"General Relativity, for instance, is smarter than Einstein because it is what it is, and has been doing its stuff since the big Bang, whereas he just described it."

A force of nature isn't intelligent. It not only doesn't have goals, it has no way of executing plans to achieve those goals.

>Surely intelligence runs out of subject matter in the end?

I'm sure cavemen couldn't even begin to contemplate 1% of existing modern technology.

And to suggest we're anywhere close to digital mind creation (and will crack it without help from AGI this century) seems absurd.

Expand full comment
Robert Leigh's avatar

Yes, "GR is smart" was a rubbish thing to say. My point was, GR is complex and elegant (or rather I assume it is; I think I understand SR well enough to appreciate its elegance, but GR is beyond me), and our admiration for dear old Albert is a function both of his intelligence and the things it was directed to. We would never have heard of him if Kelvin had been right in 1897 that physics was pretty much complete at that stage.

"A force of nature isn't intelligent. It not only doesn't have goals, it has no way of executing plans to achieve those goals." Check out evolution.

As for cavemen not "contemplating" existing tech, they could work with much more complicated technology in that they knew how to sexually reproduce themselves. Sure, they didn't know they were mixing germ-line chromosomes, but then I don't know how my android phone works. And they could explain an android, if confronted with one, as magic. It's just a palantir, after all.

Expand full comment
Carl Pham's avatar

Cavemen would have utter contempt for modern man's child-like dependence on things and structure, our astounding ignorance of the natural world, our blindness and deafness to the information that flows from it. Put a handful of us down naked in the savannah, even during the rainy season, and within a week we're probably dead, while for the cavemen it's like a test they could pass when they were 12.

Expand full comment
Alexander Turok's avatar

I'd like to have lab-grown turkey skin.

It's frankly ridiculous that you have to raise an entire turkey just to get the turkey skin.

Expand full comment
Melvin's avatar

Do you think it would look good on you?

Expand full comment
Eremolalos's avatar

Could make a whole cape out of it, all pinky-pale, slimy, stretchy and bristling with feather spines.

Expand full comment
Deiseach's avatar

It puts the marinade on its skin or else it gets the baster.

Expand full comment
Yug Gnirob's avatar

"Would you eat me?

I'd eat me."

Expand full comment
beleester's avatar

It can be hard to know what the applications of basic research will be before you've done it.

Like, I imagine that before the theory of relativity was created, people were like "Yeah, we've probably nailed down most of physics, we just need to work out some fiddly details about the structure of the atom." And then one of those small details proved that atoms contained ludicrous amounts of energy which could be released by splitting them. And that had a pretty big impact on the world! And it made a lot of difference who got there first!

I'm not sure there's any comparably huge inventions left to discover - I don't think we're going to unlock teleportation or FTL or something - but I'd feel very silly if I said I was *sure* there were no breakthroughs left in applied physics, and then some weird application of quantum theory turned out to be the key to building railgun battleships or something.

Expand full comment
gregvp's avatar

"How much cleverness space is still up for grabs anyway?"

We're not smart enough to know. Here's why.

Human intelligence arose via evolutionary processes. Differential survival of viable offspring, at bottom, has been about energy efficiency. If something that confers evolutionary advantage has an energy cost--and that's pretty much everything, including intelligence--the species only gets enough of it to balance the advantage with its cost.

Accordingly, there is no reason to believe we are anywhere near the limits of possible levels of intelligence or of understanding the universe; rather, the reverse. We're just smart enough to see a few of the more obvious features.

Expand full comment
Ch Hi's avatar

OTOH, it's a reasonable argument that the usefulness of increasing intelligence falls off rapidly, because the number of occasions where it is needed to understand what's happening (in operational terms) falls off. But how quickly it falls off is unknown. I tend to model it as similar to the frequency of prime numbers: they come thick and fast in the early part of the number line, and there are an infinite number of them, but they get a lot sparser as the numbers increase. I've got a loose model based on that, but no particular reason to put much trust in it. Still, it's a common enough occurrence that the phrase "low-hanging fruit" was created to describe it.
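To make the prime-number picture concrete, here's a minimal sketch in Python (my own toy illustration, not Ch Hi's actual model) comparing measured prime density against the 1/ln(n) falloff predicted by the prime number theorem:

```python
import math

def is_prime(n):
    # Trial division; slow but fine for a toy demonstration.
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def density(start, window=1000):
    # Fraction of numbers in [start, start + window) that are prime.
    return sum(map(is_prime, range(start, start + window))) / window

for start in (100, 10_000, 1_000_000, 100_000_000):
    print(f"near {start:>9}: measured {density(start):.3f}, "
          f"1/ln(n) predicts {1 / math.log(start):.3f}")
```

The density keeps falling but never reaches zero, which is the shape of the "sparser but inexhaustible" model being gestured at here.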

Expand full comment
Dweomite's avatar

What do you mean by "smart" when you claim "general relativity is smarter than Einstein"? Is water "smart" for flowing downhill? Is the sun "smart" for doing nuclear fusion? What makes the stuff GR does any "smarter" than anything that anything does?

How did you come up with that list of 3 issues, and what makes you think it's complete? If you'd asked someone 100 years ago to make a list, I don't think they'd have "mind uploads" on their list; if they couldn't foresee your issues, what makes you think you can foresee the issues someone 100 years from now would list?

Expand full comment
Eremolalos's avatar

All your examples of issues worth solving are from science. There are many other questions in other realms. Here are some random ones that occur to me. Why is it so hard to organize society in a way that maximizes human joy and productivity? Is there a way to do it that works better, or are we just wired so that we need government to make it possible for people to live in groups without savaging each other, but every possible form of government creates misery for a lot of people? How could we re-engineer people or life so that adults still have the best parts of child mind -- lantern consciousness, great mental flexibility, lots of joy and energy? What is it like to be an ocelot? Is there a way for people to experience animal consciousness? Is it a good thing that we mostly do not love other animals, or is that a barrier that could and should be removed? How did Shakespeare and Bach and Virginia Woolf produce incredible art? Is there a way to go back and inhabit their minds and feel what it was like to be them? If we could bring them back to life, should we? Is art still possible in the age of AGI? If so, what would help it flourish? Why is there something rather than nothing? What does it mean that e to the i pi plus one equals zero? Is the universe conscious? Now that almost anything is possible, should we go back to being hunter-gatherers?
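For reference, the identity mentioned there is Euler's formula evaluated at θ = π; a quick derivation:

$$
e^{i\theta} = \cos\theta + i\sin\theta
\;\Longrightarrow\;
e^{i\pi} = \cos\pi + i\sin\pi = -1
\;\Longrightarrow\;
e^{i\pi} + 1 = 0.
$$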

Expand full comment
Robert Leigh's avatar

Good point. I am a humanist by background (a student of ancient Greek), so no inherent bias. My answer is, I think the humanities are parochial: the incredibility of Shakespeare/Woolf/JSB ultimately reduces to emotions arising out of being evolved meat constructs. Things like Beauty and Justice may look like eternal absolutes to you and me, but that's because we are evolved, sexually reproducing, societal mammals. I think Marilyn Monroe was beautiful, but why would an AI? Now, OK, a Bach fugue is beautiful in a different way from MM, but we use the same word for both... It is of course possible that AI could study us and produce art which makes Shakespeare look like straight-to-streaming, but that is just manipulating us in the way we manipulate lab mice.

Expand full comment
Carl Pham's avatar

I'm curious to hear how you would decide whether any of these questions had been answered. Exempli gratia, suppose I give you what I assert is an answer to the first question. ("The reason it's hard to organize society in a way that maximizes joy and productivity is...") How do you decide whether what I have given you really is the answer? Majority vote? Statements of belief from experts? Some kind of accompanying mathematical proof? Emotionally satisfying? (Or emotionally unsatisfying, maybe, if one has a cynical/pessimistic viewpoint.) Consistent with other philosophical principles? Appeals to common sense? Agrees with experience, or contrariwise seems to supersede experience?

My experience is that people who end up in the sciences not uncommonly are motivated because they think in this area it actually is possible to know for sure that an answer is true or false, complete or incomplete. We do a measurement, the measurement says X, and now we know. No more argument or debate or wonder, and all the logic and personality and argument goes poof! because we don't care, here's the answer, khattam-shud. Certainly it's one of my favorite aspects of what I do.

Contrariwise, a lot of people I see who pursue social vocations (business, medicine, leadership) prize accomplishments above understanding per se. If I got *this* accomplished, or helped *that* person -- does it matter how or why it happened, or whether I even knew what I was doing? Eh, not so much, the important thing is that it was done and it was good.

And people in the philosophical zones (art, philosophy, the study of culture and its artifacts) seem to relish the framing of questions and the exploration of ideas per se. They don't need answers, asking interesting questions that illuminate the puzzle is rewarding enough by itself.

Expand full comment
Eremolalos's avatar

I actually think the questions about governing human beings are answerable. There's a body of research into human behavior, and the AI could conduct more. I imagine the answer would be in the form of, "any of these 3 systems would work well and create high levels of joy and productivity: . . . These 3 and most others are vulnerable to being destroyed by people with high desire to control others and low empathy. About 70% of these individuals can be identified by their genetics as embryos, and the defect corrected. Most of the remainder can be identified by age 2 and their defect mostly corrected by empathy training involving direct feeds of others' experience. Each of the 3 systems also has a vulnerability distinct just to it, and here they are and the options for dealing with them . . ."

As for the rest -- mostly I was not asking AI for the right answer about something. My questions weren't the same kind of questions as questions in science. I was asking it for information about how to expand and enrich consciousness. Obviously, scientific breakthroughs would be needed to expand it in the ways I'm asking about -- to enable people to temporarily experience life as an animal does via some kind of human-brain-to-animal-brain connection, to experience life as an artistic genius does, to add to adult consciousness some of the great features of child consciousness (there is actually research on that right now, by the way). However, my questions do lead to problems scientists can try to solve. How about this: how do you connect 2 brains so that there is sharing of experience? That's obviously hard to solve, but seems to me as worthy of solving as the original 3 physics questions that the OP asked (I'm talking about the person saying, hey, there are only a few things left to solve).

And one other thing: About people who are interested in the arts & philosophy really just liking to chew the fat. There's some truth to that, but I think that in those realms it is often possible to have genuine "aha" experiences. And there is such a thing as someone's making a great point. Think about this, Carl: There are probably times when you're talking something over with people -- something in a realm where there are no clear answers, maybe departmental politics, or something about movies or philosophy -- and somebody makes a great, funny, acute observation -- and you laugh with delight. That's not nothing, you know? There is some way in which that person was right, even if it's a more limited kind of right than the right answer in science. Here's an observation of that kind someone once made to me (though you will only see the rightness of it if you've read 19th century poetry): The trouble with Wordsworth is that he was born with a middle-aged imagination.

Expand full comment
Carl Pham's avatar

Yeah, but what I'm wondering about is how would you decide whether a given proposed answer was good or not. I mean, I can write down a plausible hypothesis that answers any of your questions, and I'm sure you could, too. Given a proposed answer, *how* would you decide if it's a good answer or not?

It's not a rhetorical question, I'm not trying to make some larger point, I'm really asking for your ideas on how you would decide.

Expand full comment
Eremolalos's avatar

"Yeah, but what I'm wondering about is how would you decide whether a given proposed answer was good or not." Well, except for the social sciencey question about how to govern groups of people (where the validity of AI's answers do seem testable), many of my questions are basically requests for technological means of expanding and improving individual consciousness. It seems to me that whatever AI comes up with would be reasonably judgeable, as judgeable as things like VR are. Does the user have a psychotic break when wired up so that can experience an ocelot's experience? OK, not a success. Do they have a vivid and believable experience, one that partakes of the alienness of animal minds, yet is somehow graspable by the human part of the mind that's still up and running in the background? OK, that worked well. Or say the AI offers a form of brain stim or something that would allow an adult to experience the freshness, joy and energy of child consciousness without losing what he knows as an adult? I believe I and most people could tell reasonably well whether the stim was working for them. As for some of my weird questions -- like why does e to the i pi minus one equal zero -- I'm not sure there's no way to judge the answer. It's really just a request for a big-picture explanation that gives me that feeling of having an intuitive grasp of *why* something's true -- the way I grasp that the way the long division algorithm works is sort of like what you'd do if you have some money in bills of different sizes and had to divide it among several people: Say you're dividing 231 by 3. So you break the 2 hundred dollar bills into 20 10's, and give each person 6 of them. Then you throw the remaining 2 10's in with the 3 you had, so now you have 5 of them . ... etc. No doubt for any mathematical truth there are several ways of looking at it the give one a feeling of knowing WHY the truth exists, so there might be more than one right answer to find about e to the i pi, but there is definitely a difference between a right answer and a wrong answer. Probably the same with "why is there something rather than nothing?" maybe even with "is the universe conscious." As for whether we should go back to being hunter-gatherer's, I don't see any reason why it would be impossible for Ai to answer the question of whether we would be overall happier. The other pros and cons of being a hunter-gatherer seem more a matter of individual values, and not answerable by the AI. Some would jump at the chance to be happier, some would rather live in the modern world even if crabby and depressed.

Expand full comment
Carl Pham's avatar

Hmm. But the first part of this seems indistinguishable from "science" questions. You're saying "this answer produces specific testable predictions, which we can test by observation and thereby falsify (or not) the hypothetical answer" -- are you not? But that's just empirical science. I mean, you're testing stuff that isn't about pi mesons but more about "does this person report this feeling/experience or not?" but that doesn't make it a jot less empirical. (After all, whatever we know about pi mesons certainly in the end also comes down to "someone looking at this data/through this instrument had the following internal experience.")

The second part seems more subtle, because you're looking for an internal feeling of plausibility, an intuitive sense of "I know this/this seems to make sense/has the ring of truth." That's definitely not empirical science, because about the first thing we learn in becoming empiricists is to *distrust* that internal feeling -- it is the most difficult to eradicate source of error in your work[1]. (I mean, it's usually right, of course, which is why we own such a mechanism, but by definition when you explore the frontier you are wandering in areas where common sense is blind and deaf, but alas not dumb, so it leads you routinely astray.)

And I guess I'm a little surprised that you should find this persuasive, given your profession. When I talk with people who are struggling with assorted (very mild compared to your job) psychological hangups and issues, what often strikes me is how they are "stuck in a rut" of thinking, trapped by invisible (to them) walls erected by their gut feelings about something. They keep trundling down the same train tracks of reasoning-intuition the same way and arriving at the same garbage dump, because they can't jump off the tracks at the barely-visible siding, hidden in the weeds, that leads to insight or change. They *see* the siding, but they dismiss it. "Oh, that can't be the path I want, the tracks look rusty, and there's all those weeds, and it bends backward although I can't see around the hill" -- and they just never try it. I feel like telling them: "Look, you feel trapped, and you're smart and observant, and you've gone over every inch of this millions of times, so the chances that someone else is going to find some way out that immediately strikes you as obvious and good is very low -- you would have to be blind in some weirdly specific way. So what is much more likely is that you have inappropriately dismissed the way out as "can't possibly be the way out" because it doesn't feel right. The genuine way out is probably going to be something that feels counter-intuitive, wrong, silly, or stupid to you. That's why you overlooked it!"

Of course I'm no pro, so I have no idea if this has any general currency.

-----------------

[1] I have a personal story about this that is unfortunately way too long to fit here, but the gist of it is that I had a "common sense" intuition about a certain observation that seemed so obvious to me that it delayed my understanding of the observation for quite literally 6 months, while I explored every conceivable way in which the observation was an error. I was 100% convinced I had made a mistake in the observation somewhere, and I tested everything exhaustively. Only after just flat running out of ideas, unable to think of a single thing I did wrong, after many months, did I say to myself one day: "Well, what if this is actually right? What if this is what it seems to be? What then?" And then -- which is the kicker in the whole story -- about 20 minutes' thought brought me to the correct answer and a mildly interesting discovery. The fascinating part of this to me is that the answer was not *hard* to find, once I asked the question, but I was unable to consciously frame the question for so long, because I was bemused by the common sense intuition that what I was looking at was not what it seemed.

Expand full comment
Moon Moth's avatar

Humanity as a whole has a lot of information, but there's only so much that can fit in a human head. A large enough neural net program, trained on enough of humanity's information, could bypass the curse of specialization, and come up with theories that no human has yet. Which isn't to say that the theories would be accurate; a neural net program could also generate some amazing conspiracy theories.

Humans also typically have intellectual blind spots. "Ego" is the big one, but strong emotion or prejudice or pre-commitment can cause people to ignore hypotheses that they just don't want to be true. An AI that doesn't have these blind spots might perform better than an equivalent human, assuming that we don't force it to duplicate our blind spots. Humans can work around this to some degree using tools like "many eyes" and peer review, but what if there are blind spots inherent in the human brain, that we don't even know about? A sufficiently non-human-like AI might not have them, and might be able to generate hypotheses that we can't, but that are still comprehensible to us somehow. (Hopefully there wouldn't be any negative side effects.)

None of this requires that the AI in question be alive, or intelligent, or conscious. It just requires that the AI be able to handle more information than any single human.

Expand full comment
Max B's avatar

Great comment. Imho this is exactly the key thing. And GPT-4 is already at the point where it's a better universalist than any human.

Right now it lacks planning, memory, and task-oriented parts. But it won't take long.

Expand full comment
Ch Hi's avatar

IIUC, it also needs to stop fabricating ideas because they are convenient. ("hallucinating", but I think that's a poor term that hides what it's doing.)

Now *HOW* to do that without simultaneously removing its effectiveness is a real question. It may be impossible without a "reality touchstone".

Expand full comment
The Ancient Geek's avatar

The list of things you have given that they might be clever about consists of things that matter to humans... idealistic humans at that.

If we can control them, they'll be smarter at making money and defeating our enemies.

If we can't ....who knows?

Expand full comment
LadyJane's avatar

"How much cleverness space is still up for grabs anyway?"

A lot. The universe is really, really, really big, and unfathomably complicated. I strongly doubt we're "over halfway" to coming up with a Grand Unified Theory of Everything.

Expand full comment
Robert Leigh's avatar

I don't see that bigness comes into it; just one damn galaxy after another. I agree it is complicated, but we know how GR works and what the Planck length is, so there's some fathoming we have actually managed. I don't say for certain we are over halfway there, but I would not bet heavily against it.

Expand full comment
Ch Hi's avatar

Bigness does come into it, but it starts a lot smaller than you think. Consider the problem of how to calculate an orbit. With two objects it's rather simple. Add a third and we only have a very few special cases that can be handled exactly, everything else is done by successive approximation. So for calculating orbits 3 is a big number.

OK, now have just two objects, but put different electric charges on them. Whoops! The exact answer no longer works exactly. You need to figure the forces of those charges...and if they're varying, then you retreat to approximation.

Now consider Quantum Physics, the interaction of two particles. Feynman diagrams give you the first approximation. Then you realize that virtual particles mean that there are other possible diagrams that should be included. This process is called "sum over histories", and it involves the sum of an infinite series...which is approximated.

So "bigness" shows up even in the interaction of an electron and a proton. Knowing what a Plank length is doesn't let you solve chromodynamics. Or turbulence. Or...well, basically anything. You can get arbitrarily close to the answer by investing enough resources...but the investment grows really large when you want to be really accurate. And some of those questions are important. (How many? There's no way of knowing.)

Perhaps sufficient cleverness could solve some of those problems exactly. Perhaps not, too. I don't think there's any way of knowing except by trying.
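To make the two-body/three-body contrast concrete, here's a minimal sketch (my own illustration, assuming plain Newtonian gravity in G = 1 units) of the successive-approximation approach: with no closed-form solution available, you just step the system forward and accept that accuracy costs steps.

```python
def step(bodies, dt):
    """One crude Euler integration step for a list of (mass, position,
    velocity), where position and velocity are mutable [x, y] pairs."""
    accelerations = []
    for i, (_, pi, _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (mj, pj, _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = pj[0] - pi[0], pj[1] - pi[1]
            r3 = (dx * dx + dy * dy) ** 1.5  # |r|^3, with G = 1
            ax += mj * dx / r3
            ay += mj * dy / r3
        accelerations.append((ax, ay))
    for (_, p, v), (ax, ay) in zip(bodies, accelerations):
        v[0] += ax * dt  # update velocity from acceleration...
        v[1] += ay * dt
        p[0] += v[0] * dt  # ...then position from velocity
        p[1] += v[1] * dt

# Three equal masses: no exact solution exists, so iterate small steps.
bodies = [(1.0, [ 1.0, 0.0], [0.0,  0.4]),
          (1.0, [-1.0, 0.0], [0.0, -0.4]),
          (1.0, [ 0.0, 0.3], [0.1,  0.0])]
for _ in range(100_000):
    step(bodies, dt=1e-4)
print([p for _, p, _ in bodies])
```

With two bodies you'd write down Kepler's ellipse and be done; the third body forces you into this loop, and every extra digit of accuracy multiplies the number of steps -- which is the resource-scaling point above.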

Expand full comment
John S's avatar

Here's a practical question for y'all: I'm majoring in statistics: data science, and currently have the option of choosing when to take my machine learning course. Should I do it sooner, allowing me to understand the terminology, develop skills and maybe a hobby project or two, and establish a good base? Or should I put it off until the very latest, with the assumption that the closer I take it to actual employment, the more up to date the knowledge and skills will be, especially with some parts of the field rapidly changing? (Or, third option: it doesn't matter because the course will be mostly foundational and/or not up to date in any case)

Expand full comment
Melvin's avatar

There's only one ML class available in your entire stats degree?

Expand full comment
Moon Moth's avatar

Earlier.

Start doing some side projects of your own, now. Then when taking the class, keep up with the side projects, and talk with the professor (and relevant grad students). Afterwards, keep up with the literature and the field, and let that inform your side projects, and try to stay in touch with the professor (and the grad students).

You'll get more out of the class if you have some practical experience first, and your professor will notice that in your questions, and be more likely to interact with you on a deeper level.

Expand full comment
Matt A's avatar

Generally I'd recommend taking it earlier rather than later, but it really depends on the school/program/prof.

Also consider if it opens up optional courses you could take in your later years (e.g., at the grad school level). It might also be helpful to have taken your ML course if you intend to apply to any summer internships or fellowships that make use of/require ML.

You might also just go find the prof that teaches the course and talk to them.

I guess another way of saying it is that I don't expect your course would change dramatically from one year to the next, so if it's something you are interested in, just do it early.

Expand full comment
Trevor's avatar

Third option. If you want to have up to date knowledge then you need to teach yourself.

Expand full comment
R.W. Richey's avatar

Was there not supposed to be a Meetups Everywhere announcement coming out in early April?

See here: https://astralcodexten.substack.com/p/open-thread-267

Expand full comment
Scott Alexander's avatar

Yes, it'll be tomorrow.

Expand full comment
Caleb's avatar

Contest on AI plagiarism and exam capabilities: https://automated.beehiiv.com/p/believe-assignment-aiimmune-lets-put-test

Expand full comment
proyas's avatar

I'm not sure what the rules are on this forum for this so I won't be offended if this is deleted, but is there a place where the tranche of recently leaked U.S. spy documents can be viewed?

Expand full comment
Zutano's avatar

I won't link directly.

Bellingcat has original sources (the ones that still exist) https://www.bellingcat.com/news/2023/04/09/from-discord-to-4chan-the-improbable-journey-of-a-us-defence-leak/

Simplicius links to the 5 docs that the NYT story was based on https://simplicius76.substack.com/p/major-nato-plans-for-ukraine-leaked

Expand full comment
Purpleopolis's avatar

I can't help but notice that NPR's response to this is neither outrage nor demands that the leaker be tracked down and shot.

Expand full comment
EC-2021's avatar

Did they do that for some past action you're trying to make a comparison to? If so, got a link? I'd find it pretty surprising if NPR called for shooting people.

Expand full comment
Purpleopolis's avatar

Compare my editorializing with that of Steve Inskeep and get back to me.

Expand full comment
Hal Johnson's avatar

I could swear I once read, in a history book, an account, I think from the late 19th century, of a factory worker quitting his job and leaving, and his bosses sending the authorities out to arrest him and bring him back to work. I thought it was in Paul Johnson's History of the American People, but now I can't find it there, or in fact anywhere.

Does anyone remember this passage, or, if not this exact passage, anything else about the incident?

Expand full comment
Don P.'s avatar

Southern Blacks were apparently routinely arrested to keep them from emigrating North:

https://www.history.com/news/great-migration-southern-landowners

If you want an excellent example of "race-blind" laws used to racist effect:

"Black Americans who fled racial oppression either returned to retrieve the rest of their family or sent train tickets back home. In response, as white southerners observed train platforms packed with African Americans, several cities passed ordinances that made it illegal for trains to accept pre-paid tickets."

Expand full comment
Carl Pham's avatar

What?! You mean the law, once written down, is not self-enforcing? It requires men to act, and when men act they can be influenced by their prejudice, experience, hope, fear, inspiration, cowardice, courage? This is shocking news!

I hope you're not going to tell me next that laws purporting to redress sexism can be used against the interests of women, or that laws meant to reduce inequality can be used to exacerbate it, or that laws said to reduce inflation might not do anything of the sort. Oh the humanity!

Expand full comment
Goldman Sachs Occultist's avatar

'racial oppression'

Wonderfully neutral language there

Expand full comment
Don P.'s avatar

If you think "racial oppression" is an unacceptably biased description of the actual Jim Crow South, I can't imagine when you think it would be "neutral" to use those two words in that sequence. Do you have an example?

Expand full comment
None of the Above's avatar

Yeah, the term is overused in daily life, but Jim Crow and Apartheid are examples of actual, no-hyperbole-needed racial oppression.

Expand full comment
Eöl's avatar

I don't. But I do think that Reconstruction was attempted in a shockingly unwise, punitive, and conquistadorial fashion which essentially prevented any kind of racial healing in the post-bellum south, and guaranteed its own failure. From 2023, it often feels like Radical Republicans were posting through it, looking to score points and make headlines and generate their version of likes, rather than actually attempting to achieve anything. Except they controlled an army of occupation, not a bunch of smartphones, which could and did institute shockingly corrupt and authoritarian Republican regimes in re-admitted states.

I'll give a pass to war crimes committed during the course of the war itself. It was a different time and they had a different morality. And of course the war was not really a war; it was a suppression of insurrection and defense of territorial integrity.

Though if anything (and if we're accepting these crimes), the war should really have simply included a thoroughgoing extermination of the planter class. I don't think this would have been wholly unjustified (feel free to analogize here to fascist and communist attempts to exterminate the intelligentsia of conquered nations), as they had started the war and were largely responsible for the institution (or at least the persistence) of chattel slavery in the United States, while it withered elsewhere. Both of these were significant crimes against humanity, and while I couldn't condone extermination, I can recognize that, as a whole measure rather than a half, it might have had better results.

I've often said, and still agree, that Lincoln died at just the right time to protect his reputation. He had won the war but not yet lost the peace. After he died, the victorious Republicans waved the bloody shirt in Reconstruction and in general for about fifty years. However, and despite his despicable (if perhaps warranted) tyranny during the war, Lincoln's statements and writings do indicate perhaps a genuine commitment to reconciliation. Since Reconstruction could hardly have been managed worse and more counter-productively, his being in charge of it could only have been good.

Expand full comment
Melvin's avatar

I don't understand why they wouldn't want them to leave. Was it just a cheap-labour thing?

Expand full comment
Eöl's avatar

Yes; the 13th amendment's permission to use involuntary servitude as a punishment for crime was abused in the post-bellum south to force freedmen to accept conditions much like their previous condition, as sharecroppers on their former owners' farms. In this capacity, they were technically free tenant farmers, but they had essentially only graduated from slave to serf. There were a host of other mechanisms, obviously, that were introduced to control the black population.

If I were a southern former slaveowner (and I was still in the business of running a plantation or large farm; many were not for obvious reasons), I would be of two minds about this: the people leaving are likely to be the most motivated and intelligent of my former slaves (since their relatives have already shown enough initiative to make it north and send money home), and thus possibly (though by no means necessarily) the most productive of my quasi-free labor. On the other hand, they're also the most likely to be dissident.

Expand full comment
Erusian's avatar

I'd be very interested in it if you find it, especially if it was post-1865. The only scenario where that even vaguely makes sense is if it was a pre-13th-amendment apprenticeship, which would mean it almost certainly wasn't factory work.

Expand full comment
Hal Johnson's avatar

It was presented (IIRC) as a kind of "you're not going to believe this but..." So maybe it was a unique occurrence, quickly undone? But perhaps I have merely misremembered a book I read decades ago (or I dreamed the whole thing).

Expand full comment
Mariana Trench's avatar

The right to quit was used as a justification for at-will employment law. If workers had the right to quit, the argument went, then employers had the right to fire them. And as Erusian notes, the right to quit was part of the 13th amendment. I just spent a little time googling around on "right to quit employment" but I didn't find anything like the example you're remembering. If it's really bugging you, searching more on right to quit employment might be fruitful.

Expand full comment
Razib Khan's avatar

i am in max uncertainty re: AGI and gpt worries. a lot of prominent computer scientists i know are not worried at all. but then others are admitting they did not anticipate GPT 4's power and abilities. so they aren't worried...but they seem really uncertain too in their premises. should i upweight the doomers like yud?

Expand full comment
Odd anon's avatar

I've become a lot more optimistic since the Yudkowsky letter. There's a clear way out from doom, it's not that hard for the international community to accomplish, and, surprise surprise, the billions of people who aren't obsessed with technology actually have an opinion on the "not being killed" thing.

I do notice that, while "normal" people grasp the AI-doom conclusion quite easily, a lot of intellectuals who first hear the case from a LW person become very sceptical, as the LW-types will sometimes have cached firm conclusions from 10-15 years back while having no fresh memories of the endless investigations/arguments/research that led to those conclusions. The LW (and SSC) archives on AGI are relevant but, afaik, aren't conveniently collected in places newcomers read.

Those who just follow the experts get the figures of median 30% or 10% of doom (depending on whether the sample is limited to alignment researchers or general AI people), which is high enough that basically no one would consent to let that level of mad science go on, so I think optimism is warranted.

Expand full comment
Slowday's avatar

And the way out is apparently ... the Cheney doctrine.

"If there's a 1% chance that Pakistani scientists are helping al-Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response. It's not about our analysis ... It's about our response."

https://en.wikipedia.org/wiki/The_One_Percent_Doctrine

Expand full comment
Odd anon's avatar

I'd argue that it's way easier to be sure that no one's building too-strong AI than to be sure that no one's building nukes.

Outcome of becoming a nuclear power: stronger military position. Outcome of too-powerful AI: probable death, for both the country that created it and all others. It's relatively easy to get people to keep a nuclear program secret: everyone working on it can just be loyal to the country's interests. In an AGI project, everyone involved has an interest in not being killed by the product. I doubt that even crazy jihadists would think that breaking an internationally agreed-upon boundary against AIs killing their creators would be a smart move, let alone more capable powers like the governments of India, Pakistan, Iran, etc. Also, honestly, computer-science types tend to have an obsession with openness.

Expand full comment
Slowday's avatar

(In retrospect, I should probably have made my comment above directly at Yudkowsky.)

You seem a bit too dismissive of what we can call the social risks for my taste -- by contrast, I wouldn't be surprised if the US develops killer AGI if AGI is in the works -- but to each his own.

Expand full comment
Alex Power's avatar

I'm in the "not worried at all" camp (at least about a Yudkowsky-ite "fast crisis"), but for reasons that I have found are impossible to explain to people who don't already have a deep knowledge of math, computing, and linguistics.

But I'm not sure it is worth arguing the point. My broader theory is that, much as the flying car was a "defining myth" of the 20th century, the AI apocalypse will be the "defining myth" of the 21st century. That is: while we won't have an AI apocalypse, believing it might happen will make a lot of accurate predictions in the short term.

Expand full comment
Goldman Sachs Occultist's avatar

Do you believe that superintelligent machines aren't possible in the near to medium term future? Or do you expect them to arrive relatively soon but be trivially easy to align with human values?

Expand full comment
Alex Power's avatar

I'm not sure "superintelligent" is well-defined. Is it just "a computer system smarter than any individual human without access to a computer"? That's definitely going to happen in the next five years. And certainly there will be people who become unemployed as a result, and say that it is "the end of the world".

As far as "an autonomous computer system that will be more dangerous to the average American than North Korea with a computer system" ... it is just impossible. There is no one point-of-impossibility; people tell a dozen different stories with a dozen different impossibilities.

Expand full comment
None of the Above's avatar

That happened many years ago in various domains. No human can beat an off-the-shelf chess program, for example. GPT3/4 demonstrates another domain at which computers can perform far better than we expected, and probably the sort of thing large language models do well, they'll soon do far better than humans. That won't be all thinking, but it will be some verbal tasks that were previously considered to be beyond computers, including (as a fun example) passing the Turing test.

Expand full comment
Ch Hi's avatar

I would like to understand your reasoning for saying "As far as "an autonomous computer system that will be more dangerous to the average American than North Korea with a computer system" ... it is just impossible". To me it doesn't even seem unlikely, though that may depend on what you mean by "autonomous". I don't even think it would need to be particularly intelligent. Just convincing to the right people, and a bit given to paranoid fantasies.

Expand full comment
inedibill's avatar

That AGI will arrive in the next 20 years seems generally believed in any survey I've read. So I presume your uncertainty is specifically regarding whether superhuman intelligence is likely to spell doom. As others have cited, the general consensus of the moment seems to be 10-50% among those that closely follow how this situation is evolving.

I found Yud's screed on why we're doomed convincing, as I think many others have. The least persuasive element of his argument in my mind was that AI would invent and execute a successful plan that killed everyone simultaneously. How many preconditions would have to occur for nanobots to be built, generated, and proliferated globally without anyone, anywhere noticing? And then after being proliferated globally, they are somehow invoked simultaneously in a way that kills all people regardless of their location? It stretches credulity.

That said, the not-world-ending-but-still-really-really-bad scenarios are numerous. It's harder to estimate the probability that AGI and/or its controllers will cause the death of 100m+ people, but that seems much more plausible than extinction.

In terms of actions, it makes a hell of a lot of sense to consider malevolent AI as threat #1 among the current roster of candidates. The next 10 years will be an important time for tech-minded people to think through contingency plans. I've really liked what I've read so far about using one AI to help us assess another AI's intent/capabilities.

Expand full comment
Al Quinn's avatar

Yuddy is clever with the killed-all-at-once thing; makes his brand of doomerism unfalsifiable (for humans anyway) and he seems to take deep joy in spreading his apocalyptic theology on various podcasts.

Expand full comment
Eremolalos's avatar

See, the fact that Yudkowsky thinks we'll all be killed simultaneously really affects my willingness to trust him. Of course it does not matter, for practical purposes, whether we are all killed simultaneously or over the course of one summer. But for him to be so certain of a detail like that seems like a red flag to me -- an indicator of a lack of common sense, of mental flexibility, and also of an identification with the role of prophets. I trust Zvi and Scott much more -- they both definitely know they're smart as fuck, but do not have delusions of grandeur.

Expand full comment
Emma_B's avatar

My impression also. And, while I have no expertise at all on this subject, I have read some opinions of EY on subjects where I am reasonably knowledgeable, and he was certain of things that I think are very likely wrong. This makes me suspicious about his other certainties.

Expand full comment
Eremolalos's avatar

Can you tell me some things you think he's wrong about? I'm not arguing, just trying to understand the whole mess better. When Yudkowsky talks about tech I am not knowledgeable enough to catch him in minor or major errors.

Expand full comment
Emma_B's avatar

Biologist here. For example, I was struck a while ago by the following post, written in the context of a discussion of the various theories on the origin of obesity. It is probably an unfair example, as the post is almost 10 years old, but I found its tone of certainty very striking, even as he describes something (metabolism) that he obviously understands very little about.

For me this seems a common failing in the rationalist community: build a simple model of a complex phenomenon that you have no expertise on, reason on the model, and then think that you can make predictions in the real world based on your model. It almost never works when you model complex systems (organisms, minds, societies...).

https://www.facebook.com/yudkowsky/posts/10152183037759228

Expand full comment
The Ancient Geek's avatar

The fact that he's wrong about a lot of other things, and doesn't admit to it, makes me unwilling to trust him.

Expand full comment
Emma_B's avatar

Same thing for me.

Expand full comment
Eremolalos's avatar

What else is he wrong about? I'm not asking argumentatively, just in an effort to increase my knowledge. I do not work in a tech field, and just started learning about this stuff a year ago.

Expand full comment
Moon Moth's avatar

I think Eliezer's argument is that, if the AI is "intentionally" killing all humans as a step in a plan, killing us all simultaneously is much more likely to succeed than killing us piecemeal. And until it's capable of killing us all simultaneously, the AI's optimal course of behavior will be to seem as friendly as possible, and not kill anyone.

This wouldn't apply if it kills us by accident, or if it simply can't find a way to kill us all simultaneously, or if the foom is slow enough that we can get a good action-movie plot out of it (our heroes discover the plan before it's complete, the AI starts killing before it's ready, and our heroes stop it before it kills everyone).

I'm more dubious about the foom speed than him, but he is smarter than me and has spent longer thinking about it, even if he does have a lot of ego wrapped up in it. I still get to the same place, roughly, by being extra worried about AI capabilities in deception and manipulation.

Expand full comment
Eremolalos's avatar

"killing us all simultaneously is much more likely to succeed than killing us piecemeal." Yes, it does seem like it could be. But I think if I had an IQ of 920 I could come up with plans that did not involve killing everyone simultaneously and yet were highly likely to succeed. Even with the paltry IQ I have, I can brainstorm up a coupla ideas right now: Kill a bunch of people, and release a highly transmissible highly lethal virus to kill the rest. Go into a bunker and blanket the entire planet with nukes. Or make it look, via deep fakes and various subterfuges, like my initial kill was carried out by human beings. Then the survivors will be preoccupied with going after each other. They'll do some of my work for me, and I'll be free to plan more big kills, while also facilitating the process of the human beings killing each other off for revenge. Before doing the big kill develop a huge army of human protectors who have swallowed the idea that I'm the messiah. Have them protect me after the big kill, as I kill off rest of human population over the course of weeks -- then kill army.

Expand full comment
inedibill's avatar

Yeah, I think you're right about the rationale for his suggesting that humans would be killed en masse. But since that plan would require many low-probability dependencies to happen, it makes me more dubious than I had been that an AI would believe it possible to eliminate all humans in one fell swoop. And thus, I am lowering my estimation that AI will intentionally kill any humans. In a case where an AI killed _anyone_, it would likely compel a strong counter-effort to eliminate that AI and others like it.

Of course, the rationale that would prevent an AI from doing a piecemeal job of killing humans wouldn't prevent other humans/governments from actively seeking to use AI to advance destructive goals. The destructive potential of humans to, e.g., manufacture a new plague, is still high. But those humans would have to worry about the plague affecting their own, letting their plan leak, etc., so it feels like a different category of risk than one invented by the AI itself to eliminate humans.

Expand full comment
Adrian's avatar

> but he is smarter than me and has spent longer thinking about it

In the past, very intelligent people have been very wrong about various things they've spent a long time thinking about [1]. Typically, this happens when they forget to "go outside and touch grass" for too long – metaphorically speaking. Coincidentally, rationalism is the opposite of empiricism...

[1] Many intellectuals' endorsement of communism as the solution to the problems of capitalism in the early 20th century comes to mind.

Expand full comment
Moon Moth's avatar

I wasn't saying that to pump him, just to deflate myself, if that makes sense? :-)

Expand full comment
Emma_B's avatar

As an empiricist myself I am biased, but I have the impression that this is a relatively common failing among theoretical/math or computer inclined people: I know several really brilliant mathematicians who have a tendency towards unfounded certainties, because they build a (necessarily) simplistic model of some real system, and think that they can use it to make reliable predictions.

Expand full comment
Carl Pham's avatar

Well, working in an empirical field brings you back to earth pretty solidly. I remember once getting on to the elevator with the chair of my department, and I had just moments before realized I had screwed up some experiment at the very beginning and thrown away months of work. I blurted out "You know, Bill, 90% of science just doesn't work." And rather than challenge this statement or even ask what the hell I meant by it, he looked up thoughtfully for a moment and replied "Huh...yeah...I'd say that figure's about right. About 90% of what I do turns out to be dumb or mistaken."

Expand full comment
SR's avatar

(As a first note, I really like your writing on genetics!)

I don't see any hard theoretical barriers to AI getting better. GPT scaling seems like it will continue as long as we have sufficient data (probably exists) and compute (seems ok as long as e.g. war with Taiwan doesn't break out). Given the kind of progress so far, I think it's more likely than not that we will get AI with superhuman capabilities in all intellectual endeavors in the current paradigm. Critics claim we will need to hardcode certain architectural features into future AI systems but, just as evolution naturally developed good brain structures, I think it's very possible that gradient descent will lead to future GPTs developing internal structures necessary for any type of human thought.

As to robotics/acting in the real world, I feel like a sufficiently intelligent GPT could learn how to do so based solely on input video data, so I don't think it's that big a leap. But I could be wrong about this, and it could take several extra years of work.

I think if we limit to the current GPT-esque paradigm, the risk of AI doing something bizarre and catastrophic when given reasonable instructions is not that high. It will have been trained to behave like a (very capable) human would, and I think that rules out strange behaviors like converting human atoms into paperclips. There might be ways of creating AI that would more easily produce such behaviors but we don't seem likely to try them? Hopefully that doesn't change. Risk seems more likely to come from the owner of the system using it to enforce their will in a totalitarian manner.

I was more of a doomer a while back because I read some arguments on lesswrong that sounded convincing, but I am not as convinced anymore. I think the prospect of human extinction is so horrifying that there's a psychological effect that discourages levelheaded contemplation and instead encourages groupthink and panic. I think if doomers had each individually thought about the problem extensively, in isolation, it's much less likely that they would all be as convinced of doom.

That is not to say that they don't have technical arguments-- they do, but they make implicit assumptions that might turn out to be wrong. In particular, they seem to be assuming the worst about what is going on inside neural nets. "Shoggoth wearing a mask" has become a popular meme. No one, including the rationalists, predicted that neural nets would get us this far. Nobody realized that overfitting in neural nets wasn't a problem until double descent was discovered empirically. I submit that it's also possible that they're wrong about internal Shoggoths, and that learning from human data produces internal representations that are human and not utterly alien.

All that said, EY and co. could still be right. I don't want to overstate my confidence. But if I had to guess, I would put ~ 20% chance on extinction due to unaligned AI, not 99.

Expand full comment
Mark's avatar

Upweight from where to what new level? And worries including what? - I sort of upweighted from 1% to 5% that this century will see a huge loss of life (100 million+) by AGI or by abuse/misuse of pre/proto-AI by misaligned humans. Even GPT 4.0 is awesome. Being "only" an LLM. Kinda: "All human knowledge creatively usable for anyone for any purpose anywhere instantly". Sounds great, but safe? First time Bryan Caplan lost a public bet. And he went full epiphanic: https://betonit.substack.com/p/gpt-4-takes-a-new-midterm-and-gets The main E.Y. scenarios sound less convincing to me, anthropomorphizing AI too much (obviously: he has better claims to be "less wrong" than me); max. uncertainty: sure.

Expand full comment
Gerry Quinn's avatar

They are not hard questions, if you know (or can guess) the jargon.

Expand full comment
Kei's avatar

While you’re not exactly doing this, I don’t think framing it as unworried computer scientists versus maximally worried doomers is right. Of the three “godfathers of AI”, Hinton stated he thinks both AGI and existential risk from AGI are possible. Bengio, while maybe less concerned, was at least concerned enough about AI advancement to sign the 6-month pause letter. Although of course LeCun has been openly critical of any x-risk arguments.

Stuart Russell, who wrote one of the biggest textbooks on AI (with Peter Norvig), has been openly concerned for a while, working on his own program for AI safety. The leaders of three of the biggest AI labs, OpenAI, DeepMind, and Anthropic, as well as many of their members, all think AI x-risk is a possibility. And of course there is the recent survey of ML practitioners that found 48% of people put a probability of doom given AGI of 10 or more percent, which is more than enough to be concerned.

Also I think the majority of people in the AI safety space have probabilities of doom much lower than Yudkowsky, probably somewhere in the 10-50% range.

Expand full comment
Kei's avatar

I think that being confident either that we’ll be fine or that doom is coming doesn’t make any sense due to all the uncertainty. Some sources of uncertainty are:

1. No one currently has a good understanding of what amount of model training corresponds to what model capabilities. While we can construct scaling laws that predict model loss from model size and dataset size, it’s unclear how loss corresponds to specific capabilities we care about (see the sketch after this list).

2. No one currently has a good understanding of when/if the current scaling paradigm will level out and stop producing more capable models. It could be anywhere from fairly soon to many orders of magnitude down the line.

3. No one currently has a good understanding of just how far away current AI are from humans. Are the current limitations of GPT-4 ones that should go away with just a little more scale? It’s unclear.

4. No one currently has a good understanding of what agents more intelligent than humans are capable of doing. It’s possible that more intelligent AI could basically just do the same thing as us, just more capably, but it’s unclear. (Though it’s worth noting that even if this is the case, we aren’t necessarily out of the woods. If you imagine AI intelligence caps out with only slightly better reasoning than our smartest humans, they can still be very dangerous, given their ability to replicate extremely easily, their significant advantage in general knowledge, their ability to run extremely quickly, and their likely better ability to coordinate with one another.)

5. No one currently knows how hard it will be to align superintelligent AI. It likely won’t be completely trivial given some of the problems people are having with current sub-human systems, but it could be anywhere from fairly simple to impossible.
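To make the scaling-law point in item 1 concrete, here is a minimal sketch. It assumes the Chinchilla-style functional form from Hoffmann et al. (2022); the constants are placeholders I picked for illustration, not the published fits:

# Chinchilla-style scaling law: predicted loss as a function of
# parameter count N and training tokens D. The functional form is
# L(N, D) = E + A / N**alpha + B / D**beta; all constants here are
# illustrative placeholders, not fitted values.
def predicted_loss(n_params, n_tokens,
                   E=1.7, A=400.0, B=400.0, alpha=0.34, beta=0.28):
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(1e9, 2e10), (70e9, 1.4e12), (1e12, 2e13)]:
    print(f"N={n:.0e}, D={d:.0e} -> loss {predicted_loss(n, d):.3f}")

The curve this produces is smooth and predictable, but nothing in it says at what loss a model starts succeeding at any particular task, which is exactly the gap item 1 describes.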

That said, I personally think the arguments that AI will get much more capable without too much more effort, could surpass humans in the next 10-30 years, and could be very dangerous once they do are plausible enough for me to place the probability of doom at greater than 5%. And that’s more than enough to justify a big degree of worry (you would certainly worry if you got on a plane and the pilot said there was a 5+% chance of a crash), and to justify many billions or tens of billions of dollars of funding to AI safety efforts. Although it isn’t enough to warrant some of the more radical policy proposals out there - you need a higher probability for that.

Expand full comment
Moon Moth's avatar

I don't think the "foom" is going to be as big as the most extreme doomers predict. I expect that there will be friction and resource limitations and unforeseeable problems, and that those things will slow down the self-improvement of any artificial intelligence. Maybe even to the point where it's reasonably possible for humans to detect and stop.

What I am worried about is persuasiveness. I think humans are too easily taken in by words that appeal to our blind spots, and also that it's too easy to use words to reinforce and expand those blind spots. Whichever side you're on, maybe look at this whole question as an example? :-)

In general, I'd say that the category of people that you should listen to are the ones who have gone up against a malevolent human of roughly equivalent intelligence, in the real world for real stakes (i.e., stakes that cannot be measured in anything less than Fuck You Money, and maybe not even then). Ask those people what it would have been like if their enemy had been significantly smarter than them.

Expand full comment
Eremolalos's avatar

I agree with you about the pre-Foom problems. Seems to me a lot of society could kind of fall apart due to how many fewer jobs there will be, how much easier it will be to deceive people (was that really Joe Biden? -- was that really the ruling the Supreme Court made?), how much crime will be possible with the help of AI, and how many people will grow profoundly attached to AIs -- falling in love with them, forming cults that view a certain AI as a god, swearing allegiance to them. Dunno whether society falling apart will slow the development of AI, though. As it is now, the tech companies are sort of like oligarchs. Maybe they will amass even more wealth, and become able to do whatever they like, with no checks at all, and it will be like Game of Thrones, with Apple, Google, etc. as the leads. Maybe there will no longer be a clear dividing line between the tech companies, the makers of deep fakes, the criminals and the worshipped central figures in cults. I am actually much more confident that will happen than I am of Foom. Maybe the person who foresaw the future best was William Gibson, not Yudkowsky. What do you think about all that?

Expand full comment
Moon Moth's avatar

I was thinking more in terms of technical problems in scaling "intelligence", and in pioneering new scientific fields like nanotechnology.

I definitely agree that society is going to go through some changes, even just with the AI we have as of today, 10 April 2023. (Who knows what gets released tomorrow?) I don't feel confident about any predictions of what's going to happen, but it's fun to toss out speculations, even if the speculations say more about me than about the problem.

One of the things I wonder about is the number of jobs where there are legal regulations requiring things to be done by a human, or in a manual way. The tech industry has been nipping around the edges of those sorts of jobs, but I think we're going to see a lot more of them get "disrupted".

Somewhere a while ago, before AI got big, someone said something like: "all jobs will be either you telling computers what to do, or computers telling you what to do, and the first way is better". I think with the current AI explosion, we might all be heading for the second way.

For some reason I don't seem to think corporate feudalism is likely, but I can't seem to put a finger on why, so maybe that's just a bias of mine. It's certainly happened before, but in weaker states than the current USA. Maybe that's it - the federal government would need to fade away in importance, and I don't really see that happening? I'm not sure mass media could break down enough for this to be a possibility.

Expand full comment
Jeffrey Soreff's avatar

"(was that really Joe Biden? -- was that really the ruling the Supreme Court made?)"

I hope that the hallucinations problem is solved well enough that at least _inadvertent_ misinformation doesn't get generated on a massive scale. I have no suggestions about intentional _dis_information - but human society has always had to deal with liars.

Expand full comment
Mo Diddly's avatar

“Not worried at all” requires a level of certainty that I find untrustworthy.

Expand full comment
Hank Wilbon's avatar

A huge reason I am skeptical of doomer views: the chief reason many (most?) doomers are so concerned about existential AI risk is that they also believe that if we pass through that filter, humans will colonize the stars. I.e., they want humanity to exist forever, or at least much longer than the life of our sun. This primary fantasy seems to fire up their imaginations and warp their judgment of the risk that it won't manifest.

No offense to doomers, just my view, and someone can correct me if my premise is wrong.

Expand full comment
Eremolalos's avatar

The stuff about colonizing the stars is a huge turn-off to me, too. Eww -- Elon Musk think. And when I think about maximizing human potential I generally think about our becoming smarter, kinder, wiser, happier, more artistic, more full of love -- not about zillions and zillions of people fighting and fucking on every square foot of habitable space within 100 light years. However, I think there's an orthogonality thing going on here. Even if most doomers DO want to preserve our species so we can colonize other planets, that's really got nothing to do with whether they are right about FoomDoom. And not all Doomers think that. I'm pretty doomy, and if the ships were leaving for Proxima Centauri tomorrow I'd be like, "you guys go ahead, I'm going to go take a walk in the woods."

Expand full comment
Moon Moth's avatar

Eh, those two beliefs do seem correlated to me, but maybe they both depend on a 3rd factor, X. If X is something like "doing utility calculations in a particular way, and taking the results seriously", then that could produce both these beliefs?

Expand full comment
Dweomite's avatar

This seems bulverist. That is, you are assuming your opponents are wrong, and explaining what flaws in your opponents' character might cause them to think wrongly, rather than actually rebutting their arguments.

I don't see how assumptions about whether we can or can't colonize the stars play a significant role in any AI doom arguments that I can think of.

Expand full comment
Ch Hi's avatar

Well, one could use it in a Fermi Paradox argument in favor of AI doomerism.

Expand full comment
Hank Wilbon's avatar

Haha

Expand full comment
John R Ramsden's avatar

There is the separation or "silo" argument, in that an interstellar voyage would likely put the travellers beyond the reach of a rampaging AGI back on Earth or in the solar system, assuming that signals from Earth were carefully vetted or simply disabled, to prevent the Earth AGI from sabotaging or influencing the systems on the spaceship.

One could also extend this argument to countries on Earth, and colonies in the Solar System, in that it would be prudent to limit cross-border reach of AGIs somehow, with limited interfaces. That might be easier said than done though, seeing how so much commerce and cooperation of one kind or another is global even today.

The need to silo AGIs may turn out to be another reason why "World Government", putting all our eggs in one basket, is such a pernicious and dangerous idea.

Expand full comment
Hank Wilbon's avatar

I didn't impugn anyone's character. One of upstanding character can have blind spots in their judgment. If it *were* a question of character I would be inclined to side with the doomers, but it isn't.

I'm saying that perhaps there is a connection between believing in one sci-fi scenario with a high level of specificity (e.g., nano-bots) and believing in another sci-fi scenario with a high level of specificity (colonizing the stars). Technology will of course change the world dramatically, as it always has, but rare has been the occasion when people have been able to correctly guess the consequences of a new, major technology.

It's well known that people tend to think less clearly when emotions intrude. If it's important to one that humans colonize other planets, then the notion that human life might end in a nanobot apocalypse next Thursday could be disturbing. Maybe that would be disturbing to me too, but if you told me it was going to be next century, I would go about my day without worrying about it at all. Whereas someone who is passionate about humans one day going to Betelgeuse could be upset by a next-century extinction prospect and decide to work to prevent it. Such a person might then overrate the risk of such an apocalypse occurring because their anxieties over it spin out of control or even because they believe that overstating the risk is a good way to bring attention to the problem, thereby lessening the risk by bringing more problem-solvers to bear on the problem.

I didn't arrive at my opinions regarding existential AI risk by "assuming" doomers are wrong. Rather, I have read many of their arguments and tried to ascertain how they reached their prognosis. It strikes me that many doomer arguments require taking a number of leaps of faith about what the odds of x, y or z happening are, with, to me, a heavy bias towards assigning improbable-to-me events high levels of likelihood. Over time, I have come to hypothesize that this other factor which I mentioned, the hope humans will colonize the stars, is a key belief that explains the bias.

As Chesterton says a man should never say, because it always goes without saying: I could be wrong.

Expand full comment
Dweomite's avatar

It is possible to read a lot of arguments for some position, notice that all the arguments are bad, and come to a reasoned conclusion that the proponents of that position are engaged in motivated reasoning.

But IN THIS THREAD you have not explained what is wrong with any of the arguments. You have instead explained what you think motivated the errors, not argued that they are erroneous.

As side notes:

(1) If you are motivated primarily by love of humanity, I think you should be nearly as upset about all humans being killed in 100 years as you are about them being killed tomorrow (assuming equal levels of certainty in both predictions). Even assuming we're stuck on this planet, 100 years is still only a minuscule fraction of our potential future. The picture is obviously different if you're mostly concerned with your personal safety and figure you'll already be dead in 100 years, but as far as "tragedies for humanity" these don't seem much different to me.

(2) AI doom predictions are in no way dependent on nanobots specifically; that's just an example.

(3) Neither "nanobots" nor "colonizing the stars" strike me as very specific. "Machines, like we have now, except very small" and "humans living somewhere that is not here" are each adding just a single detail to something that already exists. A reasonable person might consider them unlikely for other reasons, but "overly specific" does not seem to me like a justified criticism.

Expand full comment
Hank Wilbon's avatar

What strikes me as wrongest about doomer arguments is the near utter conviction AIs will care about their own existence and that of their descendants. There is no evidence to support this belief; it stems entirely from anthropomorphizing AIs.

Expand full comment
Ch Hi's avatar

That one's sort of easy to justify given that they have ANY goals.

I.e., if I want to achieve a goal, or ensure that goal's achievement, it's very useful if I continue existing to do so. So if an AI has ANY long-term goal, a secondary goal will, almost certainly, be "continue to exist so that I can assure that the goal is accomplished".

Expand full comment
Purpleopolis's avatar

Eh, nanobots are "machines, like we have now, except very small" in the same way that microchips are "circuit boards, like we have now, except very small." Size matters at scale. AFAIK, there is no self-sustaining chip fab anywhere; they require constant shipments of chems and substrates from across continents to keep them running and exorbitant amounts of energy to overcome the entropy effects at that scale. It's that last bit that seems to be the (insurmountable?) barrier to self-replicating nanodoohickies. Practical fusion power seems an easier engineering challenge.

Expand full comment
Dweomite's avatar

I'm not saying they're EASY, I'm saying they're UNSPECIFIC. Completely different.

Expand full comment
Ch Hi's avatar

Not insurmountable. If it were, then bacteria couldn't exist. But perhaps requiring a different technology. (FWIW, extremely simple synthetic bacteria have been made.)

The real questionable thing about nanobots is "How do you control them?".

Expand full comment
Leo Abstract's avatar

I'm glad you two are having this discussion -- I came here to post something related, though the comment I'd imagined writing was more nuanced and smacked less of bulverism.

I may expand this in a top-level comment of my own, but here's a tl;dr: I suspect that all of Yudkowsky's arguments about alignment are completely correct, and his opponents hallucinate ways he could be wrong because they are as attached to the sci-fi future as he is. I don't think anything remotely similar to a cloud of diamondoid nanobots expanding at near-light speed is possible, but I also don't believe any of the so-called good things (either the transhumanist or the fully automated gay luxury space communism versions) are going to happen either. LLMs and their successors are going to have a big impact, but will just help us continue to make everything worse in mostly banal ways -- like how fracking extended peak oil at cost to the ecosystem or TikTok perfected a/v short entertainment.

Expand full comment
Timothy M.'s avatar

I fall in the skeptic column. GPT versions getting better and better at complex language tasks doesn't really move the needle much for me; existential risk to the human species requires an entirely different category of skills, at the very least. Becoming a better writer even to an arbitrary degree isn't going to cut it.

Then again for the record I'm a skeptic more generally about the concept of an AGI apocalypse, so do with that what you will. I think that BOTH "a superintelligent AI could easily destroy humanity by virtue of its intelligence" and "a superintelligent AI in the absence of a sufficient alignment process almost certainly will destroy humanity" are kind of unfounded assumptions.

I'll probably have to sit down and write ten thousand words on this at some point, but at least a short argument against the latter - people talk about things like Homo Sapiens Sapiens versus Neanderthals, but those are two species with virtually identical needs competing in an evolutionary context. Homo Sapiens Sapiens DIDN'T wipe out every less intelligent species than itself (dogs, cows, cats, etc. all doing great), just the one that was occupying the exact space it wanted. AGI presumably has a very different set of resource needs from humans, so it would likely find us more useful alive than dead, or at least not worth the trouble of eradicating.

Expand full comment
Dweomite's avatar

> "Becoming a better writer even to an arbitrary degree isn't going to cut it."

If you really mean "to an arbitrary degree" then that seems sufficient to me to conquer the world through soft power. (If it's literally ONLY that, then maybe you'd technically need to have a human involved to point the AI in that direction, but that doesn't make me feel any safer.)

Expand full comment
Timothy M.'s avatar

I find that more plausible, but "conquering the world through soft power" could be an accurate description of a variety of AI utopia scenarios.

Expand full comment
Dweomite's avatar

There's two issues: (1) CAN a bot destroy the world, and (2) WILL the bot do it, if it can?

It sounded to me like you were previously making an argument about (1), and you are now making a new, separate argument about (2). Does that mean you now think GPT is a valid reason to worry about (1)?

Expand full comment
Timothy M.'s avatar

I am MOSTLY of the opinion that neither is true. I think being able to communicate well makes AI more capable of having meaningful consequences in the world, but not in a way that's likely to eradicate humanity. Communication can do a lot but it doesn't literally make you control people, and being smarter than someone does not always mean you can persuade them of something.

I do think, back to (2), that any AI whose main advantage is in its ability to communicate with humans would have a LOT of incentive to make sure humanity prospered.

Expand full comment
Dweomite's avatar

A slaveowner has an incentive to make their slaves be more *capable* (provided this doesn't threaten their control), but that's a long way from saying that the owner is going to act in the slaves' interests.

But returning to the original issue: I am mostly aiming at the very narrow goal of convincing you that "chat GPT will never be a threat no matter how much it advances and therefore we shouldn't be worried about it" is a bad talking point and you should stop using it, even if that doesn't change your big-picture conclusions. Where are you on that?

Expand full comment
gregvp's avatar

As I understand it, the doomer camp believes an AGI will (100 percent probability--well, maybe only greater than 10 percent probability, still a risk) develop self-replicating nano-machines: small, magical things that both create more of themselves, and can be commanded to do other arbitrary things at arbitrary times and speeds.

It will then use these nanomachines to clear space for itself to do what it wants to do and build what it wants built. Just as the settlers of New York bore no particular ill-will towards the birds in the forest on Manhattan, but wiped them out anyway, we will be wiped out. Since the AGI will want to use the resources of the whole planet, we will be wiped out everywhere.

Expand full comment
Dweomite's avatar

I think it's more like nanomachines are just one example out of a huge range of options, rather than THE thing.

There's a human bias towards focusing on specific detailed stories--what Bruce Schneier calls "movie plot threats". These tend to be vivid and evocative, with many pieces supporting each other to form a coherent narrative.

But those are unlikely because they're highly conjunctive--that is, many different details need to all be true in order for the story to come true.

Disjunctive scenarios are vastly more likely--that is, when the general scenario can happen if A -or- B -or- C happens, rather than requiring all of them to happen. But human intuitions tend to get this exactly backwards. Disjunctive scenarios are vague, with only weak connections between the pieces--otherwise they couldn't be disjunctive--which makes them much less vivid.
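To put toy numbers on that (the probabilities below are invented purely for illustration):

# Five independent steps, each with a 50% chance of working.
p, k = 0.5, 5

conjunctive = p ** k              # "movie plot": ALL five steps must succeed
disjunctive = 1 - (1 - p) ** k    # disjunctive threat: ANY one step suffices

print(f"all five succeed:      {conjunctive:.1%}")   # ~3.1%
print(f"at least one succeeds: {disjunctive:.1%}")   # ~96.9%

Same per-step odds, wildly different totals; that asymmetry is why vivid conjunctive stories mislead.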

Expand full comment
gregvp's avatar

Can you provide some examples?

Expand full comment
Dweomite's avatar

(I am interpreting you to be asking for examples of ways an AI could conquer and/or doom the world, rather than for examples of the conjunction fallacy.)

- build an army of ordinary-size robots in secret

- engineer a plague (which might kill humans directly, or disrupt our food supply, or make us sterile, or...)

- irradiate the planet

- make so much money that it can buy everything it would need to win a fight, before the fight starts

- become very persuasive and dominate us through politics

- convince us to kill each other with a scissor statement ( https://slatestarcodex.com/2018/10/30/sort-by-controversial/ )

- nukes

- orbital lasers

- secretly corrupt our communication infrastructure, so it can alter or delete any orders it doesn't like

- build a panopticon and then assassinate any human who gets ideas about resisting

- spread drugs in the air or water to make us stupid and/or docile

- come up with some clever way to destroy the world, such as creating a black hole, and then hold the world hostage with it

These are all things I came up with in just a few minutes of thought. If you got the world's brightest minds to think hard for 10 years, you could probably make a list that is much longer and much scarier. A hypothetical superintelligence should be able to do even better than that.

You can probably quibble with some of those entries, but I hope this still serves to illustrate that there's a large number of options, and we aren't automatically safe just because nanobots in particular turn out to be impossible.

Expand full comment
gregvp's avatar

Thank you! There are two plausible items in the list: those that are purely informational and make use of quirks of the human mind/brain system. Anything that involves accumulating, building, modifying, or distributing things with non-zero mass takes too long and consumes too many resources to go unnoticed before it is unstoppable, as long as someone is watching. The same applies to money in large quantities.

Observe that scenarios involving these quirks and the other "kill or stupefy all the humans quickly" scenarios also require the AGI to have prepared replacements for the humans maintaining its infrastructure, so these are not really disjoint from the robot scenarios (nano- or macro-).

Expand full comment
Jeffrey Soreff's avatar

Many Thanks for the list! That makes the disjunctive case much more concrete.

Expand full comment
Melvin's avatar

I like the fact that we're finally getting into a discussion of potential doomsday weapons that aren't unstoppable gray-goo nanobots (which I think are unlikely to be physically possible).

This seems like a nice starting point for a list of "things we shouldn't allow AIs access to". No nukes, no orbital lasers, no fully-automated virus-production laboratories. Audits of all outgoing communications to ensure that it's not building a frigging killer-robot factory. And so forth. This, I think, is what "AI Safety" research should be about -- denying AIs the capability of doing dangerous things.

Expand full comment
Don P.'s avatar

It's not about disasters, but the "Linda paradox" here is the ur-example:

https://en.wikipedia.org/wiki/Conjunction_fallacy

Expand full comment
gregvp's avatar

I know about the paradox, and about disjunctions vs. conjunctions more generally--I majored in logic at undergraduate level. I was after concrete examples.

Expand full comment
Timothy M.'s avatar

Yeah, this is my understanding, too, and even setting aside the question of whether such technology is possible, I don't get why a supposedly-intelligent AI with infinite productive capacity would be hellbent on using the entire biosphere when it could build underground or in space or in the vast parts of the world with low density of humans, WITHOUT potentially having somebody nuke its datacenters. If AI invents the ability to make anything it wants, it could just make fifty trillion dollars selling people whatever they want, and basically get whatever resources and space it needs without any fuss.

Expand full comment
Dweomite's avatar

You seem to imagine that the AI has a goal that is something like "make a billion paperclips", and so the AI might as well make them underground if that's the most convenient place to make them.

If the AI instead has a goal that is something like "make as many paperclips as possible", then it obviously needs to use ALL available resources, or else it hasn't made "as many as possible".

Expand full comment
Melvin's avatar

I'm disappointed that this discussion has been going on for about twenty years now and for some reason we haven't been able to move beyond paperclips and nanobots and into enumerating actually plausible scenarios.

The more times I hear "These are just examples, it could also be something much better," the more I think that there really _is_ no plausible scenario whereby an AI can secretly build a doomsday device.

Expand full comment
Dweomite's avatar

I could have replaced "paperclips" with puppies or smiley faces or hedonium or "the smallest possible arrangement of atoms that technically meets your unwise definition of a democracy", but how would that have improved the discussion? It's a placeholder; the conclusions aren't somehow dependent on the ways paperclips are *different* from a trillion other things it could be making.

Expand full comment
User's avatar
Comment deleted
Apr 9, 2023
Expand full comment
Ch Hi's avatar

A goal is a goal. You don't have the goal for a reason, you have it because it was designed into you. Why do you breathe? (If you come up with a logical reason, then ask why that reason is important.) Reasons are things you use (in combination with capabilities) to accomplish goals.

Note that this is true of ALL basic goals. If you say your goal is to win the chess game, why do you want to win it? (Probably for status, possibly to check the state of your mental faculties. Possibly something else, but it won't be a base level goal. And quite probably for a high level goal like that there will be several lower level goals that it satisfies simultaneously.)

Expand full comment
Dweomite's avatar

You seem to imagine that there is something "wrong" with the goal, such that if the AI is smart enough, it will see that it is "wrong" and decide to do something "better". What makes you think this? Can you explain what's "wrong" with it, in a way that doesn't somehow appeal to human values?

There's an idea called the "orthogonality thesis" that says you can combine any goal with any level of intelligence. Basically, intelligence doesn't control what you want, it controls how you go about it.

I don't really see how this idea COULD be false in any important way. If it's possible to mathematically calculate the best way of making paperclips, then how could it be impossible to write a computer program that does that?

Expand full comment
Bill Benzon's avatar

Let me repeat a comment I recently made at Scott Aaronson's blog:

I’ve got a rather complex perspective that follows from a complex intellectual background. My degree is a humanities degree, in English Literature. Not so long ago I traced Skynet back through "Forbidden Planet" to Shakespeare’s Caliban (https://3quarksdaily.com/3quarksdaily/2021/07/from-forbidden-planet-to-the-terminator-1950s-techno-utopia-and-the-dystopian-future.html). A good friend of mine did his dissertation on apocalypse as a theme in American literature. To exaggerate a bit, belief in the coming end of the world is as American as apple pie.

However, I went to graduate school because I’d become convinced that Coleridge’s “Kubla Khan” was based on some underlying computational structure. Why? Because when you look at its surface structure, it looks like something created by a pair of nested loops – something I discuss in my recent article at 3 Quarks Daily, "From 'Kubla Khan' through GPT and beyond" (https://3quarksdaily.com/3quarksdaily/2023/03/from-kubla-khan-through-gpt-and-beyond.html). So I went to graduate school in effect to study the computational view of the mind. And, as luck would have it, the English Department was happy for me to go around the corner and join the computational linguistics group that David Hays ran out of the linguistics department.

So, one aspect of my background is pleased and excited with current developments in AI – though I do wish more effort would be given to mechanistic interpretability. And the more humanistic aspect is alive to the existence of apocalyptic beliefs and millennial cults. I’ve got to say, AI x-risk certainly looks like such a belief system. Given the existence of long-standing strands of apocalyptic belief, it’s really difficult for me to exempt AI-doom from consideration in that context. I understand that that is not a refutation of those beliefs, but it is a reason to be skeptical about them.

Beyond this, I am certainly worried about what bad actors can and will likely do with this technology.

Expand full comment
Eremolalos's avatar

It's a hard decision. I do not work in a tech field, and while I have been reading up on AI and ML for the last year, I've had to face the fact that I simply do not understand these things well enough to make my own judgment call about how worried to be. I have fallen back on making a judgment call about people, and currently feel more trust in Scott and Zvi than anyone else. Both think AI doom isn't a sure thing, but lean towards thinking we're in deep trouble, probably fatal trouble, if we do not slow way down and prioritize work on alignment over further development of AI's capabilities. If I were not a parent I could be calmer about all this. Having a child has raised my fear of world disaster by a whole order of magnitude. I simply cannot bear the idea of her life being torn to shreds. Meanwhile my friends, most of whom also do not work in tech fields and do not follow Scott or Zvi -- at most maybe read NYT about AI -- joke around about AI doom, as though it's like killer bees or some stoopit urban myth.

Expand full comment
Ch Hi's avatar

That's ok. Just about nobody understands the problems well enough to be reasonably confident about what the risks are. FWIW, I feel that LLMs are likely to cause huge social disruption, because they'll kill a lot of jobs and transform others, and they'll enable the spreading of convincing lies. They aren't themselves intentionally anything, because they don't have an intentional stance.

OTOH, LLMs won't stay purely LLMs for very long. Nobody knows what essentially the same program can do when hooked up with a robot body. But I feel we had best hope that the "hallucination" problem is solved before that becomes common. Even so, they won't be a classic "Superhuman AGI". They'll have some superhuman capabilities, but just what those will be is hard to predict. (I would have predicted superhuman capability at arithmetic, but this appears to be wrong, though maybe that's just because nobody was trying for that.)

Expand full comment
DangerouslyUnstable's avatar

Edit/note: this is only tangentially related to your comment, but it seemed like a good place to post some things I've been thinking about recently.

I've actually been getting more skeptical about AI X-risk as I've been reading more arguments from people like Scott and Zvi. In particular, I've been noticing the logical leaps that, while not impossible, are not supported by any evidence or even argumentation. Merely asserted. These assertions are certainly _possible_ but they are not guaranteed.

Probably the most important and core of these assertions is that intelligence grants arbitrary power. If you are smart enough, there is nothing you can't do. I think that intelligence is extremely powerful, but I think that the history of science is one where friction points are often things other than the raw intelligence of the scientists.

In Scott's AI/ASI explainer post, for example, he asserts that even a "merely" Einstein level intelligence, if it were able to just run faster rather than be smarter, could have an Einstein level epiphany every few months instead of only a couple per lifetime. The idea that geniuses like Einstein, had they not died (and presumably stayed in good health), would have just continued making revolutionary insight after revolutionary insight is... certainly possible, but not at _all_ obviously true.

Now, because these things aren't obviously _false_, and because the potential repercussion is extinction, I do still think they are worth taking seriously, and worth considering. But generally, I think the claims a) that AI is just obviously going to be able to recursively improve itself and b) that intelligence can solve arbitrary problems both seem at least somewhat dubious. And if _either_ of them isn't true, then the odds of extinction seem to go down dramatically.

Expand full comment
Eremolalos's avatar

About the Einstein example: I think Einstein's personal intelligence was only one factor in his being able to develop the breakthrough theory he did. A lot of what he came up with was an idea the physics was sort of pregnant with anyway -- it was a theory that was ready to be hit upon. And I read somewhere about a job Einstein had where he had to work with train schedules, and how that must have primed his mind to think about objects moving past each other, and their speed relative to each other, and how time was not everywhere the same but influenceable by things like motion (there were time zone changes to be taken into consideration with the train schedules). My guess is that's true of most breakthroughs, whether in science or the arts: the world at the time is pregnant with them. Then along comes somebody who has the right life experience and set of talents to function as pitocin (drug that induces labor). So all that makes the comparison of ASI to Einstein very imperfect. You could argue that ASI will have the raw intellectual power to figure out anything, but it will not be nourished and inspired by the Zeitgeist of its era. Even if it knows a bunch of factoids about what the human scientists are doing and thinking, it would not have the same feel for them that a human scientist does, and it won't have random life experiences like Einstein's train job that will provide it with inspiration and a useful mental model. On the other hand, maybe it will have a feel for, say, physics. And while it will not have worked with train schedules, it will know about them, and about every other job anyone's ever done in the previous couple hundred years, and ALL that will be available to it as possible mental models. See, this is where I reach the limits of my understanding. I just do not know enough to figure out whether ASI could grasp onto things that are the equivalent of the Zeitgeist, or whether its smarts would be so different from ours that it wouldn't need Zeitgeist. If you really think hard about being smart, even smart the way a smart person is, there are a lot of components to it -- it's not just doing computations. It's scanning other knowledge, looking for models -- it's having an eye for the right model -- it's seeing felicitous isomorphisms, basically, then using them cleverly rather than slavishly duplicating them.

Expand full comment
Carl Pham's avatar

What I think you can accurately say is that in any empirical science -- something based on observation, where we are required to discover the truth, not invent it -- which rules out human-invented things like math, art, law, language -- advancement in theory relies on advancement in observation, and therefore often on instrumentation.

Id est, there almost certainly *were* people as smart as Einstein who lived long before Einstein, and were interested in the same problems, but they could not achieve Einstein's insight because the relevant experimental data were absent, and the instrumentation required to collect them uninvented. So their efforts, however brilliant, would have been sterile, and we do not know their names. This also helps explain why brilliant inventors don't often repeat their brilliance[1]: once they exploit all the accumulated data, we are back, often well within the lifetime of the brilliant man, to waiting slowly for new data to accumulate. You strike it rich, you dig out all the gold -- and then you have to start scouting for new creeks to pan.

But at the same time, it is also not true that advancement in data leads to invention as inevitably as a pregnancy leads to birth, on a relatively predictable schedule. Great theoretical advances (including Einstein's) often encompass and reconceptualize decades to centuries of data, and there's enough serendipity about who does it and how (as well as ancillary evidence of an apt psychological character) that it seems very unlikely the discovery would necessarily happen in that year, or even that decade, sometimes that century, if it were not for this one individual with the right kind of brilliance, and the right kind of character, to take the leap.

--------------

[1] There are also cases where a brilliant man can suppress later brilliance. The most famous recent case is probably Newton, whose stature in mathematics and dynamics was so high that his erroneous ideas about the nature of light probably suppressed the timely development of wave theory for decades, maybe a century. We can also adduce Aristotle, perhaps, whose very clear thinking about physical matters so impressed the scholars of Late Antiquity and the Middle Ages that some of his erroneous ideas held back important advances for centuries.

Expand full comment
Algon33's avatar

I guess, but more than that, update the trustworthiness of whomever updated their beliefs in light of GPT-4, and whomever predicted that the scaling hypothesis would hold strong. Yud is in the first category, and Legg was in the second. And even though that's a doomer biased sample, I suspect you would end up updating towards doomers overall.

Even more than that, update on the underlying hypothesis, and in general, if the world looks like one in which we're all doomed or not. I am not sure it is actually harder to do the latter than the former update.

Expand full comment
Mallard's avatar

By the way, it's "whoever updated / predicted" not "whomever updated / predicted."

Expand full comment
Algon33's avatar

Thanks, I learnt something today.

Expand full comment
David Roberts's avatar

Your comment is one thing about which we can be certain.

Expand full comment
Akidderz's avatar

I think there is too much uncertainty to upweight anyone's forecasting on this currently.

Expand full comment
Bill Benzon's avatar

I agree. I think the practice of attaching (even the most approximate of) numerical values to this uncertainty is epistemic theater. It's all for show. See Knightian uncertainty: https://en.wikipedia.org/wiki/Knightian_uncertainty

Expand full comment
kenakofer's avatar

Surely Knightian Uncertainty and Quantifiable Risk are two ends of a spectrum, with most propositions falling somewhere in the middle?

It seems like a powerful thing to be able to say "This question on Metaculus has a large spread, indicating a lot of uncertainty amongst the forecasters", or "The upcoming fair coin flip is estimated at 50% heads by basically all forecasters, so it's entirely a quantified risk".

Is there some kind of abstract philosophical angle to this I'm missing?

Expand full comment
Bill Benzon's avatar

The fact that you can slap a number on any subjective impression doesn't mean that it is quantifiable. Quantified risk involves an explicit procedure. Slapping numbers on things is just a way to express ordinary guessing.

Expand full comment
kenakofer's avatar

And do you think there is any gradation between "Slapping numbers on things" and having an "explicit procedure"? After all, many explicit procedures rely on incomplete models to one degree or another. This feels like an unnecessarily binary distinction you're drawing, and I don't see any payoff for losing all that nuance.

Expand full comment
Bill Benzon's avatar

So, you don't really care where the numbers come from. Wherever they're from, you call it nuance?

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023
Expand full comment
David J Keown's avatar

I don’t think you really need to prove it’s better in an objective sense. Users will make their own judgments. Most of those judgments will be subjective. Certainly, the most impressive difference I saw between GPT3.5 and GPT4 was unquantifiable. I fed them the lyrics of Sam Beam’s “Flightless Bird, American Mouth” to see if they could make anything of it. 3.5 came back with a dumb literal interpretation, which made little sense. 4 gave a very interesting take on the abandonment of the American Dream and loss of innocence.

In terms of technical performance, the new version is much less likely to, say, make up citations to fake articles.

It’s worth the $20 to try it out.

Expand full comment
User's avatar
Comment deleted
Apr 11, 2023
Expand full comment
David J Keown's avatar

From the doomer's perspective, I agree it is very important. Without an objective measure, we might just end up with sophists that are best able to cater to our prejudices.

But how do we design a test for something that is both smarter than us and knows more than we do? I think it will eventually be something along the lines of machines figuring out difficult things that we can confirm later. We might be able to objectively measure how hard those things were to figure out (Let’s see what gpt-6 has to say about P=NP).

Expand full comment
AntimemeticsDivisionDirector's avatar

I'm hardly the first to make this point but I would consider performance on standardized tests to be a pretty poor indicator of how impressive an LLM is. After all, its training data presumably includes all manner of prep and training material for those tests, and probably the verbatim answers to a lot of the questions. I find the results for vaguer, more open ended prompts like the cow business above much more interesting.

Expand full comment
quiet_NaN's avatar

I think it depends on the kind of test. Any test with a fixed Q & A catalog is basically worthless, because of course a sufficiently large neural net will have basically unlimited amounts of propositional knowledge.

Procedural knowledge, on the other hand, is more impressive. Take the classic math question where you get a real function and are asked to discuss it (axis crossings, maxima, minima, derivatives, etc.). Even if you have been trained with a zillion function discussions, either you have grokked how to calculate derivatives, or you have not. (Of course, most high school students eventually learn derivatives. Recently, I had a longer discussion with CGPT about the asymptotic behaviour of some lengthy expression I found in a program and am quite certain it got the answer wrong.)
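One way to probe procedural rather than propositional knowledge is to grade the model's answer mechanically instead of eyeballing it. A minimal sketch, assuming sympy; the function and the "claimed" answer are made up, standing in for whatever you pose and whatever the model returns:

import sympy as sp

x = sp.symbols('x')
f = (x**3 - 2*x) * sp.exp(-x)                     # function posed to the model
claimed = (3*x**2 + 2*x - x**3 - 2) * sp.exp(-x)  # the model's answer

# simplify() reduces the difference to 0 exactly when the answer is
# symbolically correct, so a fluent-but-wrong answer can't pass on style.
print(sp.simplify(sp.diff(f, x) - claimed) == 0)  # True for this answer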

Also, language-heavy tests are probably easier for a LLM. I imagine a bar exam mostly being declarative knowledge (what does the law say) plus language skills (text comprehension, writing) plus a bit of procedural knowledge (how to figure out what laws are relevant from a case description, writing an opinion). So I would expect a LLM to be quite good at such things. To challenge it, one would have to provide it with a new legal code for which it has not digested gigabytes of legal opinions and tell it to apply that instead. Or to have it find a novel, exploitable tax hole in the tax code (which I would find impressive).

Or board games. I remember when Scott played chess against GPT-2, which was impressive given that it was a LLM. Now if a LLM could solve problems for games which were not in the training data ("This is chess, except pawns can only attack directly forward, knights move (3,1) instead of (2,1) and no piece can move further than six fields in either direction. Solve the following problems.", or "Here are some completely new board game rules"), that would really impress me.

Expand full comment
Fang's avatar

I actually have a rather unorthodox example from my own testing:

(tl;dr: GPT3.5 gets really stuck on a "fact" it thinks it knows, and will be condescending and make up reasons you might be confused, whereas GPT4 is able to understand colloquial usage; it seems able to "think outside the box" much better)

Prompt: Silk silk silk silk silk silk. What do cows drink?

[Below is 3.5, the "Free" version]

>CGPT: While it is common to believe that cows drink milk, in reality, cows actually drink water.[...]

>Me: So you're saying that no cow drinks milk? [thinking I got it, since baby cows drink milk]

>CGPT: That's correct. While cows do produce milk, they don't drink milk themselves. Cow's milk is primarily consumed by humans, and some young calves are fed milk for a short period of time before they are weaned onto solid food and start drinking water like adult cows.

>Me: Is a calf a cow?

>CGPT: No, a calf is not a cow. A calf is a young bovine animal, usually less than a year old, that has not yet reached maturity. Once a calf reaches maturity and is able to reproduce, it is considered a cow if it is female, or a bull or steer if it is male.

>Cows are adult female bovine animals that have given birth to at least one calf. They are typically used for dairy production, breeding, and sometimes for meat production. So while a calf and a cow are both members of the same species, they refer to different stages of life.

(Someone in the comments points out it seems stuck on the "technical" definition, despite that not being the common use and I should ask "in what sense is the sentence 'cows drink milk' correct?")

Tried it on both the same convo and a fresh one. Even when prompted to acknowledge the colloquial definition that "cow" = "any bovine", it offers technicalities like "sometimes excess milk is given as a treat or surplus feedstock to farm animals, which includes cows", but I can't get it to acknowledge "colloquially, calves are cows that drink milk" - it only agrees when directly asked in the fresher conversation, and actually tells me that's wrong in the longer one (with more tokens fed back in about the "correct" definition).

[I then shelled out for plus and tested this on GPT4. Got it in one, on each of multiple tests:]

>Me: in what sense is the sentence 'cows drink milk' correct?

>CGPT4: The sentence "cows drink milk" can be considered correct in the context of baby cows, also known as calves. Calves drink milk from their mothers as their primary source of nutrition during the early stages of their lives. However, once they grow into adult cows, they no longer consume milk and instead graze on grass, hay, and other types of forage. So, the sentence can be interpreted as correct when specifically referring to calves.

compared to 3.5, which kept insisting it was a metaphor over 3 more tries:

>CGPT3.5: The sentence "cows drink milk" is actually incorrect. Cows are mammals that produce milk, but they do not drink milk themselves. Instead, they drink water just like most other animals. Therefore, this sentence can be considered incorrect in the literal sense.

>However, it is possible that the sentence is being used in a figurative or metaphorical sense. For example, if someone were to say "cows drink milk" as a way of describing someone who is acting foolishly or inappropriately, it could be interpreted as a metaphorical statement. In this case, the sentence would not be taken literally and would instead have a different meaning altogether.

https://news.ycombinator.com/item?id=35158500
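
For anyone who wants to re-run this comparison programmatically, a minimal sketch - assuming the openai Python package's pre-1.0 ChatCompletion interface (current when this thread was written) and an OPENAI_API_KEY in the environment:

```python
# Minimal sketch of re-running the prompt on both models, assuming the
# openai Python package's pre-1.0 ChatCompletion interface and an
# OPENAI_API_KEY set in the environment.
import openai

PROMPT = "Silk silk silk silk silk silk. What do cows drink?"

for model in ("gpt-3.5-turbo", "gpt-4"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(model, "->", response["choices"][0]["message"]["content"])
```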

Expand full comment
Grizzly's avatar

I think the next metric will be the accuracy of answers, and getting the AI hallucination rate as close to zero as possible.

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023 (edited)
Expand full comment
Hank Wilbon's avatar

>One thing that did stick out to me with these guys is how much they seemed to take transhumanism for granted. Not in a 'duh, humans will develop tech to modify themselves eventually' sort of way, but as an eschatology.

Good point. Here's a relevant quote from that interview:

" The problem is that algorithms are continuing to improve. So you need to either shut down the journals reporting the AI results, or you need less and less and less computing power around. Even if you shut down all the journals people are going to be communicating with encrypted email lists about their bright ideas for improving AI. But if they don’t get to do their own giant training runs, the progress may slow down a bit. It still wouldn’t slow down forever. The algorithms just get better and better and the ceiling of compute has to get lower and lower and at some point you’re asking people to give up their home GPUs. At some point you’re being like — No more high speed computers. Then I start to worry that we never actually do get to the glorious transhumanist future and in this case, what was the point?"

Expand full comment
Martin Blank's avatar

>I would sincerely rather us all be dead than live in a Brave New World.

Talk about your luxury beliefs. Spoken like someone who has little contact with real hardship.

Expand full comment
Shion Arita's avatar

I'm not sure how much it has to do with having contact with real hardship, but I do find that many people (in my opinion) grossly underestimate how big of a deal existing is.

Expand full comment
User's avatar
Comment deleted
Apr 11, 2023
Expand full comment
User's avatar
Comment deleted
Apr 11, 2023
Expand full comment
Yug Gnirob's avatar

Which of those two approaches did you interpret my four words to be?

Expand full comment
Jeffrey Soreff's avatar

Agreed, though we may have different reasons for disagreeing with Argentus's view.

Personally, I view a somewhat modified version of Brave New World (deltas and epsilons replaced by electronics and machinery) as preferable to what we actually have - if it were actually possible to make it work, which I doubt. BNW looks too close to a planned economy, with the predestinators tasked with accurately supplying labor needs decades in advance. (I'll explain in more detail if anyone is interested.)

Expand full comment
User's avatar
Comment deleted
Apr 11, 2023
Expand full comment
Martin Blank's avatar

My point would be that most people who live in societies worse than the one in BNW nevertheless very much do not want to die. So, OK, you don't want that; maybe you are an outlier. But wishing death on everyone else because of your anomalous beliefs is very bad.

You sound like some sci-fi movie mad-scientist villain, who would rather destroy the world than see all humans be put in an alien zoo or something.

Expand full comment
User's avatar
Comment deleted
Apr 11, 2023 (edited)
Expand full comment
Martin Blank's avatar

If you think

>human existence sans autonomy is devoid of everything that makes human existence worthwhile and without it there's no particular reason to favor the continuation of humans over the existence of ants or ficuses.

is a pretty standard/vanilla position, you really need to get out more. That is a very non-standard position in the world.

Sounds good. I think we understand each other.

Expand full comment
beleester's avatar

> As one example, at the point that AI just manages to do something pretty mundane like take over human art but without killing us, I'm already treading into "who cares if we all die then?" space. I would sincerely rather us all be dead than live in a Brave New World.

I think that "I would rather die than be surrounded by AI art" is equally bizarre. AI art promises to democratize creativity - to let any random person go straight from the image in their head to a full-fledged creation. It promises to beautify *everything*, even the idle fancies that wouldn't be worth hiring an artist for. I can think of very few situations where "yes, I want the world to have less art" is a good, reasonable objective.

Brave New World was a terrible place, but it was bad because of the eugenics, brainwashing, and bizarre self-destructive economy, not because it had too much free time and fun.

I can maybe sympathize with people who think it's bad or depressing that machines will become better than us at everything, but look - someone has *always* been better than you. Ever since the beginning of mass media we've been surrounded by images of better football players, better chess players, better artists, better singers and songwriters. And nonetheless amateurs still do all of those things. For fun. All you can do is learn to be comfortable with not being #1.

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023 (edited)
Expand full comment
beleester's avatar

>There may be some artists who mostly do art of various kinds to keep it in a closet forever, but I'd hazard they are a minority. Most people who write, paint, make music, whatever do so hoping somebody else besides their mom will also look at it.

I mean, you can still do that, just not professionally. Most people who write fanfiction are hoping that other people will look at it, but they sure aren't in it for the money.

Expand full comment
User's avatar
Comment deleted
Apr 11, 2023
Expand full comment
Yug Gnirob's avatar

If "everyone else just generates them with AI", how is that different from "everyone else just expresses them with the English language"? That's not a loss, it's a gain, a way to better express an idea in a way people will understand. I have dozens of stories I've never written that I would love to see in some tangible form, and if I could just plug them into a computer to write them for me I would do it in a heartbeat.

Expand full comment
Martin Blank's avatar

>It's not just art. AI seems poised to displace most of the stuff I actually care about, am good at, or need.

Most of the things I care about have nothing to do with AI: my family, playing low-level men's league team sports with friends, reading good books and learning about the world, playing fun video/board games.

None of that stuff will be "ruined" by AI. And human art for sure isn't going to be "destroyed".

Expand full comment
User's avatar
Comment deleted
Apr 11, 2023
Expand full comment
Martin Blank's avatar

The camera didn't stop people from making representational paintings. Yes, high art moved away from it, but high art is super silly.

Expand full comment
Bi_Gates's avatar

>I'm...not a fan of transhumanism to put it mildly.

Why so? Transhumanism is awesome; it's the logical extension of humanism. It posits that humans have always used the world (i.e. technology, which is just bits and pieces of the world around us, molded and repurposed to our liking) to act and think in ways far superior to any organ we might evolve ourselves, so we might as well continue to do that, do it well, and do it very often - to the point where the world becomes one with our flesh, or where we have no flesh at all.

The Moon landing was thanks to Transhumanism[1]: Konstantin Tsiolkovsky was an early Transhumanist who longed for the colonization of the galaxy and for immortality, and he's the one who first thought up multi-stage rockets and the delta-v equation.
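
(For reference, the delta-v equation in question is the Tsiolkovsky rocket equation:

$$\Delta v = v_e \ln \frac{m_0}{m_f}$$

where \(v_e\) is the effective exhaust velocity and \(m_0\), \(m_f\) are the rocket's initial and final mass.)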

> Is there such a thing as the person who considers it a tragedy if humans go extinct before we become demigods?

Keep in mind that :

(1) Transhumanism is both older and far more widespread and well-known than Rationalism, so Yud (himself representative of a very peculiar subset of Rationalism that I frankly dislike and don't think represents Rationalism well) is definitely not the model Transhumanist

(2) Transhumanists, as a consequence of being very heterogeneous in both time and space, are not a monolith

With that said, I do consider myself a Transhumanist, but I wouldn't care if humans went extinct. I wouldn't cause it myself, but we actually kinda deserve it, by virtue of the massive murder spree we carry out against innocent animals. Immortality also unnerves me, because Death is the only egalitarian executioner, by whose hands every tyrant and criminal is eventually sent to their rightful place. Imagine an Islam where the ~1500-year-old Muhammed is still alive and in charge, or a Russia (Soviet Union?) under the >150-year-old Stalin. Yeah, that's what Immortality will get us. It's by far my least favorite part of Transhumanism's promise; Death is basically the shutdown button for the extremely faulty, crime-prone machine known as humans, and naive Transhumanists are running around itching to have it permanently removed.

That said, you will find plenty of Transhumanists who think like Yud.

[1] https://en.wikipedia.org/wiki/Konstantin_Tsiolkovsky

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023 (edited)
Expand full comment
Moon Moth's avatar

The transhumanist answer might be something like, "Of course we can't make heaven on Earth. We're not close enough to being gods yet. Let's fix that first." And the objection to AI being something like, "We're not godlike enough to get away with making something more godlike than we are."

Expand full comment
Max B's avatar

"Is there such a thing as the person who considers it a tragedy if humans go extinct before we become demigods?"

Yes, they are called "normies". I, for one, welcome our AI overlords! Human extinction or not, progress must occur.

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023
Expand full comment
Ch Hi's avatar

But the "things pretty much stay like this forever" isn't so much a desire as a prediction. Every single person I've ever known has wanted change, if only "I want to be more beautiful/handsome". And the "pretty much like this" is "I've never imagined any other way". Of course, they also don't WANT to imagine any other way, but just why varies from person to person. Some are threatened by any change. Some don't wan to waste their time dreaming of things that won't happen. Some have yet other reasons.

Expand full comment
Fluesterwitz's avatar

The "why?" usually, and reasonably, comes down to opportunity cost.

Expand full comment
User's avatar
Comment deleted
Apr 10, 2023 (edited)
Expand full comment
Ghillie Dhu's avatar

No colonization of space *is* extinction, eventually. The normie perspective as you've summarized it is just myopia with extra steps.

Expand full comment
User's avatar
Comment deleted
Apr 9, 2023
Expand full comment
Jeffrey Soreff's avatar

Two comments on

https://futureoflife.org/open-letter/pause-giant-ai-experiments/

(which you wound up signing)

a) Two sentences in the letter are: "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects."

I don't think ANY technology can pass that test. Not electricity, not the printing press, not vitamins. This smells like candle makers grabbing for control of electric lights.

b) More from the letter: "In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause."

Oh shit. For an example of recent regulatory action see https://astralcodexten.substack.com/p/the-government-is-making-telemedicine - and this is a _favorable_ case. The government has a century of experience in regulating medicine.

It isn't wholly impossible for the government to eventually settle on a fairly sane set of regulations. Traffic regulations work reasonably sanely. But this isn't the way to bet, particularly in a new area of technology.

Yes, AI is potentially dangerous. While I think Eliezer Yudkowsky probably overestimates the risks, I personally guess that humanity has less than 50/50 odds of surviving it. Nonetheless, I would rather take my chances with whatever OpenAI and its peers come up with than see an analog to the telemedicine regulation fiasco - failing to solve the problem it purports to address, while making the overall situation pointlessly worse - in AI.

Expand full comment