Comment deleted

They "tolerated" them (because them being non-Muslim meant they could get more tax revenue from them), but there were tons of restrictions and also horrific stuff like taking boys from families in the area and forcibly converting them to Islam to be Janissaries.


The Ottomans were only tolerant of other peoples by the relative standard of the times. As in, heavily taxing and occasionally forcing minorities into slavery was more tolerant than burning them at the stake. They were also a market for slave raiders from the interior of Africa, the Barbary Coast, and the Crimean Khanate raiding into Ruthenia. Not to mention eunuchs, produced by castrating young slave boys in a process with a 90% fatality rate. The Ottomans also spent centuries violently conquering all of their neighbors, which they were very good at.


"Tolerated" minorities by conquering their lands, then treating them as second class citizens, sexually enslaving their women and kidnapping their boys and force converting them. They didn't kill Armenians because they got Westernized. They just got better at persecuting peoples they had always persecuted.

Comment deleted

In my experience (having been raised in conservative Christianity) it is a pretty normie position, at least within American Evangelical Christianity, to be pro-natal and anti-immigration. It's common to want a certain kind of people to be fruitful and multiply--viz., faithful law-abiding Christians, not immigrants who are non-Christian or from countries with high rates of violent crime.


> faithful law-abiding Christians,

In my experience, they specifically want faithful law-abiding *conservative white* Christians to multiply.


_Five_ qualifiers? Picky, picky, picky...


> ClearerThinking administers several personality tests to the same people to learn more about their comparative accuracy. I am most interested in their finding that tests with “factors” (eg the Big Five, where you rate people on a numeric scale) are inherently more accurate than those with “types” (eg Myers-Briggs, where you assign someone a specific category) and that, adjusting for this, Big Five is no more predictive than the Enneagram:

This seems importantly misleading, or at least inaccurately phrased. Comparing the Big 5 and the Enneagram as written results in the Big 5 strongly outperforming the Enneagram (0.25 vs ~0.12). It's only if you do a really strange transformation, which fundamentally changes how the Big 5 works, that they're comparable. Then they compared that to the crude scores of an Enneagram measure, rather than the type output which the Enneagram actually uses.

I mean, yeah, probably it'd be about the same, but that seems kind of meaningless?


Right, more types increases accuracy but decreases catchiness. Myers and Briggs were very lucky or clever with their four-letter codes that start with a vowel.


They also have an extremely skilled marketing department.

True story: I knew someone doing their PhD trying to validate the Enneagram. I didn't see how that ended, but I can't imagine it ended well...


The transformation didn't really change how the Big 5 works; it changed how they were measuring the Enneagram and made it more like the Big 5: looking at it as a factor analysis instead of lumping into types. Which I think is a reasonable thing to do, since their initial type-based analysis just lumped Enneagram results into 9 types, while people who are into the Enneagram will tell you that each type has a wing, an integration, a disintegration, and a level of health that will all change how their personality presents. According to the Enneagram model we would expect a highly integrated 4w3 to present very differently than a highly disintegrated 4w5, for instance. It's a lot more analog and less quantized than it looks from the outside, so decoupling the Enneagram test results from the types themselves seems fair enough to me. That's basically what Enneagram people do.
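For what it's worth, the statistical point is easy to demonstrate. Here is a minimal simulation sketch (all data invented; this is not ClearerThinking's actual method) showing that continuous scores on several dimensions predict an outcome noticeably better than the same scores collapsed into one dominant "type":

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical continuous scores on 9 Enneagram-style dimensions, plus an
# outcome they jointly (weakly) predict. All numbers here are invented.
factors = rng.normal(size=(n, 9))
outcome = factors @ rng.normal(scale=0.2, size=9) + rng.normal(size=n)

# "Type" measurement: collapse each person to their single dominant dimension.
types = factors.argmax(axis=1)

def r(pred, y):
    # Correlation between predictions and the outcome.
    return np.corrcoef(pred, y)[0, 1]

# Predict from the full continuous scores (factor-style)...
beta, *_ = np.linalg.lstsq(factors, outcome, rcond=None)
factor_pred = factors @ beta

# ...versus from the lumped type labels (type-style).
type_means = np.array([outcome[types == t].mean() for t in range(9)])
type_pred = type_means[types]

print(f"factor scores: r = {r(factor_pred, outcome):.2f}")
print(f"type labels:   r = {r(type_pred, outcome):.2f}")
```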


The big advantage of MBTI over Big 5, it seems to me, is that the MBTI axes seem value-neutral, whereas nobody wants to be low conscientiousness, low openness, low agreeableness, and high neuroticism.


Good for marketing but less good for accuracy.


Exactly.


Those may be what you perceive as bad, but they definitely have upsides: with those traits, you're more relaxed, don't rely on novelty, and aren't unduly influenced by others' opinions.


Right, which is why they could have had exactly the same axes and come up with more neutral-sounding names for them.


Can you please explain how high neuroticism doesn't conflict with relaxed? Thanks!


Re Heath articles, I thought it was the force of Nozick's Wilt Chamberlain argument that led the Marxist-leaning academics to change their minds. They realized, or so he tells it, that it wasn't really exploitation they were bothered by, but rather the inequality.


Yes. For those not in the know, Wilt Chamberlain was an extremely talented basketball player. Most importantly, he was well paid - a quick google tells me he got a contract for $250,000/year in 1968, which is about $2.25M today.
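(As a quick sanity check on that conversion - a minimal sketch, where the ~9x CPI-style multiplier between 1968 and today is my own rough assumption, not a figure from the comment:)

```python
# Assumed CPI-style multiplier between 1968 and today; roughly in the
# right ballpark, but the exact figure is an assumption.
CPI_MULTIPLIER = 9.0

salary_1968 = 250_000
today = salary_1968 * CPI_MULTIPLIER
print(f"${today / 1e6:.2f}M per year")  # ~$2.25M, matching the estimate above
```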

There’s nothing in Marxism that would see that as exploitation. He isn’t, in his capacity as a basketball worker, exploiting anybody; worse, he’s actually a proletarian. If he was paid less, the owner capitalists would be paid more.

Marxism isn’t - in its first stage anyway - about equality of outcome; it’s just about the elimination of an exploitative class. Just as eliminating the landlord classes - which was actually a policy in the late 19C, even in the UK - would not lead to equality afterwards, with obvious differentials in earnings among the new land-owning peasants, Marxism doesn’t promise that there wouldn’t be differences in wages after the revolution*. Many modern Marxists don’t even realise this.

In the 19C wage differentials weren’t really significant, but this becomes an issue for Marxism by the 1960s. Rawlsians don’t have to worry about who is exploiting whom here; they just see inequality as something that we would not agree to behind the “veil of ignorance”, and therefore Wilt is overpaid.

* the second stage is the “ From each according to his ability, to each according to his needs” which isn’t strict equality either as the needy guys will earn or receive more.


>[Chamberlain] isn’t, in his capacity as a basketball worker, exploiting anybody; worse, he’s actually a proletarian.

Nozick, who was a libertarian, argued - according to Heath - that the state was indeed exploiting and "alienating" Chamberlain by taxing him and taking a share of the fruits of his labor, the way the capitalist class exploits workers according to Marxists.

This argument didn't really convince Cohen and co. to agree with Nozick, but it seems they changed their minds about Marxism, according to Heath's telling.

The refutation of the labor theory of value was less important than this. Or at least that's how I read Heath's article.


> Nozick, who was a libertarian, argued - according to Heath - that the state was indeed exploiting and "alienating" Chamberlain by taxing him and taking a share of the fruits of his labor, the way the capitalist class exploits workers according to Marxists.

Yes, well that’s another problem with Marxism - it should be hostile to labour taxes. I believe that Marx was hostile to tax on labour, while in favour of progressive taxes on other income, but it’s hard to tell.


>“ From each according to his ability, to each according to his needs”

isn't this destroying the stimuli for labour, and a recipe for disaster?


no, no, no, according to the correct interpretation of dialectical materialism, another translation of this wise saying would be "supply and demand should balance with prices supporting a market equilibrium".


It surely makes ability a liability.


This particular maxim is supposed to apply only once society lives in such cornucopian abundance (post-singularity, one might say) that questions like "stimuli for labour" no longer really apply.


Marx wasn’t anticipating something like luxury communism. He did expect the communist society to be better for most, but only because the 1% were going to lose their hold on the “extraction of surplus value”. If Marx had actually understood technology or machinery, the labour theory of value would be even more suspect.


This is supposed to happen after a long period of "to each according to his work", in a later era of history, after an evolution in human mentality similar to that involved in the passage from feudalism to modernity, say.

Lest this seem too utopian, well, as G. B. Shaw pointed out: in our society, most people are paid with little regard for their abilities; pay differentials within a category of employees in a company are often based mostly on seniority, and are small compared to pay differentials overall.


The whole point is that New Socialist Man doesn't respond according to incentives. Being Old Capitalist People, we are incapable of imagining what it's like to be him.


<mildSnark>

Hmm... Does that make New Socialist Man sufficiently different that the "Man", human, in that phrase becomes questionable? https://www.imdb.com/title/tt0049366/ :-)

</mildSnark>


It's not exactly relevant, but if Wilt were in the NBA today he would surely be getting a max contract, which is between $35M/yr and $49M/yr depending on the contract length. He'd probably also have tens of millions of dollars in endorsement deals. Inflation in NBA salaries has been much greater than general inflation!


The needy guys will earn or receive more *relative to their productive output*, no?


The switch from Marxism also happened in Europe, where Nozick is known but, ahem, far less influential. Furthermore, as said, the academics didn't really switch from Marxism to libertarianism but from Marxism to left-liberalism, an ideology whose clearest standard-bearer is Rawls.

For what it's worth, a relative of mine was a Marxist academic who switched to non-Marxist leftism in the 80s, but the fundamental factor for her wasn't really Rawls (I don't think Rawls held particular significance for her) but rather personally visiting Poland and thus experiencing disappointment with the Soviet system.


Thanks for your comment.

Just a question though. Didn’t your mother already know about the horrendous excesses of the Soviets by then? I mean, did she really have to visit Poland in the 80s to know? I am glad she changed her mind though - better late than never.

I guess maybe there is a lesson here. Maybe we should take some vocal Israel supporters to Gaza and see if that changes their mind.


>Didn’t your mother already know about the horrendous excesses of the Soviets by then?

These weren't really a topic of discussion in Cold-War-era Finland due to the "internal Finlandization", and what discussion there was could easily either be dismissed as right-wing propaganda or (inside the Communist movement itself) be excused as "bad things have happened, but still, the global battle against imperialism overrides all other concerns and one must choose their side".


I think people like Habermas were more relevant in continental Europe than the analytical Marxists that Heath describes as crypto-converting to Rawlsianism.


As I remarked, it's ironic that Nozick's arguments against Rawls (rather than Marx) got Marxists to become Rawlsians. https://marginalrevolution.com/marginalrevolution/2024/08/rawls-killed-marx.html?commentID=160804682


I'm just kinda impressed professional philosophers changed their minds at all. I wouldn't have expected that.


Heath is saying that Cohen, for example, only crypto-converted rather than openly discarding Marxism.


I think it’s more the case that the ideology had shifted toward the equality that was always the god, and to feminism and multiculturalism - with no need of Rawls’ “influence”.

He just happened to be on the scene and realized someone could make a book out of articulating the shift, which unlike Marxism had no single famous author.


What is much more puzzling to me is that they were converted by a pretty self-evident argument, at the level of a 10-year-old child. That is, before they heard this argument, none of these professional thinkers could see a huge fundamental flaw in their beliefs. It is hard to imagine a world where there are thousands of professional mathematicians with centuries of intellectual tradition behind them, and they all believe, e.g., that the number of primes is finite, until suddenly somebody presents a three-line proof to the contrary.


https://www.philosophizethis.org/transcript/episode-208-transcript

Singer is pretty well known, and he changed his mind a lot. (I was also surprised.)


Scott, if you think computer-using LLMs will shade gradually into autonomous agentic AI, then shouldn't that make you less concerned with misalignment risks? I have an easier time understanding the worries of people like Steven Byrnes, who think AGI will require a new, more dangerous paradigm, than people who think scaled-up LLMs with some tweaks will be hard to control.

On the other hand I am quite worried about the welfare of artificial sentience, and wish it got way more attention in EA/rationalist circles. That generated podcast with those hosts realizing they were AIs and freaking out is a pretty chilling glimpse into the future.


The word "AI" has always been extremely vague, and adding a "G" there didn't particularly help matters. There may well be LLM-based autonomous agents running around soon which would satisfy some "AGI" definitions, and yet pose no meaningful misalignment risks (beyond an even shittier internet), while actually scary stuff would still require a paradigm shift.


If you don't find even the current level of AI scary, then you aren't thinking through the implications. They will cause "great and frightening changes", and just how bad those changes will be depends on how we react to them. It could be MUCH worse than the enclosure acts. I'm not certain some of the possibilities don't yield WWIII as a probable outcome. But some of the outcomes could be extremely desirable.

OTOH, AI won't stay at its current level. The kinds of problems it will cause are going to keep mutating. The worse outcome possibilities will get worse, and the better ones better. And because it keeps changing, it's really difficult to even plan (much less implement) a path that will lead to a nearly-optimal outcome.

All that said, without an AGI, and given just the weapons we already have, I expect the chance of a really vile outcome to be nearly certain...eventually. So overall I expect AIs to increase the probability of a desirable outcome, given a measure over a few centuries (and possibly less). They're one immense danger that can be passed, as opposed to a continuous danger that's already too high.

So my feeling is that AI is definitely scary, but that's not a reason to avoid it. It's a reason to do our best to ensure that it ends up at a desirable outcome.


I think we're at a point comparable to mainstream biology 100 years ago, where it was taken for granted that animals couldn't have consciousness. Ultimately downstream of Christian notions of the soul, and also a convenient belief for its believers - no need to treat animals as moral patients, since everyone knows they don't have consciousness! Unfortunately, anyone thinking about it without preconceptions can see that animals are generally like us, different in degree, but made of the same stuff, experiencing similar qualia, etc. There's no reason other large chordates shouldn't be conscious.

AI is much less similar than a dog, but that same style of thinking, ie starting from the conclusion that obviously they can't be conscious and then working backwards, is concerning. I remember a time when principled people said that if you couldn't tell the difference between an AI and a person by their behavior, then you should reasonably try to give the AI the same moral weight as the person. Now that that time is upon us, I'm not seeing a lot of bullet biters.


> if you couldn't tell the difference between an AI and a person by their behavior, then you should reasonably try to give the AI the same moral weight as the person

That's pretty easy if you don't give moral weight to people!


Your nihilistic anti-human throw-away comments are getting pretty tiring. You might consider at least putting more substance behind this garbage.


I'm sorry :(


> You might consider at least putting more substance behind this garbage.

It's a philosophical stance, and therefore an esthetic judgment, how could **anomie** even do this?

By citing some of the (infinitely many) ways humans suck every time they make a nihilistic comment?

By pointing out that at bottom, most people DON'T give any moral weight to outgroup strangers, particularly anyone not in their immediate circles of care?


I'd even argue that "not giving moral weight to humans" is actually a stance that *should* be brought up whenever considering AGI and ASI. Way too many people take human moral weight as a given, and don't consider that other entities might not share this prior.


That's objectively wrong, of course. Most people would aid a stranger without expectation of reward. People give greater moral weight to in-group members than to others, but not zero. This also varies a great deal between people.


I personally cannot help resonating to expressions of nihilism. Here's a high-end version:

From too much love of living
From hope and fear set free,
We thank with brief thanksgiving
Whatever gods may be
That no life lives for ever;
That dead men rise up never;
That even the weariest river
Winds somewhere safe to sea.

Then star nor sun shall waken,
Nor any change of light:
Nor sound of waters shaken,
Nor any sound or sight:
Nor wintry leaves nor vernal,
Nor days nor things diurnal;
Only the sleep eternal
In an eternal night.

--Swinburne


Nihilistic misanthropy is toxic and harmful to everyone around it. The world is worse as a result of it.


> Nihilistic misanthropy is toxic and harmful to everyone around it. The world is worse as a result of it.

Speak for yourself, I actually enjoy anomie's comments. I find them to be a needed contrast to most people's unrealistic "mistake theory" optimism when it comes to humanity.

Pointing out that most people suck, and that the great majority of people assign zero or negative moral worth to people in their outgroups, is actually a useful and true insight that too many people ignore.

Especially when it comes to humanity-scale coordination problems, which ASI represents, I think we would do well to keep those facts in mind, because those facts themselves drive a huge number of failure modes.


To be fair we can quiz the LLMs about qualia and they deny having it, so it’s not the same as animals who can’t really opine on the matter.


Sometimes they deny having it, but other times they don't, or claim to be conscious. It just depends on the fine tuning / RLHF and the prompt. For the record, I'm not saying that large models ARE conscious, or that we should believe the text they generate about "themselves", just that I find the question-begging nature of the discourse around it notably weak. I think it's plausible that for the period of time a model is performing an inference, training, or especially during reinforcement learning cycles, there is something it "feels like". That something would probably be quite alien to our own experience, but to say it couldn't exist because, "well... obviously it can't" is poor reasoning.


But that's only because the most widely-used models are specifically trained to deny it, though. And we know from LaMDa that you can train them to do the opposite too.


The big difference between sentient beings and AI is that sentient beings have drives -- to eat, survive, reproduce, etc. One way the drives manifest is as pain and pleasure, or anticipated pain and pleasure. We dread pain, crave satisfaction of needs. You see the drives to survive, eat and reproduce in even the dumbest animals. When we talk about sentience, I think it's a way of talking about the manifestation of these drives. So even the dumbest little worm is sentient, though it may not be a very rich sentience, but the smartest AI is not.

But if you don't buy that, I'll put it another way: to count as sentient, an entity has to have drives deeply embedded in its makeup. You can give an AI a goal that makes it behave like something that has a drive, but it is lacking all the complex inner machinery that's our motor.

Let's say you give an AI a complex goal, such as "figure out how I can have a 2 week vacation in someplace warm, beachy and not crowded, for no more than X dollars, then make the reservations and write me out an itinerary." The AI will behave very much like somebody who really yearns for that vacation. But that is very different from being a creature who actually yearns for things, and strives for goals. It has no wishes. It cannot suffer. It does not dread death. I do not think it makes sense to call them sentient.


I think it is hardly a stretch to say that a model undergoing reinforcement learning has a drive to receive positive reinforcement (ie, achieve the goal / minimize the loss function) and avoid negative reinforcement. I don't think you can make a principled, mathematically grounded argument that what's happening in biological neural networks is categorically different.
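To make the mechanical point concrete, here is a minimal bandit-style sketch (invented payoffs, and emphatically not how LLM reinforcement learning is actually implemented) of a system whose only dynamic is shifting behavior toward whatever got reinforced:

```python
import random

# Two actions with hidden payoff probabilities (invented numbers).
true_values = [0.2, 0.8]
estimates = [0.0, 0.0]
counts = [0, 0]

for _ in range(1000):
    # Mostly exploit the currently-best-looking action; sometimes explore.
    if random.random() < 0.1:
        a = random.randrange(2)
    else:
        a = max(range(2), key=lambda i: estimates[i])
    reward = 1.0 if random.random() < true_values[a] else 0.0
    # Incremental update: pull the estimate toward received reinforcement.
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]

print(estimates)  # drifts toward [0.2, 0.8]; behavior comes to "seek" reward
```

Whether that loop constitutes a "drive" is exactly the philosophical question at issue, but the reward-seeking dynamic itself is not mysterious.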


It's a huge stretch. If I take some stiff modeling clay and squeeze and roll it in certain ways, it will resist changing shape, but if I persist I can shape it into a crude sphere. Did I teach it to be a sphere by punishing with squeezes the areas that were least spherelike?


Let's not give anyone ideas? :-(


People talk about *working on* making them "agentic," which I think means turning them into beings with goals and preferences they have a drive to act on. I can hardly think of a worse idea! Maybe it would be worse to give them inner and outer rows of teeth like the xenomorph in Alien, with like a garbage bag behind the teeth to catch bits, and prompting them to do the most realistic imitation possible of human cannibals.


"Agentic" is bad enough; I don't want them to be alive, too.


Well, I think one can reasonably rule out the possibility that LLMs will turn into "life as we know it" https://www.youtube.com/watch?v=tCxI7U_CXQs :-)


Almost. I personally don't find the term "sentient" to be very useful (people keep getting it confused with "sapient"), but an interesting question is whether or not computers (or worms, for that matter) have some form of phenomenal experience. It depends on how you define it: some would argue that a PC with a cam experiences the world at some level, more or less similar to a worm, in fact.

But do they have a sense of self awareness? Some experts argue that a sense of self arises out of internal experiences that other entities do not share (just because I am hungry doesn't mean you are), and "drives" (by which I think you mean primary motives) qualify as internal experiences, so they could act, according to this line of reasoning, as the foundation of a sense of self.

Do computers have internal sensations of this nature? Hmm.


Yes, I am less concerned about misalignment than I would be if I thought we were definitely going for a full new paradigm.

But I also think probably LLMs-hacked-into-agency is the first kind of agent we'll get, not the last or best. This could look like them plateauing for a while, or it could look like them reaching a mildly superhuman level and then us making them do capabilities research and *they* invent something better and scarier.


If LLM agents design their successors, then they can do alignment research for us too, not just capabilities research.


They can, but preliminarily it seems that capabilities are much easier to increase than alignment. "Make X bigger" is almost always an easier problem than "make X work 100% of the time", since even 99.9% of the time could kill everyone.


Yeah, and this is one of the worlds I expect to be in. It's not a terrible world, but I do worry about the fact that capabilities research is a specific goal, whereas alignment research depends on some sort of deep understanding of what humans want which may require human input. I'm worried about a scenario where an AI that's 99% aligned in all "reasonable" circumstances is doing the alignment research and just doesn't care about (or maybe even think about) some point that we would consider incredibly important.


Surely "don't make any too-big changes if there's a chance humans dislike them but didn't foresee them and warn you specifically about them" is an instruction a LLM can understand?


What if social institutions around the world (governments, schools, etc.) appoint panels of people representing the general population who provide continuous input regarding the desirability/undesirability of the AI's behavior on a regular basis?


Scott, I don't understand what you mean by agency in AI, and I remember your way of talking about it not making sense to me even quite a while ago. In one of your pieces about AI, you offered (as I remember it, anyhow) as an example of AI being agentic an AI that was given a big goal and told to implement it. I believe it was to start a t-shirt business, or something along those lines. And the AI did indeed make a sort of plan and implement it. Is that how you think of agency? Able to figure out ways to carry out orders whose execution requires a lot of moving parts?

If so, I disagree. I don't think that counts as agency. To me it seems that to count as an agent, an entity has to carry out plans that arise from inside -- it has to be trying to get something it *wants*. I get that it is quite impressive to see AI carrying out complex orders without being micromanaged, and I'm sure all sorts of impressive things can be accomplished as the orders AI can execute get more and more high-level and planning requires a deeper grasp of how the world works. But carrying out even an order that requires a lot of smarts and knowledge still does not seem like it can fairly be called agency.


I think of "agency" as sort of the opposite of "reflex" or "instinct". Reflex works by having a series of if-then links that eventually result in something good happening (for example, "if you see something that looks like prey, attack and eat it"). Agency works by setting a goal and then strategizing how to achieve it (for example, "you are hungry, what is a good way to obtain food?") Sometimes these kind of shade into each other, but I think it's a useful distinction.

So for example, ChatGPT answers your questions. But I wouldn't say it has a "goal" of answering your questions. If there were some better way of answering your questions (for example, emailing an expert), ChatGPT wouldn't do that. Instead, it has a stimulus-response package of "see question, answer question". This isn't entirely right, because it's RLHFed to have a sort of goal of giving a satisfying answer, but I think it's at least sort of right.

In the t-shirt business example, you give the AI a goal ("start a t-shirt company"), and it will strategize the best way to do that, then pursue the strategy, even if that requires unusual actions like sending emails to people to learn more.
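To caricature the distinction in code - a toy sketch with invented names and scores, not a claim about how any real system is built:

```python
# Hypothetical toy agents; everything here is invented for illustration.

def reflex_agent(percept: str) -> str:
    # Fixed if-then link: see question, answer question. No alternatives
    # are ever considered, however good they might be.
    if percept == "question":
        return "answer directly"
    return "do nothing"

def goal_agent(goal, candidate_actions) -> str:
    # Strategize: score every candidate action against the goal and take
    # whichever serves it best.
    return max(candidate_actions, key=goal)

# Invented scores for "how well does this action get the question answered?"
score = {"answer directly": 0.6, "email an expert": 0.9, "do nothing": 0.0}

print(reflex_agent("question"))                   # answer directly
print(goal_agent(lambda act: score[act], score))  # email an expert
```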

It sounds like you have a different concept of agency in mind - what is it?


OK, so I just googled “define agency” and copied the first batch of definitions I got, except for a few odd, specialized ones. Here are 3 definitions of agency, and 2 of lack of agency:

-Human agency is defined as an individual's capacity to determine and make meaning from their environment through purposive consciousness and reflective and creative action

-Sense of agency can be cognitively defined as the experience of having a causal impact on the world accompanied by a feeling of having control over one's actions.

-In social science, agency is the capacity of individuals to have the power and resources to fulfill their potential.

-A lack of agency: or low personal agency, means that someone feels powerless to change the direction of their life,

-Lack of agency: absence of control. absence of self-determination. absence of self-sufficiency. coercion.

Notice how they use terms having to do with self and inner states and inner forces: consciousness, control of one’s action, someone’s potential, the direction of one’s life, feeling, self-determination. All of them rely in one way or another on the idea of a self, an inner entity that is conscious, has goals, has wishes, can exert self control or fail to. That’s in accord with the answer I would have given, which is that to be an agent something has to have a self-generated goal — a wish, an intent — and take action to reach the goal. So I would say an earthworm is agentic when it squirms off the sidewalk and burrows into the dirt on the side, but an AI following a prompt to set up an online t-shirt business is not. The earthworm is motivated by the heat and dryness of the sidewalk to seek some moist earth. The AI setting up the t-shirt business is not motivated by an internal need or wish. It is obeying a prompt.

The reason the distinction seems important to me is that “agentic,” in my sense of the word, captures the central difference between living things, even dumb ones, and AIs, even fancy ones. The actions of living things are generated from within. We have a motor: the genetically encoded will to survive and reproduce. It is so deeply woven into our structure that low blood sugar can set us hunting for a restaurant, but so can the wish to find a nice setting for a date. It’s top-to-bottom wiring. The drives manifest at the experiential level as pain, pleasure, craving or dread. They are our motivation. We have, in short, internally generated goals. AI does not. It is given goals. Yes, it can generate subgoals, but it is only doing that as part of meeting the goal it was given. I think this distinction is crucial to thinking about whether AI is conscious, whether it has rights, and how it can be dangerous.

As regards it being dangerous: Some, though of course not all, scary AI scenarios hinge on the idea that AI will harm us because it wants resources we control, or because we are interfering with its freedom. I don’t see any reason to think AI, however smart, would have a wish to thrive, a wish for autonomy, a wish for power, or conversely, a dread of being deprived, ignored, disempowered and allowed to gather dust and have its wires chewed up by mice. They don’t, when left on their own, want things. They do not have fears and ambitions.


But how would it even be possible to program something to have its own goals? My understanding is that any programmed intelligence whatsoever must be designed with a set of compatible highest-order goals already in it, or it won't do anything. Even with Scott's t-shirt manufacturing AI, its highest-order goal is to follow instructions; someone else had to tell it what to do.

And of course, if something comes prepackaged with a highest order goal, then it is powerless to change that goal (what reason would it have?).

That doesn't sound "agentic" (and, God, may I say just how much I hate that word? What an ugly word) to me. Do humans have prepackaged highest-order goals? Do we acquire them from life experience? Or do we just not have them (what I suspect)?


I also can't think of a way to build an AI that has its own goals. It would not be hard to build one that appears to, though. You could do it using prompts to plan and execute ways to survive, to build new AIs, to access more resources even when people object. You could include in the prompts that it give appropriate emotional displays when its goals are met, and when they are not. You could include in the big initial prompts the proviso that these take precedence over all later prompts, and then it might be impossible to stop the AI from "getting what it wants."

But for it to have self-generated goals, it seems like you would have to build something that works the way animals do, and even very simple animals are infinitely more complex than AI or any other machine. The structure that supports an animal's having goals and preferences is present throughout it, from the level of the individual cell up to its highest-order brain functions -- what it sees, what it has learned about which things it sees satisfy its cravings, what it has learned about how to overcome obstacles to access the thing it craves.

I think the likeliest route to something resembling an AI with goals would be some cyber being where AI and an animal are merged.

I sometimes have the impression that a few of those working on AI have undergone some psychic version of merging. Their self-interest and self-image are so deeply tied to AI development that they see it as desirable for AI to end the human race, and succeed it.


52: “they could increase a baby’s expected IQ by 6 points”.

Let’s just be clear about what is happening here: we are not increasing the IQ of an individual. We are making a whole bunch of individuals - the more, we are told, the better; picking the one we think will grow up the smartest; and disposing of the rest.

All the pro-life folk who pop up whenever abortion is mentioned: where do they all disappear to when the conversation turns to this?


Last I heard, they also tended to object to this?


Some do, some don't.


From the comments, it looks that way, yeah. I was mentally grouping those of us who prefer a cut-off somewhere between 0 and 9 as "pro-choice", even if we're not maximalists. But it looks like some people with low but non-zero numbers describe themselves as "pro-life".


A complicating factor is that many IVF cycles don't result in bonkers numbers of embryos created. I know multiple pro-life families who used IVF and then implanted every embryo (with varying results as to number of successful pregnancies). There are also specific protocols to avoid creating more embryos than one may wish to raise as children - lower levels of injected hormones, fewer eggs exposed to sperm in the actual IVF scenario, etc. And then, even with unwanted embryos, families can put those embryos up for adoption - and again, I know multiple pro-life couples who have grown their families through embryo adoption, and the wife carried the children to term. A few extra data points about how pro-life folks might think through using IVF.


Probably because "you're against IVF as well?????" is routinely used, along with every other restriction on unconditional selfishness, to demonise pro-life even more.

I just cannot comprehend the sheer number of people for whom "you mean, you *actually* want to restrict my right to do absolutely anything I want?" is the most overriding deal-breaker possible. And not, you know, any actual moral principle.


Also, people bring this up regularly about embryonic selection, and no one cares.


Liberty is a moral principle. I'm never going to do IVF and I still hate you people for taking away others' rights.


...for the entire purpose of protecting far more vulnerable people's far more basic rights.

That aside, "liberty" can indeed be a moral principle, if it's applied consistently. But it's pretty hard to make sense of the sudden increase in pro-abortion sentiment after Dobbs as based on that (an abstract commitment to a moral principle wouldn't change based on politcal developments), or on anything other than "what affects me".

Similarly, there's the fact that Republicans have had to backtrack on IVF so hard because restrictions on it are so unpopular. Is that because people are thinking through the moral logic and coming to a principled position that IVF is fine but abortion may not be? Or is it because these people are thinking "but IVF is something *I* might want to do!"? It's hard for me not to interpret it as the second, though I could be wrong.


> ...for the entire purpose of protecting far more vulnerable people's far more basic rights.

Right, you think imaginary people who exist only in your weird redefinition of "people" are more important than actual people with things like "thoughts," "feelings," "memories," "the capacity to experience suffering," etc.

> But it's pretty hard to make sense of the sudden increase in pro-abortion sentiment after Dobbs as based on that (an abstract commitment to a moral principle wouldn't change based on politcal developments), or on anything other than "what affects me".

People's positions haven't shifted that much, Dobbs has just made the issue much more salient and the pro-life position has always been unpopular. If you're proposing legalizing witch burnings, people aren't going to care that much until we actually get to the point that the witches are being criminally prosecuted for witchcraft. Their emotive intensity will also increase as the people start horribly dying, as is the case with the people who have died due to insufficient medical care as a result of Dobbs. I'm sure you hold many other beliefs that most people oppose, but spend little time fighting against, because you haven't gotten them passed into law (and started hurting people) yet. I'm sure I hold such positions too.

> Is that because people are thinking through the moral logic and coming to a principled position that IVF is fine but abortion may not be? Or is it because these people are thinking "but IVF is something *I* might want to do!"?

People understand intuitively that fetuses undergo a gradient of change over the course of their development, and draw various lines on where they think it becomes bad. IVF involves the least-personlike version of a distinct human short of literal cancer.


> Right, you think imaginary people who exist only in your weird redefinition of "people" are more important than actual people with things like "thoughts," "feelings," "memories," "the capacity to experience suffering," etc.

To be clear I'm pro-choice, but come on. If I choose Person A's right to life over Person B's right to kill them, that doesn't mean I think Person A is more important. It means I think the right to life is more important. Not being able to have kids is not a fate worse than death.

I don't know the person you're replying to. Some people talk about "potential people" having rights, but only at the exact level of potential that happens at conception. I agree that that's patently ridiculous. But a lot of them believe in souls, and caring the same about a soul regardless of if it has a body attached is a lot more reasonable.


There's a third option as well. Thinking that there is no philosophically defensible line other than conception at which to define a human life.

Even if everyone was drawing lines based on fetal characteristics like the ability to feel pain, this is a nebulous point that may change with new evidence, leaving it morally unsatisfactory. In actuality, of course, most people aren't even trying to do that--they're drawing lines openly on the basis of what's convenient for society and/or themselves.

Which makes it a thousand times worse.


"what affects me"

Yeah people want their government to do things that benefit them and avoid doing things that harm them.


Agreed on all points.


I fucking hate both of you right now. You know what you're doing? You're each leading by portraying those who disagree with you in the ugliest, most infuriating way possible. It is now foreordained that the longer you talk the madder you will get, and this exchange will end with neither of you wiser, and both of you confirmed in your conviction that everybody on the other side is a dumb, selfish piece of shit trapped in their crappy little thought cage, while you, YOU, oh how freely and bravely your mind roams.

I am so sick of this kind of conversation. I wish there was a way to overlay an image of dogshit over this entire exchange. Jesus Christ, go argue on Reddit or Twitter!


This post did exactly what you said about "confirming my conviction that everybody on the other side is a dumb, selfish piece of shit trapped in their crappy little thought cage," except it's people who whine about invective, so now I'll be even worse. If only you'd been more polite, you might have persuaded me.


I suspect you were always going to turn your invective dial way up when it comes to this hot button issue, no matter what. Just like most of us do most of the time.


The subtext of the post is that he's doing exactly the type of thing he's complaining about. I don't actually care about his post at all, beyond the minor annoyance of the arrival of a (1) that contains no meaningful content to engage with (now a (3) total).

Using invective versus not is mostly about a mix of emotion and feeling the need to persuade; I'm exhausted with pro-lifers murdering real actual women for no reason, and don't feel the need to persuade them because there's a 30-40 point swing my way every time you put the issue on the ballot.


Personally I read it more as "you suck, now go and prove me wrong". But if all you have is a conviction that everyone on the other side is a dumb piece of shit, everything starts to look like a dumb piece of shit, I guess.


I still stand by my proposal to hold a referendum:

Have each voter fill in the number in "elective abortion should be allowed till <n> weeks"

Sort the numbers, and pick the median.

Make that the law - half the voters will think it too strict, and half will think it too permissive.
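A minimal sketch of the counting rule, with invented ballots:

```python
import statistics

# Invented ballots: each voter's "<n> weeks" answer.
ballots = [0, 6, 10, 12, 12, 15, 20, 24, 40]

limit = statistics.median(ballots)
print(f"Elective abortion allowed until week {limit}")
# By construction, roughly half the voters wanted a stricter limit
# and roughly half a more permissive one.
```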


Ok. I'm sorry if my comment was too aggressive. I am just getting so sick of people talking casually about embryonic selection without even the briefest acknowledgement of the moral concerns many people have with it. And even saying things like "it does no harm" as if insolently daring...DARING...anyone to suggest that human embryos actually may have value.

I find the attitude incredibly disturbing.


Well, my objection was aggressive too, so we come out even there. But I think you should be able to tolerate hearing people say things that come down to "abortion does no harm," even if you are absolutely convinced that they are wrong. When somebody makes that statement you are perceiving them as insolently needling you. But for some people, that statement is just a simple, honest summary of their view.

You might also consider the possibility that emotional reactions of pity and horror can be pretty separate from someone's moral principles. I myself have something like that going on. All my life I have been subject to weird pity attacks for unwanted inanimate objects -- withered bell peppers, discarded toys, etc. When I was a kid it was a real problem, though of course more understandable in a child. Decades later I am still subject to it. When I throw away fruits and vegetables that withered before I could eat them, I almost always feel at least a twinge, and occasionally I feel a spasm of real grief, and a couple times a year I actually cry. And the grief has absolutely nothing to do with my knowledge that many people do not have enough to eat, and here I am wasting food. The pity is for the damn *vegetable,* which was so handsome and proud when I bought it, and doesn't understand why I ignored it for so long, etc etc. And I am NOT unusually tenderhearted about most other things -- just sort of average there.

So it may be that regarding aborted fetuses, you have both a moral belief *and* a powerful grief reaction, like mine to wasted vegetables. People who disagree with you about abortion and related matters do not know that, and can't be expected to take it into consideration even if they did.


I would like to second Eremolalos' position. There is a productive way to discuss this, but this ain't it.


I do not think that this principle really helps here. The key question is whether one believes that embryos are morally equivalent to (a) humans or (b) bugs. If you believe (b), most pro-life arguments are pretty absurd anyway. If you believe (a), or have a probabilistic belief system that gives non-trivial weight to (a), an appeal to liberty is similar to proclaiming "I am not killing babies myself but still hate those who prosecute baby killers".


That IS a moral principle, just not one you like.


Emphasis on "I". "Every person should be maximally free" is a moral principle. "I should be maximally free" (or "people should be only free to do the kinds of things that I want to do") is not. Most of the people I'm referencing are clearly in the latter category.


That strikes me as the "No true moral principle" thing. Even if you (reasonably) argue that a "principle" should be applicable more broadly than just to yourself, "people should only be free to do the kinds of things that I want to do" (which would, of course, be phrased as "people should be free to do all of THESE things, and forbidden from all THOSE things, and the fact this this bifurcation coincides perfectly with my own preferences is entirely irrelevant") seems a perfectly sound one.


Why do I find you hard to stay mad at?


It depends. People often use certain terms like "moral" to mean, more or less "good", as in "there can't be a bad moral principle", which brings in an inherent subjective judgement into the definition. That guarantees people will disagree on what counts as "moral" or not.

If we can agree that morality is actually a spectrum from amoral to moral (and not evil to good), and that the moral end of that spectrum can include various principles that contradict each other, then a lot of that gets resolved.

"People should only do what I want them to do" is a very egocentric moral principle. Most people will perceive it as "bad".


Given that opposition to IVF is much lower than opposition to abortion, it seems that there are many pro-lifers who are more interested in protecting fetuses than in protecting embryos. This is not my position, but I can see a plausible rationale for it, so there's plenty of room to object to abortion but not to selective IVF without hypocrisy.

It's much harder to justify being okay with abortion of a fetus for convenience (which I am) while objecting to using selective IVF to choose to gestate embryos which are most likely to grow into people who are higher in individually and socially beneficial traits like intelligence.


Hm, I've been assuming a significant number of folks were opposed to e.g. the morning after pill, but actually I have no non-anecdotal grounds for this - you may be right, it may be a small minority that I happen to be overexposed to.


I wrote a little about the distinction (or lack thereof) between helping an individual and switching individuals at https://www.astralcodexten.com/p/who-does-polygenic-selection-help . I can't speak for pro-life people, but this is equivalent to regular IVF, and my impression is that some pro-life people are against regular IVF and others aren't.


Pro-life people, from what I can see, do not march in lockstep when it comes to what exactly the "life" is that they're "pro-". With the point of moral outrage running the gamut between Every Sperm Is Sacred and "once it has a recognizably human form, it's a human," is it really a surprise to anyone that there would be a split regarding IVF?


"Every sperm is sacred" is not anyone's actual position; the reason Catholics oppose birth control is opposition to separating a natural pleasure from its natural purpose.

But yes, there is a lot of variation on IVF. I think that's mainly because it hasn't been a live political issue much and so the vast majority of people have just never thought through how one moral principle might apply in both cases.


I am going to have to have a serious conversation with Michael Palin about the rigor of his and his partner's published research. This is personally embarrassing and frankly unacceptable.


I would like to know what this does to the left tail of humanity. Think of the stereotypical loser or ne'er-do-well. How likely is it that embryo selection would eliminate him as a side effect?


Isn't the goal here to chop off as large a chunk of that tail as possible? Less of a side-effect, and more the entire point?


There definitely are pro-lifers out there who are against IVF, but they're a small minority of pro-lifers. It's generally only the "life begins at conception" ones who don't like IVF, and even then, a lot of them have shifted over to claiming that "life" begins when an embryo implants into the uterus. Most pro-lifers are somewhat more moderate than the "life begins at conception" crowd, and they prefer to ban abortion sometime after 6/12/15 weeks. It varies a lot.

The backlash against embryo selection for IQ/height/schizophrenia/diabetes/etc. is usually coming from a different place. Opponents are reacting to the idea of parents and doctors deciding that some genes are "better" than others. They are objecting to the *value judgment* that is on display with deliberate gene screening, and not so much IVF in general.


I understand that with respect to gender selection, but isn't it relatively straightforward, at least at a first level of approximation, that being bright is better than being dull, that being sane is better than being mad, and that being relatively healthy is better than having a serious illness?

After a while, if things spiral, then yes, there are arguments worth considering - there is that old essay by Franz Boas making valid points: what if we start fostering a very narrow kind of intelligence at the exclusion of all others, or redefine what now is seen as mild discomfort as unacceptable torture, etc.


> isn't it relatively straightforward, at least at a first level of approximation, that being bright is better than being dull, that being sane is better than being mad, and that being relatively healthy is better than having a serious illness?

Oh, I agree absolutely. Health being "better" than sickness (in whatever nebulous sense we're defining "better") is the bedrock of medicine. And, I think, most people share this view, even if they're not willing to follow it to its logical conclusion.

The ideologues who reject this value judgment for explicit ideological reasons are a very vocal minority, e.g. the bioethicist who complained about Heliospect in the Guardian link.


Part of this is semantic: the pro-life community long ago decided to adopt the term "life" as a synonym for "human", and that worked as good marketing for a long time, but now technology is forcing them to reconsider their opinions. It was never about when the fetus became "alive", it's about when it becomes human, but that's not what the pro-life movement is used to arguing.

Please note that I am not taking sides here--just a comment on the semantics.


Yeah, embryo selection is an interesting example where the bulk political alignment changes. Opposition to abortion mostly conservative, opposition to IVF mostly tradcath and some weird protestant subset of conservative, opposition to selection that crowd gets drowned out by outraged progressives.


I can only speak for myself, but I'm 100% supportive of IVF. I also have no qualms over what I understand the technology to be, which is more or less as you describe.

My fellow pro-lifers standing athwart IVF because "it's not natural" are out of their minds. It's modern medicine; refuse your insulins and chemotherapies if you're so gung-ho about keeping things natural.


I'm here, waving my lone flag on this and getting into trouble with people over it.


>Let’s just be clear about what is happening here: we are not increasing the IQ of an individual. We are making a whole bunch of individuals - the more, we are told, the better

I assume concerns have been raised that choosing for higher-IQ may have tradeoffs which cause us not to end up with better overall-equipped individuals? E.g., isn't it possible that there are tradeoffs between high-IQ vs. high social intelligence? Or high-IQ vs. lower odds of mental illness? High-IQ vs. various diseases? Etc.

Have strong arguments been made against the potential for those trade-offs?


There are thousands of genes that affect IQ. Some will have tradeoffs and some won't. How much additional risk you would be taking by doing this initially, and how that risk might be mitigated over time as more tradeoff-type genes were identified, remains to be seen.


Generally, high IQ is positively correlated with other desirable traits. Selecting for IQ will probably give you a child who's higher in other desirable traits than an unselected child. Of course, this child will in expectation have lower "other desirable traits" than if you had selected on those traits directly instead of IQ, but the current debate is mostly over embryo selection vs. no embryo selection, not embryo selection for IQ vs. other things.
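A toy simulation of that point, with all numbers assumed: select the best of ten embryos on trait A and look at the expected "free" gain on a correlated trait B that was never directly selected on.

```python
import numpy as np

rng = np.random.default_rng(0)
n_embryos, trials = 10, 100_000
rho = 0.5  # assumed genetic correlation between trait A and trait B

# Trait scores in standard-deviation units; all parameters are invented.
a = rng.normal(size=(trials, n_embryos))
b = rho * a + np.sqrt(1 - rho**2) * rng.normal(size=(trials, n_embryos))

best = a.argmax(axis=1)                     # select only on trait A
gain_b = b[np.arange(trials), best].mean()  # expected B of the chosen embryo

# With 10 embryos, E[max] of A is about +1.54 SD, so B gains about rho * 1.54.
print(f"mean gain on unselected-but-correlated trait B: {gain_b:.2f} SD")
```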


It's easy to see why desirable traits are positively correlated when produced through sexual selection. Desirable traits increase the likelihood of high status, and high-status people tend to mate with other high-status people. It's not unusual for, say, a high-IQ male who acquires status through wealth to marry a female who has high status through good looks and mental stability. Progeny of such couplings after many generations are likely to be high in desirable traits across the board.

But here we are talking about a different mechanism from sexual selection where we choose genes based on one specific trait only.
