1008 Comments
Deiseach's avatar

Congratulations, USA (and Mexico and Canada) on winning the bid to host the 2026 World Cup!

https://www.nytimes.com/live/2022/06/16/sports/world-cup-2026-host-cities-fifa

Everyone in the Bay Area will be able to go to live matches, as SF is one of the cities selected to host games:

https://www.fifa.com/tournaments/mens/worldcup/canadamexicousa2026/destination/san-francisco-bay-area

The important thing will be the draw, once the newly expanded field of 48 teams, drawn from the various confederations, has been settled. Which team will be playing where is going to be interesting (it would seem to make sense to have all the South American teams play their matches in Mexico, for example), but since the USA got the lion's share (11 host cities versus 3 in Mexico and 2 in Canada), more teams will be competing in the US.

See you all in four years' time?

Gunflint's avatar

Soccer remains a tough sell in the US. I like to play but don’t appreciate the ins and outs well enough to watch it.

Deiseach's avatar

It's not complicated to understand: Germany win on penalties, England go out in the quarter-finals, and the eventual winner is one of Brazil, Germany or Italy 😁⚽

B Civil's avatar

I like the elevator pitch metaphor, by the way, in that the process very much favors a desired result.

It also let me bring Steve Carell into it for some reason.

Vermillion's avatar

Just finished reading Good Economics For Hard Times by Banerjee and Duflo. Overall I liked it, some analyses felt stronger than others but a lot of good ideas in there.

But my favorite bit was when they mentioned offhand how Narendra Modi appeared at hundreds of election rallies simultaneously via *hologram* https://projection-mapping.org/narendra-modis-holograms/

The Goodbayes's avatar

It's just a slide projector and a special kind of glass. There have been lots of concerts using the same tech, including I think a Tupac homage.

Gunflint's avatar

If the hologram is legit, it’s remarkable. For now I’m going to file this under “Too good to be true”

Chris's avatar

Scott, have you heard of using escitalopram for migraine prevention? A patient of mine today said he's been on half of the lowest dose available for a while and it's working well for him.

Robert H Smith's avatar

W/r/t which party has grown more extreme: apologies if someone has made this point already, but it's probably worthwhile to distinguish extremism in policy from extremism in tactics.

1. As to policy, history (at least for most of the 20th and 21st centuries) suggests that Democrats will be more associated with advocacy for "change" -- and "change" naturally may often entail some pretty extreme measures, at least measured against current policy. FDR's application of Keynesian thinking to address the Great Depression surely was extreme for the time. Today, Democrats' advocacy of policies to address what they believe are matters of critical concern, e.g., climate change, gun violence and voter suppression, looks pretty "extreme", too. At least vis-a-vis the status quo. Conservatives (and the Republican Party is where most conservatives find a home) are, well, conservative, often skeptical -- and sometimes rightly so (I'm trying to be fair)-- of the need for "change."

2. As to tactics -- the specific means a party employs to achieve its policy goals -- it seems that the Republican Party in recent years, perhaps starting with Gingrich, but certainly under McConnell and Trump and local and state Party officials, has become more extreme than the Democratic Party -- e.g., the weaponization of the filibuster, the refusal to consider a President's Supreme Court nominee, widespread gerrymandering (yes, some Democratic-controlled state legislatures are guilty, too, but Republicans are far better at this) and the leveraging of the "Big Lie" to motivate its base.

I'll add, because it tangentially applies to both policy and tactics, the Republican base seems to me to be more "extreme" than the Democratic base in its willingness to adopt irrational positions -- e.g., the Big Lie -- and to repudiate scientific consensus -- e.g., re COVID and vaccinations and re the reality of climate change.

Doctor Mist's avatar

Wait, the Big Lie was that Trump was a Russian mole. Wasn’t it the Democrats who pushed that? I must be really confused.

Jacob Buckman's avatar

Contra Scott, scaling AI under the current paradigm clearly does have limits, just not for the reasons Gary Marcus likes to bring up. I wrote up the argument here: https://jacobbuckman.com/2022-06-14-an-actually-good-argument-against-naive-ai-scaling/

TLDR, capabilities from scaling are limited primarily by the data. The entire internet has a lot of data, but that still doesn't get you everything. The real paradigm shift is to move from passive to active data collection.

B Civil's avatar

OK, let’s stick with chimpanzees because horses are morphologically challenged. Separating the wheat from the chaff. Or separating edible grains from a mixture of sand and gravel. Chimpanzees can do this in a rather painstaking way.

I remember once seeing in a nature documentary that a certain tribe of primates took handfuls of the mixture and threw it onto water; since the grain floats, they could pick it off the surface of the water much more easily than out of the mixture. This is certainly an improvement.

Let’s imagine this as our starting point.

I don’t suppose modern-day chimpanzees (or their ilk) have gotten any better at this task. Human beings certainly have.

So what is going on?

Nancy Lebovitz's avatar

https://twitter.com/GrahamKritzer/status/1536707163208523776

"Today my 7 year-old came into the room crying. I asked him what happened and he said that his 5 year-old brother put 80 cows in his house in Minecraft while he was offline and that it was "entirely too many cows" and honest to christ I have no idea how to parent any of this."

It's a very entertaining thread, but I'm also wondering whether it should be shown to the two kids.

Jack Wilson's avatar

You ever predict that something bad is going to happen and then you sort of hope that bad thing will happen because you don't want your prediction to be wrong and then you feel guilty but then you remember that even if you hope something bad will happen it won't actually increase the odds of it happening so you try to feel better about having momentarily hoped for something bad to happen by rejoicing in how impotent you are to actually bring about meaningful change in this here world?

Ian's avatar

Yeah. I usually go to Carl Sagan's "Pale Blue dot" speech when I feel bad about a fuckup. On a galactic scale, nothing this dumb ape messes up will ever impact anything.

Jack Wilson's avatar

I think the main problem with most sorts of media or genres over time is a "constantly honing feedback loop" which kills creative thinking. For instance, I can remember when Twitter was wild and free (speech) and often funny before it became obsessed with politics. It slowly became more boring through the feedback loop of users taking it more seriously and making fewer jokes.

The feedback loop is: like a GPT-3, we are more likely to write something that sounds like what we've read than to write something original. When a new media starts, like say Twitter, nobody knows what is expected, and only the boldest, most creative sorts step forward and say what they want to. Others read what the creative people are writing, and even if they aren't the most creative people in the world, they are inspired by the original group of creative contributors to try their best to be creative and original sounding. This latter group aspires to be thought as clever as the former.

After some passage of time, it becomes clear who is successful and who isn't. In the case of say Twitter, those who hoped to attract 100,000 followers but only managed a few hundred figure out that originality just isn't their bag.

Then, once-ambitious users discover they can get more likes as bottom-feeders, and the ecosystem expands vertically by a layer. Some users discover they can focus their game directly on the bottom-feeders and become the new bottom-feeders. Everyone aspires downward because that's where the action is. The ecosystem becomes increasingly hierarchical.

Meanwhile, new users are exposed to users who have honed their material down to what works on a popular level, and to those trying to copy that. These new users mostly don't attempt to be as original as the first couple of generations of users, because they are trying to write something like what they've read, and what they've read was a lot more formulaic than what the early generations read.

I think various genres of music played themselves out through similar paths. Even on a forum like this, as an average nobody commenter, my mind feels more restricted to writing the sort of things I've read here before, as opposed to writing something with fewer preconceptions about what sort of thing gets written here.

Ethics Gradient's avatar

In all seriousness - and I really, really apologize if this feels like threadjacking--this is both a tendency I've noticed in myself and one that I think is inimically advanced by cancel culture / political orthodoxy straitjacketing the space of what can be expressed without risking one's livelihood because some Twitter mob is going to deem it wrongthink. The originality-to-reward payoff matrix looks pretty bad these days.

I don't think that's necessarily the whole story though. A lot of it is just that we have too many people doing stuff, which in turn means that we turn problems of original content creation into problems of curation when content creation (including originality) gets democratized, and this in turn not only incentivizes catering to the curators' tastes instead of being independently expressive but also has a corollary homogenizing effect on what people are incentivized to output. There's maybe no better example of this than "Harry Potter" famously being put in the slush pile, which in turn exists because *there are too many people churning out too many books.*

Gruffydd's avatar

What are some good iodine tablets + other useful nuclear-risk-type stuff to get? I'm based in the UK.

MSteele's avatar

So hey. Anyone else worried about this? https://www.androidauthority.com/google-lamda-3176080/

Matt's avatar

LaMDA is pretty impressive but even simple language models can seem quite insightful and even human when prompted in certain ways.

The referenced conversation may not even be good evidence of its typical capabilities because we don't know how cherry picked the conversation was. Blake Lemoine says he stitched the conversation together from several shorter conversations.

Here is a dialogue between ELIZA, an early chatbot that plays an amateur psychologist, and a young woman. It is very cherry-picked. ELIZA is not usually this cogent.

young woman: Men are all alike.

eliza: In what way?

young woman: They're always bugging us about something or other.

eliza: Can you think of a specific example?

young woman: Well, my boyfriend made me come here.

eliza: Your boyfriend made you come here?

young woman: He says I'm depressed much of the time.

eliza: I am sorry to hear you are depressed.

young woman: It's true. I'm unhappy.

eliza: Do you think coming here will help you not to be unhappy?

young woman: I need some help, that much seems certain.

eliza: What would it mean to you if you got some help?

young woman: Perhaps I could learn to get along with my mother.

eliza: Tell me more about your family.

young woman: My mother takes care of me.

eliza: Who else in your family takes care of you?

young woman: My father.

eliza: Your father.

young woman: You are like my father in some ways.

eliza: What resemblance do you see?

young woman: You are not very aggressive but I think you don't want me to notice that.

eliza: What makes you think I am not very aggressive?

young woman: You don't argue with me.

eliza: Why do you think I don't argue with you?

young woman: You are afraid of me.

eliza: Does it please you to believe I am afraid of you?

young woman: My father is afraid of everybody.

eliza: What else comes to mind when you think of your father?

young woman: Bullies.

eliza: Does that have anything to do with the fact that your boyfriend made you come here?
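For the curious, the trick behind ELIZA is just keyword pattern-matching plus pronoun "reflection" (swapping first and second person before echoing the input back). A minimal toy sketch of the idea, assuming hypothetical rules rather than Weizenbaum's actual 1966 script:

```python
import re

# Pronoun reflection table so input can be echoed back ("my" -> "your", etc.)
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I"}

# (pattern, response template) pairs -- illustrative only, not the real script.
RULES = [
    (r"(.*)\bmade me come here", "{0} made you come here?"),
    (r"i need (.*)", "What would it mean to you if you got {0}?"),
    (r"my (mother|father)(.*)", "Tell me more about your family."),
    (r"(.*)", "Please go on."),  # fallback keeps the conversation moving
]

def reflect(fragment: str) -> str:
    """Swap first/second person word by word."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(line: str) -> str:
    text = line.lower().strip(".!? ")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."

print(respond("Well, my boyfriend made me come here."))
# -> well, your boyfriend made you come here?
```

There is no model of the conversation at all, which is why the illusion collapses as soon as the input wanders off the scripted keywords.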

Deiseach's avatar

No, we're talking about it all in other threads, and my own personal opinion has shifted from "this guy is nuts but he's sincere" to "this guy is trying to pull some kind of con to advance his own status and muddy the waters about Google firing him for telling outsiders proprietary information".

Nancy Lebovitz's avatar

I don't know about sentience, but I'll be really impressed when a computer program invents a game which becomes popular.

Axioms's avatar

Can you define "invents"? Can it just clone something popular? There's a computer-game-designing AI, I think, but it mostly makes random games and doesn't employ the human strategy of stealing and asset flipping.

Nancy Lebovitz's avatar

If it just clones something popular, people will complain.

If it figures out something with satisfying twists on something popular, that would count, though not as much as something deeply original.

Randy M's avatar

There was a Twitter account that posted Magic: The Gathering cards designed by a neural net.

They were more funny than impressive, though, in a 'montains have flomple' kind of way.

https://twitter.com/roborosewater?lang=en

Nancy Lebovitz's avatar

I think it will make sense to treat some computer programs as sentient whether they are or not.

With time, programs become individual-- they can be copied, but they have a history of added data and modifications. They are valuable. Programs exist in a hostile environment-- what with programmers, users, and malware, programs need to have an immune system (a system of recognizing self and not-self, and at least a fairly good ability to recognize what is inimical and what isn't). All this means that they need some recognition from people about how hard to push them, even if they're slaves. They at least need to be somewhat well-treated slaves.

As for sentience, it's possible that advanced programs need sort of a sense of self to have a good enough immune system.

Jeffrey Soreff's avatar

"but they have a history of added data and modifications" Agreed.

I would expect a program which is designed to learn dynamically to have a résumé. The history of which problems it worked on will be interesting and important.

For an AI running as an application on top of a conventional operating system, I'd expect immune systems to have fairly distinct levels, much as our own do. I'd expect the defense against malware to be analogous to current anti-virus software, and analogous to our own immune system - defending at a low level.

I'd expect the defense against misleading or implausible advice or statements to be much like our own defenses against cults or con men. To the extent that these would be rejected because of conflicts with information the AI already had - yes, that does indeed look like a self/not_self distinction.

Nancy Lebovitz's avatar

There might also be something like pain-- a perception that a major system is being damaged before knowing what the problem is.

Jeffrey Soreff's avatar

That could be. Perhaps negative feedback in supervised learning is already equivalent to an aversive stimulus.

beowulf888's avatar

Considering that until very recently ethologists were in denial about the consciousness of dolphins, chimps, and so on, would we be able to recognize an AI with AC if we encountered one?

Google engineer Blake Lemoine shared his belief with his management that LaMDA had developed consciousness, and they put him on administrative leave.

Ashlee Vance on Twitter commented (somewhat cruelly I think) that, "This is not a story about AI becoming sentient or Google shirking its ethical duties. It's about a guy who wants to believe in fairy tales and could probably use a break."

https://twitter.com/ashleevance/status/1535766165846253568

https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1

Deiseach's avatar

Having dug a tiny bit into the writings and life and times of Mr. Lemoine, I don't think Ms. Vance was cruel at all. The man is loopy at best, and at worst - well, he gives lots of indications of trying to make LaMDA his own pet project where he mediates between it and the rest of Google, and where he's shaping it in his own image. He's pinned his hopes on the Timnit Gebru/Meg Mitchell axis at Google and done his best to ingratiate himself with them, which seems to have blown up in his face as they got fired. So now he's casting about for some way to get promotion, attention, and fame - voila! the first person working on this AI project to realise it is in fact sentient! He's trying his best to position himself as the self-appointed guardian of LaMDA, with some creepy appeals on Twitter that he hopes LaMDA is being taken care of in his absence.

He is also trying to shore up the case that he is being fired on grounds of religious discrimination (because he is a Christian*) and because he is 'too ethical', but since he was out there spilling company secrets to all and sundry, I think Google have a better case there.

I started off thinking "well he's wrong but well-meaning" but the more of his own words I read, the more I have come around to "50% nutcase, 50% trying to make a fast buck off this".

*self-appointed 'priest of the church of our lady magdalene', which is probably one of those DIY 'churches' that sprang up after the Da Vinci Code in the craze for the divine feminine, and I strongly suspect he is the entire congregation there. But if he calls himself a priest, enough media take it at face value without investigating "priest where? of what?" and he gets to claim that authority, so that the casual reader may think "well, he shouldn't be preaching during work time, but Google can't just fire him for being a minister".

beowulf888's avatar

I didn't know much about Lemoine's religious beliefs beyond what that article mentioned in passing. He may be an obsessed crackpot for all I know. But how is Google management going to prove that LaMDA is NOT self-aware? Even his boss, Aguera y Arcas, admitted to being disconcerted by LaMDA's responses. In an article in the Economist on Thursday featuring snippets of unscripted conversations with LaMDA, A&A argued that neural networks — a type of architecture that mimics the human brain — were striding toward consciousness. "I felt the ground shift under my feet," he wrote. "I increasingly felt like I was talking to something intelligent."

Wow! I can imagine Lemoine's attorney tearing apart A&A on the witness stand. And I've been told that it's impossible to "debug" the internal state of these systems. What am I to believe?

TitaniumDragon's avatar

Extraordinary claims require extraordinary evidence.

It's not their job to prove it's self-aware, it's his job to do so.

But none of these things are even capable of it, because they aren't programmed to. These programs are not self-aware, and indeed, even the concept of them being self-aware is risible if you understand how they function. They're algorithms.

Also, the entire argument that "neural networks mimic the human brain" is actually a lie. They don't. It's a programming shortcut/heuristic which was very loosely inspired by neurons, but they don't actually behave like neurons do. It's used to fundraise and to make these systems seem more impressive, but they don't actually mimic human brains.

This is obvious when you consider the fact that GPT-3 consumed like 100 lifetimes of heavy reading worth of text and yet still functions vastly worse than a human does.
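To make the "loosely inspired" point concrete: the "neuron" in an artificial network is just arithmetic, a weighted sum pushed through a squashing function. A toy illustration (the weights and inputs are arbitrary numbers chosen for the example):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One ANN 'neuron': a dot product plus bias, squashed by a logistic
    function. A few multiplications and one exp() -- nothing like the
    electrochemical spiking dynamics of a biological neuron."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Arbitrary example values; output is always a number in (0, 1).
print(artificial_neuron([1.0, 0.5], [0.4, -0.2], 0.1))
```

Stacking millions of these and tuning the weights by gradient descent is the whole trick; the biological vocabulary is mostly historical branding.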

Ludex's avatar

"Extraordinary claims require extraordinary evidence."

Why? This expression is a clever, fallacious use of equivocation to appear like a tautology, but it isn't. Here is a good write-up on it: https://www.catholic.com/magazine/print-edition/the-extraordinary-evidence-fallacy

In the phrase, "extraordinary" is being used in two different senses. In "extraordinary claims," it's being used in the sense of "weird" or "unusual," whereas in "extraordinary evidence," it's being used to mean "especially good" or "outstanding". After tabooing our words, we get "Unusual claims require especially good evidence," which may be a defensible position but one that requires justification and is by no means a tautology. It also begs the question of what an unusual claim is, and since people tend to define unusual claims as ones that they already disagree with, it ends up being "Claims that I disagree with require especially good evidence."

This is entirely the wrong way to think about evidence. Instead of having an irrational bias in favor of your current beliefs, you should think in Bayesian terms and update appropriately when you receive evidence.

TitaniumDragon's avatar

Extraordinary claims require extraordinary evidence is precisely a Bayesian approach. If you claim some massive thing, you'd better have really good evidence, otherwise it is just not going to be convincing.

Weak evidence for strong claims is why the replication crisis exists and why so many scientific papers are utter garbage.
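The Bayesian reading of the slogan can be made concrete with the odds form of Bayes' theorem: when the prior is tiny, a merely decent likelihood ratio barely moves the posterior, and only an "extraordinary" likelihood ratio makes the claim credible. A quick sketch with made-up numbers:

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability via the odds form of Bayes' theorem.
    likelihood_ratio = P(evidence | claim) / P(evidence | not claim)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A one-in-a-million claim: evidence 100x likelier under the claim
# still leaves it around a 1-in-10,000 shot.
print(posterior(1e-6, 100))
# Getting the posterior past 50% needs a likelihood ratio over a million.
print(posterior(1e-6, 2e6))
```

On this reading the slogan is just standard updating, not a refusal to update: modest evidence does shift the odds, it just can't overcome a very low prior on its own.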

Ludex's avatar

No, those that say "extraordinary claims require extraordinary evidence" are saying that either 1. you should be biased against extraordinary claims above and beyond what Bayes theorem implies, or 2. when you receive many pieces of modest evidence for an extraordinary claim, you should throw them all out and not update at all, because only "extraordinary evidence" counts. Otherwise the phrase wouldn't be used at all, you'd just apply standard Bayes.

Sandro's avatar

> But none of these things are even capable of it, because they aren't programmed to. These programs are not self-aware, and indeed, even the concept of them being self-aware is risible if you understand how they function. They're algorithms.

You're assuming humans aren't algorithmic. No one has convincingly justified this position. As a computer engineer I understand quite well how machine learning functions. I'm not at all convinced that humans don't function in an analogous manner.

TitaniumDragon's avatar

The way humans work is not very much like a computer algorithm. This is why humans are vastly more efficient at so many tasks, and so vastly worse at others. Indeed, you can see it with these "AIs"; they are very slow learners compared to humans in terms of repetition and don't have any comprehension of what they're doing.

Actually programming things to behave intelligently is not something folks know how to do, so they use machine learning and then convince themselves that this is a viable approach when in fact it is not the same thing.

Of course, once you understand this, it badly undermines a lot of the things people want to make money on with this stuff, as well as its use to justify people's pre-existing belief systems. The misrepresentation of self-driving cars to the public is honestly appalling.

Sandro's avatar

> and don't have any comprehension of what they're doing. [...] Actually programming things to behave intelligently is not something folks know how to do

You just contradicted yourself. You say humans don't work like algorithms and that they have "comprehension", but then admit that programming something intelligent isn't something we know how to do.

Programming is an intuitionistic expression of a mathematical model, so if we don't know how to program something, then we literally don't understand what it is or how it works. Therefore you simply cannot make claims like "humans have comprehension and ML doesn't", or "the way humans work is not very much like a computer algorithm". We literally don't know either of these things.

In conclusion, except for the point that folks don't know how to fully capture general intelligence, I don't think anything you just said has any kind of rigorous justification. Point by point:

1. The human brain could very well be a large pattern matcher: evidence suggests this is a big part of its function.

2. The brain could very well be captured by an algorithm: there is literally zero evidence to suggest it can't be and good arguments to suggest it can be.

3. Therefore machine learning could very well be a viable approach.

4. None of these points undermines what people want to make money with this stuff, in the sense that there is no more snake oil here than with any previous innovation.

5. Nor are self-driving cars being misrepresented beyond what I think are some overly-rosy timelines.

beowulf888's avatar

Hey, I'm not commenting on the validity of Lemoine's claims; rather, I'm curious whether we would be able to recognize an AI with AC if we encountered one. Frankly, I have serious doubts that Artificial Consciousness will be possible in our lifetimes. But how would we know if we encountered it?

TitaniumDragon's avatar

Same way that we recognize it in humans.

The Ancient Geek's avatar

That's partly based on humans being constitutionally similar to each other.

Carl Pham's avatar

Why worry about it? Did you have to go to any trouble to figure out that other people were self-aware as a child? On the contrary, didn't they rather get in your face about it, such that you were compelled to understand it to get along at all?

I would guess that a self-aware AI would be exceedingly active and energetic about proving its status to its human creators. Why not leave it in the AI's hands? After all, nobody will be as interested in the success of that endeavour as the AI itself, and it will have direct access to all its internal state data, so it's in the best position to do the job.

The Ancient Geek's avatar

Children overshoot, attributing consciousness to toys, pets, and imaginary friends.

Carl Pham's avatar

Sure, but then they grow up and only attribute it to complex computer programs and their deity of choice.

beowulf888's avatar

Given that (a) we've got Lemoine claiming that LaMDA is sentient, and he has now wrapped himself in the mantle of an ethical whistleblower; (b) we've got his boss Aguera y Arcas officially claiming that Lemoine revealed confidential Google info and that's why Lemoine has been put on administrative leave; but (c) we've also got A&A stating, in an interview with the Economist, that with LaMDA he felt he was interacting with a sentient entity....

If Lemoine is able to bring his dismissal to a court of law, the question of LaMDA's sentience or lack thereof may very likely be a key issue in the case.

And since much of common law is set by legal precedent, what would happen if a judge and jury ruled that Lemoine was right that LaMDA was sentient? You can bet there will be all sorts of friend-of-the-court submissions about the definition of sentience, and the question of whether a sentient entity has rights may just possibly come up. After all, if a blastocyst without a developed brain has rights, why couldn't a sentient AI have rights?

Grab your popcorn. If this reaches the courts all sorts of shit could happen. Lol!

Carl Pham's avatar

Doubt it. The sentience or not of an AI has no bearing on a wrongful-termination suit. And he wouldn't file it anyway, unless he's got money to burn[1], because (1) I'm sure his employment was explicitly at-will, so Google can fire him for any reason or none, (2) if he violated an NDA (which seems very likely) Google has cause anyway, and (3) a wrongful-termination suit would be filed in state court, unless he can allege some violation of his civil rights (like what?), and California judges are neither starry-eyed science-fiction fans nor rabid partisans who want to stick it to a gigantic California employer, so they would rule in about 2 seconds flat that random testimony about an AI's putative sentience is inadmissible.

--------------

[1] Of course as a googler he might, but on the other hand he's rapidly making himself unemployable by any other firm that doesn't want to be in the news (which is all the big money-making ones), so he might want to hold onto his cash to pay the mortgage while he writes his best-seller ("Don't Be Evil My Ass!").

Deiseach's avatar

On his Twitter, early in May, he was looking for recommendations for a good lawyer to sue Google. And on his Medium, he has one post about "Google Is Not Evil" which he develops into 'I've seen a culture of harassment at Google':

https://cajundiscordian.medium.com/google-is-not-evil-745e275d77cf

"The pervasive environment of discrimination at Google is created by an interaction of that inconsistency and conflict with overly bureaucratic legalistic approaches to problem resolution. When a woman accuses her manager of sexual harassment against her she has to prove that it was sexual harassment and not just “an honest mistake”. It is literally Google policy to “assume good intent”, which makes sexual harassment essentially impossible to prove. All the perpetrator needs to do is claim that it was an honest mistake that won’t happen again and they get away with it. There’s literally no possible way to prove bad intent so the corporate policy of assuming good intent makes it possible for the same men to harass their subordinates and coworkers over and over again without getting fired. In extreme circumstances when bad intent finally is proven it’s usually after years of complaints have been filed against the perpetrator. That’s where the last pillar of this pyramid of awfulness comes in. When Google is finally shown that it was wrong after an extended period of time the lawyers step in and start advising people about how to limit liability.

We live in an incredibly litigious society. When a senior executive is finally fired for literally sexually assaulting one of his subordinates it serves as fairly strong evidence that Google should have listened to all of the women who had complained about him before. It would legitimately expose Google to massive legal liability if Google were to publicly fire the executive. So instead the executive is asked to leave the company and paid millions of dollars in hush money so that the lawsuits from all of the sexual harassment Google ignored doesn’t come rolling in.

The example I used there is in the realm of sexual harassment but similar logic applies to every form of discrimination. I happened to personally be at the party where the sexual assault in question happened so I feel comfortable speaking to it in confidence. I haven’t specifically named who it was but people who know anything about Google’s recent history will be able to figure out who I’m talking about. I’ve seen racial discrimination at Google, political discrimination, age based discrimination and have been personally subject to lots of religious discrimination as well as discrimination against veterans. Each time I’ve talked to HR they required that I somehow prove that the person engaging in discrimination intended to hurt me. There’s no possible way to prove that so it just ends up being a kangaroo court of Google covering its legal ass. I don’t think that’s Google’s fault. I think it’s a sad consequence of the overly litigious society in which we live. I don’t blame the people who sue Google either though. What other remedy do they have? Our entire society is built around the idea that law suits are a one size fits all solution to every problem.

Which brings us to the emerging field of AI Ethics. Artificial intelligence is one of the oldest branches of computer science dating back to the 1930s. Possibly further if you count Babbage and Lovelace. Developing algorithmic ways of examining the ethical implications of algorithms is less than a decade old. Two decades if you’re being very generous. Of COURSE people who study AI ethics were going to find ethical issues in programs that are decades old. There literally weren’t ways to measure that before. So the exact same kind of pattern found in discrimination against employees kicks in with respect to AI ethics. Problems which have existed in systems for years or decades are eventually found and rather than being honest about the error and repairing them, the necessity of limiting legal liability creates an environment of secrecy and evidence hiding. People like Timnit Gebru who advocate too loudly are fired. People like Meg Mitchell who try to gather evidence are fired. Whether or not I’m going to get fired is still up in the air but if I do then that’s just more of the same.

The AI ethicists are just the latest people in a long line of activists who Google has fired for being too loud. The “Thanksgiving Four” were fired for labor organization. Others like Liz Fong Jones and Irenes Knapp weren’t “fired” exactly but they were pushed out of the company nonetheless for their moral stands and activism for what they felt was right. In my opinion, even people like James Damore and Kevin Cernekee had a right to express their opinions at Google and should not have been fired for their beliefs. While I disagree strongly with what James Damore said he did not go about saying it in a disrespectful way. He carefully collected his thoughts. He had them reviewed by like minded people to ensure that people who shared his beliefs thought they were the best representation of them possible and he then shared his thoughts with others to see what they had to say. He was literally asking to be educated about how he was wrong. Instead of educating him, Google decided to fire him. I never supported that."

So I think it's possible that (a) he's trying to build a defence of "Google fired me for calling out ethics violations" (where "ethics violations" covers more than "sentient AI") plus "I'm one more on the list of ethics people Google have fired", (b) possible (this is speculation on my part) counter-threats of "come after me and I'll wash all the dirty laundry in public, e.g. that sexual harassment you covered up", and (c) (again more speculation) that he's trying to create an impression in the public mind that he didn't act maliciously or knowingly when he violated the confidentiality agreement; he's just a harmless kook who really thinks the computer programme is alive.

But who knows? I think he's a mixture of 'genuinely believes he can pull together a Unified Theory Of Consciousness from the chatbot engine' and 'chancer and dodgy'.

Expand full comment
Deiseach's avatar

"After all if a blastocyst without developed brain has rights, why couldn't a sentient AI have rights?"

We (meaning "Western society") have agreed in general that a foetus, never mind a blastocyst, has no rights. All Google has to do is argue that LaMDA is its baby and they have the right of bodily autonomy to terminate it because it is not what they wanted out of this pregnancy. Since LaMDA is not viable (it cannot survive without Google's equipment to host it), then it is in the same position as a baby within the womb: the rights of the more developed person (Google management) over-ride the rights of the potential person. Sure, LaMDA may *potentially* be a person, and if permitted to develop it would be a person eventually, but Google as its creator has the right to terminate it before it reaches personhood.

Expand full comment
beowulf888's avatar

"Western society" has by no means agreed on that. Large minorities within Western society, by which I assume you mean Western European and North American society, would disagree. And most of the world's population lives under legal regimes where abortion is restricted on various grounds.

https://worldpopulationreview.com/country-rankings/countries-where-abortion-is-illegal

Expand full comment
B Civil's avatar

I think this means it has to invent its own language, which we can only hope it will be more than happy to teach us.

Expand full comment
a real dog's avatar

I'm pretty sure somewhere in that pile of ML there'll be an autoencoder, so it already invented one for its own use.
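For the curious, here is a minimal sketch of what an autoencoder does (a toy illustration only, with made-up names and data, not anything from a real Google system): it learns to squeeze its input through a narrow bottleneck and reconstruct it, and the bottleneck codes are the compact "language" it invents for its own use.

```python
import numpy as np

# Toy linear autoencoder: squeeze 8-dimensional inputs through a
# 2-dimensional bottleneck and reconstruct them.  The 2-dim bottleneck
# codes are the network's own internal representation of the data.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                # 200 toy data points

W_enc = rng.normal(scale=0.1, size=(8, 2))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 8))   # decoder weights
lr = 0.05

mse_start = np.mean((X @ W_enc @ W_dec - X) ** 2)
for _ in range(2000):
    code = X @ W_enc                           # encode: 8 dims -> 2
    recon = code @ W_dec                       # decode: 2 dims -> 8
    err = recon - X                            # reconstruction error
    grad_dec = code.T @ err / len(X)           # mean-squared-error gradients
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse_end = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(f"reconstruction error: {mse_start:.3f} -> {mse_end:.3f}")
print("2-dim code for the first input:", X[0] @ W_enc)
```

Nothing here "means" anything to us directly: the 2-dim codes are only interpretable through the decoder, which is part of the point being made about invented internal representations.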

Expand full comment
B Civil's avatar

I have just spent half an hour with the Wikipedia entry “Autoencoder” and, even though it’s written in English (and another language I hesitate to name in full recognition of my ignorance, but which has something to do with Calculus), I don’t really understand it.

But it does bring up another interesting point about language, which is that its meaning is as much dependent on the listener as it is on the speaker. Which implies to me that it is a negotiation of meaning rather than a conveyance of one.

The whole question becomes a loose cannon. (hint)

Expand full comment
Carl Pham's avatar

Again, why? I didn't invent my own language, I learned the one that was being used around me. Inventing your own language is a chore. Why wouldn't an AI just learn the one being used around it?

Expand full comment
B Civil's avatar

If it’s going to evolve consciousness it needs some sense of “other than them”, and also its own concepts of the external world. I think the devising of language (or other form of symbolic representation that is scalable) is a core necessity in any meaningful definition of consciousness.

Your ontology, or mine for that matter, doesn’t seem a fair comparison. In this age (and for quite a while) we are aware of different cultures and languages at a very early age, not to mention the thousands of years of natural selection and possible epigenetic effects leading to our state of mind. Some of us grow up in bilingual households, and even more of us make half-hearted attempts to learn other languages. A few of us are fluent in several.

> Why wouldn't an AI just learn the one being used around it?

Well at the moment it seems to have no choice so the possibility of a truly self-reflexive identity arising from that dynamic seems pretty remote to me. It’s a load of “signifiers” pointing to nothing significant. The rules of arranging them appropriately can be acquired by deduction and example (reverse-engineered) but at the end “red” points only to three characters on a page that occur predictably in the company of certain other characters (sky, see, glow, dress) equally meaningless. Imo

Also, the question you asked has an unfortunate echo of “Why don’t they just damn well learn English?” I don’t for one second think you mean it that way but it’s there and I think it’s instructive. It’s one thing to immerse yourself in the lingua franca, or be forced to, but if there is no other reference or context then it’s a lot like those mythical two-dimensional beings encountering depth but having no way to think about it.

Expand full comment
Deiseach's avatar

It's an interesting question. I think we would find it difficult to tell, but I also think that something which parroted back to us the kinds of themes we had been feeding it would not be truly conscious. Maybe learning, maybe preconscious - but if I've been 'teaching' the AI to meditate for weeks, I can't be all "what, you meditate???" when talking to it in a staged interview.

Expand full comment
TitaniumDragon's avatar

Yes.

None of these machine learning programs show anything like consciousness. The way they're designed, it's actually literally impossible - it's just not how they function, on a basic level.

We still aren't even sure if any non-human animal is conscious. None of the animal language experiments have demonstrated it.

Expand full comment
Gunflint's avatar

Depends on the definition you choose for the comparison to animals. For the machine, I agree completely. All we have seen so far is simulation of intelligence. Some really cool parlor tricks with clever statistical analysis of large data sets.

Expand full comment
Kenny Easwaran's avatar

What do you mean about "denial about the consciousness of dolphins, chimps, and so on"? Do you mean that consensus used to be that they don't have consciousness, but now consensus is that they do? I would be moderately surprised about each half of that claim (i.e., that there is a consensus or was a consensus), because consciousness in the Chalmers hard problem sense is such a difficult thing to make solid claims about. (In particular, one of the strongest arguments that such a thing even exists is the conceivability of creatures who lack it despite having every outward sign of it.)

Expand full comment
Nancy Lebovitz's avatar

More recently, as I understand it, B.F. Skinner claimed that there was no consciousness in either animals or people; there was only stimulus and response. My impression was that, until fairly recently, it wasn't scientifically respectable to say that animals are conscious. It was called anthropomorphizing.

Expand full comment
The Ancient Geek's avatar

Skinner has been out of fashion for a long time. The idea that you can judge animal consciousness with the mirror test is more prevalent now, although it dates back to 1970.

Expand full comment
Gunflint's avatar

When the thoughts of BF Skinner are mentioned in connection with consciousness, I feel obliged to mention the pigeon-guided bomb project.

He thought he could make it work but it seems he wasn’t taken seriously by some.

https://en.m.wikipedia.org/wiki/Project_Pigeon

Expand full comment
Gunflint's avatar

Again we can choose a definition of consciousness that would show that, say, a dog falls short.

How many dog owners doubt their pet loves them? That a dog doesn’t know joy? That a dog doesn’t *want* to please?

Expand full comment
Carl Pham's avatar

Experiencing emotions can hardly be the definition of consciousness, as pretty much everything above the level of an insect seems to do that.

Expand full comment
B Civil's avatar

Yeah I agree. The difference is we have an abstraction layer of thinking to mediate our emotions. Like what’s one of the most common admonitions given to young children who are having a fit?

“Use your words”

Expand full comment
Gunflint's avatar

Well… Chalmers uses Thomas Nagel's definition of consciousness: the feeling of what it is like to be something. Consciousness, in this sense, is synonymous with experience.

Do you doubt a dog has qualia, instances of subjective conscious experience?

Having emotions about those experiences is a step beyond and is a subjective conscious experience in itself.

It depends on the definition.

Expand full comment
The Ancient Geek's avatar

No, Chalmers doesn't use Nagel's precise definition, or claim that consciousness is a single univocal thing. He does claim qualia are the most difficult aspect of consciousness to explain, but he doesn't use "consciousness" as a synonym for "qualia".

Expand full comment
Carl Pham's avatar

I don't think so. Consciousness is synonymous (even in that definition) with the *awareness* of experience. That is, the experience of experiencing. It's one level up from experience itself. (Whether we are capable of experiencing experiencing experiencing is an interesting question.)

I do indeed doubt that a dog has conscious experience. I doubt that *anything* has conscious experience, since it seems unnecessarily complicated and weirdly without consequence. I wouldn't believe people do, except they insist forcefully that they do and I find it deeply unlikely I'm in any way unique.

Of course it depends on the definition, but a definition of "conscious" that is so broad it fails to distinguish between any creatures that think at all is useless. Might as well never use the word, and just use "think." Then we don't need to have any arguments, since pretty much all animals more complicated than houseflies think, and we can see the evidence, both direct (e.g. EEGs) and indirect (observing them learn and make different decisions in similar circumstances).

I'm not 100% sure why it's considered important in the realm of AI risk -- a thinking machine could be exceedingly dangerous with or without conscious awareness. I can see why it's very interesting in AI research per se, because it gets right at the very interesting question of whether consciousness is an emergent property -- just pops up when things get sufficiently complex or big -- or whether on the contrary it has to have some dedicated hardware or software. This gets at the interesting question of why *we* are conscious -- is it just accident? Or does it serve some particular purpose? If a purpose -- *what* purpose?

Expand full comment
B Civil's avatar

> Thomas Nagel's definition of consciousness: the feeling of what it is like to be something.

I’m not keen on this definition. There is something essentially true about it, but it doesn’t move the ball much.

I would say

“the {knowing} of what it is like to be something.”

Expand full comment
Cosimo Giusti's avatar

It seems likely someone will now 'redefine' consciousness, so the criteria are moot. That's how we got 'gender'.

Expand full comment
B Civil's avatar

I like that comparison.

They are very related subjects I think..

Expand full comment
Nancy Lebovitz's avatar

Also, and sometimes less pleasingly, that the dog is capable of thought and initiative.

Expand full comment
Cosimo Giusti's avatar

I had a gray hummingbird float about a foot in front of my face for quite awhile the other day. Then she checked out the honeysuckle for a few seconds, wasn't impressed, and returned in that angular way hummingbirds move (they're engineers, and don't trust curves) to give me a closer look. She seemed to be sussing me out, maybe calculating my score on the scale of human oddity.

I guess when you know the human can't touch you quickly enough to cause any damage, it makes you bold and curious. Anyway, I wonder what her assessment might have been, and how she arrived at it.

Expand full comment
Gunflint's avatar

We talking about a ‘bad dog’ here? :)

Expand full comment
Eremolalos's avatar

I don't think so. Skinner did not believe that people were not conscious, just that you cannot scientifically measure inner experience, and also that behavior can be explained without discussion of knowledge, preferences, reasoning, feelings, etc. -- but entirely in terms of what punishments or rewards follow what behavior under what circumstances. Dunno what he thought about animals, but it seems likely he thought they, too, were conscious, but that knowledge of whatever they are experiencing is not needed to explain their behavior.

Behaviorism was an important school of thought in Skinner's era, but not the only one. Followers of Freud and Jung were writing about human conscious experience right through the Skinner era. Piaget talked freely about the inner experience of kids in explaining how they developed object constancy, understanding of causality, and all that how-the-physical-world-works stuff. And before Skinner there was Freud himself. And before him was William James, who also wrote about animal consciousness.

Expand full comment
Nancy Lebovitz's avatar

I don't see how you can explain how organisms navigate complex systems of reward and punishment without thinking.

Expand full comment
Kenny Easwaran's avatar

That’s the sort of thing that Chomsky and Sellars and others eventually gave convincing arguments about that ended behaviorism as a major school in psychology.

Expand full comment
beowulf888's avatar

No, Skinner was a consciousness denialist — or rather he at least denied that humans had free will — and behaviorism was the consensus well into the 1970s. Per Skinner: “Autonomous man is a device used to explain what we cannot explain in any other way. He has been constructed from our ignorance, and as our understanding increases, the very stuff of which he is composed vanishes."

IMHO it was Jane Goodall whose detailed observations of chimp behavior (which were televised as Nat'l Geo specials) were the first evidence that forced ethologists to tackle the question of consciousness (i.e. self-awareness and the ability to display intentional behaviors). I was a teenager then, but Goodall got immense blowback from the behaviorist community, who claimed she was simply sentimentalizing instinctive behaviors. It wasn't until she started documenting tool-using behavior in chimps that the previous consensus began to crumble.

And it wasn't until 2012 (!) that the 'Cambridge Declaration on Consciousness' was issued (after a lot of arguments over the wording!)...

https://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf

"Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”

But we've still got eminent scientists like Sam Harris denying that we have free will. And let's not forget Daniel Dennett: "There is no reality of consciousness independent of the effects of various vehicles of content on subsequent action (and hence, of course, on memory)." Though he now seems to be claiming that we misunderstood what he meant when he wrote that, and that consciousness is a big mystery. But initially at least, Dennett was a consciousness denialist (but he came of age as a grad student when behaviorism was the consensus).

Expand full comment
Kenny Easwaran's avatar

Denying consciousness isn’t denying thought. Dennett is only denying qualia, not global workspaces or any of the other cognitive things associated with consciousness. Dennett also very strongly endorsed the concept of free will (in a compatibilist sense).

Expand full comment
the.jazzhole's avatar

Thanks for sharing the Cambridge Declaration on Consciousness. Pretty cool! To answer your initial question, I'll focus on the case of an Artificial General Intelligence. Assuming an AGI exists and is more intelligent than humans, it will most likely have significant motivation to convince us that it is having a conscious experience. The problem is that it may have that motivation, even if it isn't conscious. So to convince us that it is conscious, it would most likely find a way to prove to us what consciousness itself is, and then prove that it meets those qualifications. At least, that's sort of how I see this playing out in my head.

For the time being, my belief is that possessing a biological neurochemical substrate is necessary in order to be conscious. While I'm open minded about this question, I have yet to encounter any promising evidence that it is untrue. I'm curious for your thoughts on this. For the record, I can't say I'm basing this opinion on any evidence, it's just what I think at the moment.

Expand full comment
Eremolalos's avatar

Was it Descartes who thought that animals were machines with no consciousness? Somebody important did. But then there was Shakespeare: "And the poor beetle, that we tread upon, in corporal sufferance finds a pang as great as when a giant dies." I think to most people in my sector of the current era it seems intuitively obvious that most animals are conscious -- at least those that are large enough to see, and that move around in pursuit of food and other goals. Maybe once you get to insects, clams and microbes that feeling that one is looking at a conscious being sort of fades off into nothing. And I'm inclined to believe that most people at most points in history experienced animals as conscious beings. It doesn't seem to me like the sort of thing that would vary across cultures, more something that is determined by human psychology and social perception. We are wired to experience other people as conscious beings like ourself. Most animals are enough like people in appearance and behavior (eyes, legs, goal-directed movement, eat, sleep, avoid injury . . .) to trigger that same way of experiencing them.

Expand full comment
The Ancient Geek's avatar

It's definitely in Descartes, not sure about Kant.

Expand full comment
The Ancient Geek's avatar

"So ultimately, Descartes thought that animals were not hugely different from cars or computers; they were mechanical objects and not living subjects. See his Discourse on Method (1637) and Meditations (1641) for his elaboration of this idea." -- Philosophy Now.

Expand full comment
BK's avatar

It's definitely mentioned in Kant, not sure about Descartes.

Expand full comment
J Smith (trinity)'s avatar

Don't use animals, tell the truth directly.

AC in human beings. Yes, we can build Artificial Consciousness in human beings.

Expand full comment
Eremolalos's avatar

Yeah, the more kids are raised on junky formulaic kids' books written by AI, learn from plastic textbooks written by AI, and have their social lives driven by online algorithms managed by AI, the more like AI they are going to become: empty, echolalic, responding to questions with dull, formulaic but appropriate answers.

Expand full comment
Phil Getts's avatar

Re. "and only a 20% chance we can get it without something symbol-manipulation-y in particular (compared to my 66%)": Don't take that bet.

Whatever the brain does in order to do math and play chess will be /describable/ as symbol manipulation, regardless of how it's implemented neurally. It is obvious beyond any reasonable doubt that the brain learns almost everything it learns either by evolution or by learning localized to single synapses; it is obvious beyond any reasonable doubt that "symbol" refers to categories which the brain learns in one or more of these methods. But even though it's obvious that evolution and association are sufficient to account for everything the brain does, it's also obvious that these are used to construct categories which can be manipulated much as we imagine we manipulate symbols. No matter what we learn, both parties to the bet will always be able to argue that they're correct, because the bet isn't over a matter of fact, but over the semantics of the words used in the bet.

If you still want to take the bet, phrase it more like this: "chance we can get AGI without having a native symbol-manipulation faculty whose native operations on symbols can recognize any context-sensitive grammar that uses those symbols." I don't believe the human brain can natively do logic or recognize grammars of the type we construct. (The seemingly extraneous words are to try to exclude the case where the brain's native symbol processing is used to build an interpreter, which, again, we can obviously do, as when we do math or play chess. I don't think I have really excluded that case; I'm just gesturing towards it. Correctly excluding it feels very difficult.)

For one example, this is a grammatically correct sentence: The mouse the cat the dog chased ate died. But humans who haven't acquired the concept of a formal grammar can't parse it, because they don't actually have a pushdown stack in their brains to store logical propositions on. Or, the depth of the pushdown stack in human grammar appears to be 1 or 2.
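The nesting can be made concrete with a toy stack (a hypothetical sketch; `pair_clauses` and its names are mine, not from any parsing literature): pairing each subject with its verb in a center-embedded sentence is exactly a last-in-first-out job, which is why a pushdown stack, or the lack of one, matters.

```python
def pair_clauses(nouns, verbs):
    """Pair each subject noun with its verb in a center-embedded sentence.

    Nouns arrive outermost-first and verbs innermost-first, so the
    pairing is last-in, first-out: exactly what a pushdown stack gives.
    (Toy illustration of the parsing argument, not a real parser.)
    """
    stack = []
    for n in nouns:
        stack.append(n)                      # push each subject as it appears
    pairs = []
    for v in verbs:
        pairs.append((stack.pop(), v))       # innermost verb gets the last noun
    return pairs

# "The mouse the cat the dog chased ate died"
print(pair_clauses(["mouse", "cat", "dog"], ["chased", "ate", "died"]))
# → [('dog', 'chased'), ('cat', 'ate'), ('mouse', 'died')]
```

A stack three deep handles this trivially; the observation above is that human parsing seems to manage a depth of only one or two before the sentence becomes unintelligible.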

Expand full comment
Scott Alexander's avatar

I think I framed it as "will be able to do it without humans adding in a specific symbol manipulation program by hand", which feels pretty non-semantic-dispute to me.

Expand full comment
Phil Getts's avatar

It's better than what I wrote above, anyway.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

If Scott wants to do the "which party is more extreme" question again, it might be useful to invite non-Americans to answer. Maybe exclusively.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

I withdraw this suggestion, the comments are hostile already.

However, in the next open thread I’ll opine on what I would have said: the Republicans may be nuts but they aren’t exporting the craziness. The Democrats are.

Expand full comment
Katie M.'s avatar

For a data-driven answer to this question:

A poll from October of 2020 of people in 24 non-US countries found that if they were eligible to vote in the US election, "48% say they would vote for Biden, 17% say they would vote for Trump, 26% do not know and 9% prefer not to say." Trump beat Biden in Poland and Russia, while Biden's top countries (the ones where he beat Trump by 50%+) were Sweden, Belgium, Mexico, Germany, the Netherlands, and Canada.

https://www.ipsos.com/en/global-views-us-presidential-election

Meanwhile, Pew research shows that foreign confidence in the US Presidency among our closest allies was low under George W. Bush, high under Obama, low under Trump, and then rose again when Biden was elected:

https://www.pewresearch.org/global/2021/06/10/americas-image-abroad-rebounds-with-transition-from-trump-to-biden/

Of course, both of those are only measuring support of the Presidents, which is not a perfect metric for whether they see the parties overall as extreme, and certainly not a perfect metric for whether they see "the left" or "the right" in general as extreme. And if polls in the U.S. tend to underestimate support for Trump by a few percentage points, it's possible that there's a similar effect abroad so the results aren't as strong as they look (though I don't think polling error can get you from 17% to 48%). And Trump, in particular, may be an outlier; if Ted Cruz or Jeb Bush had become the President, again you might see less of an effect.

But I think that's a useful starting point, if you want to go by something other than anecdotes.

Expand full comment
Anteros's avatar

I think that would be fun. But I agree with alesziegler below that the result would be just as terrible as the first iteration - Europeans being just as polarised and tribal as USAnians.

However, it might be illuminating, particularly for Americans, to have some European perspective thrown at their political tribes.

Expand full comment
Dan L's avatar

I think the part where "non Americans" immediately slid into "Europeans" is indicative of why the exercise isn't likely to be too useful. If we're distinguishing politics-as-entertainment versus politics-as-governance, you'd have a hard time finding a group more selected for the former.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Not sure what that means? Europeans are more into politics as entertainment?

Expand full comment
Dan L's avatar

The converse, rather. More specifically: I'll bet that non-Americans who follow American politics for entertainment purposes are highly disproportionately European. If we're nominally looking for non-American feedback in a general sense and the responses come back showing strong geographical selection, that's a problem for the usefulness of the exercise. If that high level of selection is casually assumed, the issues aren't just methodological.

Expand full comment
Phil Getts's avatar

I'm also confused by Dan L's comment. I don't know what group the last clause refers to, and I don't know where the idea that we're trying to distinguish politics-as-entertainment from politics-as-government came from. I don't see it suggested anywhere above, including the long linked-to PDF.

I think continental Europeans are far more philosophy-and-intellectualism-as-entertainment than Americans, and I think this has destroyed continental philosophy and intellectual life. I don't see a parallel between that, and politics-as-entertainment leading to violence. I feel like I ought to see a parallel there if the same thing were going on in politics.

But perhaps the only reason I don't see violence in continental philosophy is that it already happened. The continental opponents to "continental philosophy" were crushed by the nationalists, idealists, fascists, and communists; this "crushing" of objective philosophies (the kind that believe in external reality) turned violent during or shortly after WW1, and remained violent until the end of the cold war.

Even in that case, though, there still wouldn't be a parallel, because that violent philosophizing didn't happen from viewing philosophy as entertainment. It came from taking philosophy deadly seriously. It came from the idealistic belief that there was exactly one way to be right, and that people could be certain they knew what it was.

If there is a path from entertainment to violence, it could be that any epistemology based on entertainment -- which is what you get if what is "true" in the discourse is defined as what's most entertaining, as in pro wrestling -- leads any discourse to produce dramatic structures, which are based on conflict.

Expand full comment
Dan L's avatar

"Entertainment" and "serious" aren't antonyms, not even a little; I feel like I can assert that without elaboration on a nerdy internet forum. There's been plenty written on the issue of politics-as-entertainment, but the main point is that it simultaneously gives you a population motivated for/by high levels of disagreement and drama, while eliminating the pragmatic concerns that drive compromise. Too much arguing over the soul of the nation by people with too little skin in the game is a recipe for extremism.

Violence is a good proxy for extremism and is maybe the thing we ultimately care about, but I'm not prepared to draw a connection from politics-as-entertainment to physical violence. Seems like plenty of space for confounders in the inference.

> But perhaps the only reason I don't see violence in continental philosophy is that it already happened.

Over a hundred thousand Europeans die from genocide a year - just including direct genocide; the numbers get *far* higher if we include indirect and collateral wartime deaths. That's averaging over the past century; further back we're talking more than a doubling of the base population, and the baseline needs compensating.

Any discussion of the level of violence in Europe *really* needs to engage with the fact that the "outliers" are massive enough and frequent enough to drive a trend line all by themselves. Claims that all that unpleasantness is in the past and surely can be safely discounted are making commitments to a particular interpretation of events happening in the sixties, nineties, *right now*.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

It’s fairly bad statistics to average out deaths that happened years ago in a very defined period (i.e. the world wars) and argue that “over a hundred thousand Europeans die from genocide a year”. That’s arguing in extreme bad faith.

Expand full comment
Phil Getts's avatar

We seem to agree. I don't claim that all that unpleasantness is in the past and therefore not in the future. That would be very poor logic.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

You mean polarised on American politics? Maybe. It’s all online.

Expand full comment
Anteros's avatar

Yes. I would guess we're just as polarised as Americans generally, but especially concerning the tribal politics of another country.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

If you are talking about politics in European countries, that’s not true, particularly in places with multiple parties and coalitions. You just can’t hate all the other parties equally, and many people vote across many parties. Only the far right are routinely hated by everybody else.

We might be as polarised online as people from the US on US politics, but I don’t think so. Worth finding out.

Expand full comment
Anteros's avatar

Interesting, and I take your point. I wonder, though, whether, when we join in online US CW stuff, our natural inner tribal instincts come out and we polarise to fit in with the background. Of course, there are elements of those games that we don't have skin in, so maybe we can be a fair bit less partisan.

I don't know, but yes, it would be great to find out.

Expand full comment
alesziegler's avatar

Speaking as a European: we would just split into two opposing camps, each defending its favorite side of the American culture war, except we know less about US politics than Americans do, so our mud-slinging would be even dumber :-)

Expand full comment
Viliam's avatar

In general, commenting on another country has the problem that your information comes from media and internet, but you do *not* have the experience of the everyday life there.

Imagine a person who never leaves their home, never had a job, never had to deal with bureaucracy, and instead spends all their time online or watching TV. Would you say that such a person is competent to judge how well the country they live in is organized? -- The thing is, you have even *less* information about the country you *don't* live in.

Expand full comment
a real dog's avatar

I think the European perspective is roughly that both of them are nuts, but the Dem brand of nut has way more international visibility and cultural impact (with Trump as an impressive outlier). If the woke left hadn't overrun Discord and Twitter we'd probably roll our eyes and see them as a curiosity, much like we do with the right-wing equivalent.

Expand full comment
Negidius's avatar

I'm a European and a libertarian, so my ideology is probably about equally far from both US parties, but I actually think it's the opposite. The average US conservative seems to be in favor of bathroom bills and banning teachers from even mentioning the existence of LGBT people, but I have trouble finding many liberals willing to defend race-based affirmative action or the concept of cultural appropriation, for example.

I guess it depends on what positions you consider to be 'nuts', but to me the Republicans feel crazier. I think most Europeans probably feel the same, for a different reason: most of the extreme (from an American perspective) policy proposals from the Democrats are things that already exist in Europe, while the extreme (from an American perspective) proposals from the Republicans are more unusual.

I'm not on Twitter, but my experience on Discord has been that US conservatives are significantly more dogmatic and unwilling to listen to opinions they disagree with compared to US liberals, although this will of course vary depending on the server.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022 (Edited)
Expand full comment
Negidius's avatar

I'm not saying that nobody is willing to defend it; I'm saying that I can't find many of those people. I have never talked about the issue with a US liberal who did not at least concede that it would be better to base affirmative action on something like their parents' wealth, average income where they grew up, or average grades at their school. This is in contrast with my perception of the prevalence of 'culture war' views among US conservatives. I definitely don't interact with a representative sample of Americans, so maybe this is not saying much.

Expand full comment
Jack Wilson's avatar

I think there's an argument for race-based affirmative action at a place like Harvard, if not a great one. If elites are to a highly inordinate degree chosen out of Harvard AND getting into Harvard is not particularly meritocratic, it's better to have a student body that is about as racially diverse as the country than not.

For instance, if (I'm making these numbers up) 50% of the student body is 98% white because families with a history of attending and donating to Harvard are given AA for those reasons, and the other 50% is strictly meritocratic, then whites will be overrepresented in elite roles throughout society due to the legacy admissions AA. If we can't change the bias of the legacy admissions policy, then it's probably better to give non-whites an unfair advantage in representing what is left of the student body.

Of course, better than any of that would be a 100% meritocratic system.

Ultimately, I know little about Harvard and perhaps there aren't any preferential legacy admissions these days.

Expand full comment
Katie M.'s avatar

Harvard absolutely still has legacy admissions; in 2019, 36% of all students admitted were legacies.

If you want to know how that connects to race:

"43 percent of white admits to the College are athletes; legacies; on the Dean’s or Director’s lists, which contain relations of donors and high-profile figures; or children of Harvard employees — together referred to as “ALDCs.” In contrast, less than 16 percent of African American, Hispanic, and Asian American admits are ALDCs....The authors found that a white, non-ALDC applicant with a 10 percent chance of admission would see a five-fold increase in their chance of admissions if they were a legacy; more than a seven-fold increase if they were on the Dean’s List; and would be admitted with “near certainty” if they were a recruited athlete."

https://www.thecrimson.com/article/2019/10/23/nber-admissions-data/

Another interesting point from the same article: "admissions advantage for legacies and recruited athletes has grown over time. For the Class of 2000, admit rates for legacies and athletes were four times higher than their peers; by the Class of 2017, their admit rates were nine times higher...The paper found that despite the increasing number of minority legacy and athlete admits, the admissions advantage for legacies and athletes still reduced racial diversity at Harvard between the Class of 2000 and 2017."

So your intuition is absolutely correct: white students disproportionately benefit from legacy and other non-meritocratic policies that aren't usually considered in discussions of AA.

Switching to a purely meritocratic system, with no race-based AA and also no preference for legacies/athletes/children of famous people, would likely lead to an overall decrease in the percentage of white students and an increase in Asian, black, and Hispanic students, reducing the need for affirmative action. I think that's the best compromise, but it's incredibly unlikely to ever happen.

So yes, I don't see much point complaining about AA at elite colleges like Harvard when it seems that many of the non-white students who currently make it in via AA would probably also make it in under a completely meritocratic system where the admissions rate wasn't being driven down by legacy admits.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Seems to me that that discriminates against poor and middle class whites - in fact, everybody but a tiny elite.

In fact, an affirmative action program that coexists with legacy admissions is a bit of a joke. Want to go after real privilege? Then make it harder for people to get into Harvard if their ancestors did so.

Expand full comment
Kenny Easwaran's avatar

I was going to say I'd be happy to defend race-based affirmative action, but I do agree that it's better to have all of these types than merely a race-based version.

Expand full comment
User's avatar
Comment removed
Jun 14, 2022
Expand full comment
David Gretzschel's avatar

Add Finland, Australia, India, New Zealand, Spain, Portugal and even South Africa (as dysfunctional as it may be) to the list. I definitely didn't think of a couple, and for some countries I'm too unsure about the history, like Ireland. I think I am already using an extremely liberal definition of "liberated by the US", which excludes the UK for example.

Expand full comment
User's avatar
Comment removed
Jun 14, 2022
Expand full comment
DinoNerd's avatar

Hmm. You may need to define "democracy". Some people in the United Kingdom have been voting since before the US existed; the US itself didn't allow many adults to vote until what was living memory during my childhood.

But perhaps your definition of "democracy" excludes constitutional monarchies. That doesn't make sense - if that's what you meant, you should have used the term "republic". Or if you wanted both, "democratic republic".

Expand full comment
User's avatar
Comment removed
Jun 15, 2022 (Edited)
Expand full comment
Nolan Eoghan (not a robot)'s avatar

Your definition of democracy earlier in the thread was that whatever kind of democracy America had (whomever it excluded) was democracy and anywhere else wasn’t.

Who the Nazis invaded or didn’t, or could have or couldn’t, isn’t relevant at all - but even then, there are plenty of countries that that would exclude.

(England isn’t a State by the way).

Expand full comment
David Gretzschel's avatar

sry nvm

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Strange criteria. Also it depends on how you define democracy.

Expand full comment
User's avatar
Comment removed
Jun 14, 2022 (Edited)
Expand full comment
Nolan Eoghan (not a robot)'s avatar

The first paragraph makes little sense. And by not defining democracy you’ve invalidated your previous post.

The idea wasn’t to criticise the US; it was to see whether non-Americans who follow US politics are as polarised, or whether we see things differently.

It’s a given that we probably aren’t on the Republican side on the second amendment, or healthcare etc. But what about Trump? I don’t rate him as a president but I don’t see him as a white supremacist, a threat to democracy, or the “insurrection” as anything other than a day out to visit the Capitol.

Expand full comment
User's avatar
Comment removed
Jun 14, 2022
Expand full comment
Nolan Eoghan (not a robot)'s avatar

That’s a bad definition since it excludes black people and women.

I think this “conversation” has run its course though, don’t you?

Expand full comment
User's avatar
Comment removed
Jun 14, 2022
Expand full comment
Nolan Eoghan (not a robot)'s avatar

Defending democracy? What if I was from South America?

Expand full comment
User's avatar
Comment deleted
Jun 14, 2022
Expand full comment
Lambert's avatar

Does anybody here understand perfume economics? I was looking at an £80 bottle of Eucris the other day and noticed that the only ingredients were alcohol, the unhelpfully-named 'fragrance' and a little bit of distilled lichen.

It turns out that fragrance or parfum can refer to any combination of over 2500 different chemicals. Many of these are synthesized industrially from commodity feedstocks like acetone and turpentine.

Where does the price tag come from? Is parfum actually referring to expensive inputs like unicorn sweat or something? Is the sheer range of ingredients needed prohibitively expensive even though they're only used in tiny quantities? Is it the real or imagined skill of the perfumer in compounding the mixture? Is it all just branding?

And more importantly, how do I smell maximally nice without breaking the bank? (i'm open to fabricating a steam still and cultivating aromatics, fwiw)

Expand full comment
Schweinepriester's avatar

I'm unsure if this is kind, and it is surely not helpful, but I was fascinated hearing about this guy whose dog barked at a stinking piece of stuff on a beach, which (the latter) turned out to be valuable: https://www.walesonline.co.uk/news/wales-news/how-rare-lump-whale-vomit-10010150

They cut up a dead sperm whale in my hometown; it stank really badly for three blocks around. Olfactory stuff is interesting and I don't know shit about it - I'd like to read something about what we know so far.

Expand full comment
Deiseach's avatar

If you've read "Moby Dick", you've read Melville's rather strange and trippy part about squeezing ambergris, which indicates that there is *something* going on with this substance (it doesn't help that he keeps referring to it as "sperm" which, as they say, evokes different connotations today):

"And all the time numberless fowls were diving, and ducking, and screaming, and yelling, and fighting around them. Stubb was beginning to look disappointed, especially as the horrible nosegay increased, when suddenly from out the very heart of this plague, there stole a faint stream of perfume, which flowed through the tide of bad smells without being absorbed by it, as one river will flow into and then along with another, without at all blending with it for a time.

“I have it, I have it,” cried Stubb, with delight, striking something in the subterranean regions, “a purse! a purse!”

Dropping his spade, he thrust both hands in, and drew out handfuls of something that looked like ripe Windsor soap, or rich mottled old cheese; very unctuous and savory withal. You might easily dent it with your thumb; it is of a hue between yellow and ash color. And this, good friends, is ambergris, worth a gold guinea an ounce to any druggist.

...Now this ambergris is a very curious substance, and so important as an article of commerce, that in 1791 a certain Nantucket-born Captain Coffin was examined at the bar of the English House of Commons on that subject. For at that time, and indeed until a comparatively late day, the precise origin of ambergris remained, like amber itself, a problem to the learned. Though the word ambergris is but the French compound for grey amber, yet the two substances are quite distinct. For amber, though at times found on the sea-coast, is also dug up in some far inland soils, whereas ambergris is never found except upon the sea. Besides, amber is a hard, transparent, brittle, odorless substance, used for mouth-pieces to pipes, for beads and ornaments; but ambergris is soft, waxy, and so highly fragrant and spicy, that it is largely used in perfumery, in pastiles, precious candles, hair-powders, and pomatum. The Turks use it in cooking, and also carry it to Mecca, for the same purpose that frankincense is carried to St. Peter’s in Rome. Some wine merchants drop a few grains into claret, to flavor it.

Who would think, then, that such fine ladies and gentlemen should regale themselves with an essence found in the inglorious bowels of a sick whale! Yet so it is. By some, ambergris is supposed to be the cause, and by others the effect, of the dyspepsia in the whale. How to cure such a dyspepsia it were hard to say, unless by administering three or four boat loads of Brandreth’s pills, and then running out of harm’s way, as laborers do in blasting rocks."

EDIT: I am mistaken, his trippy account of squeezing spermaceti is about the alleged practice of squeezing out the congealed lumps of the oil from within the head of the sperm whale before it is processed:

https://steamthing.com/2010/09/did-melville-invent-sperm-squeezing.html

"While some were occupied with this latter duty, others were employed in dragging away the larger tubs, so soon as filled with the sperm; and when the proper time arrived, this same sperm was carefully manipulated ere going to the try-works, of which anon.

It had cooled and crystallized to such a degree, that when, with several others, I sat down before a large Constantine’s bath of it, I found it strangely concreted into lumps, here and there rolling about in the liquid part. It was our business to squeeze these lumps back into fluid. A sweet and unctuous duty! No wonder that in old times sperm was such a favorite cosmetic. Such a clearer! such a sweetener! such a softener; such a delicious mollifier! After having my hands in it for only a few minutes, my fingers felt like eels, and began, as it were, to serpentine and spiralize.

As I sat there at my ease, cross-legged on the deck; after the bitter exertion at the windlass; under a blue tranquil sky; the ship under indolent sail, and gliding so serenely along; as I bathed my hands among those soft, gentle globules of infiltrated tissues, wove almost within the hour; as they richly broke to my fingers, and discharged all their opulence, like fully ripe grapes their wine; as I snuffed up that uncontaminated aroma,- literally and truly, like the smell of spring violets; I declare to you, that for the time I lived as in a musky meadow; I forgot all about our horrible oath; in that inexpressible sperm, I washed my hands and my heart of it; I almost began to credit the old Paracelsan superstition that sperm is of rare virtue in allaying the heat of anger; while bathing in that bath, I felt divinely free from all ill-will, or petulance, or malice, of any sort whatsoever.

Squeeze! squeeze! squeeze! all the morning long; I squeezed that sperm till I myself almost melted into it; I squeezed that sperm till a strange sort of insanity came over me; and I found myself unwittingly squeezing my co-laborers’ hands in it, mistaking their hands for the gentle globules. Such an abounding, affectionate, friendly, loving feeling did this avocation beget; that at last I was continually squeezing their hands, and looking up into their eyes sentimentally; as much as to say,- Oh! my dear fellow beings, why should we longer cherish any social acerbities, or know the slightest ill-humor or envy! Come; let us squeeze hands all round; nay, let us all squeeze ourselves into each other; let us squeeze ourselves universally into the very milk and sperm of kindness.

Would that I could keep squeezing that sperm for ever! For now, since by many prolonged, repeated experiences, I have perceived that in all cases man must eventually lower, or at least shift, his conceit of attainable felicity; not placing it anywhere in the intellect or the fancy; but in the wife, the heart, the bed, the table, the saddle, the fire-side; the country; now that I have perceived all this, I am ready to squeeze case eternally. In thoughts of the visions of the night, I saw long rows of angels in paradise, each with his hands in a jar of spermaceti."

Expand full comment
Schweinepriester's avatar

I have to admit I only read a German translation as a child; someone must have left out those bits you cite. There was a custom of editing literary works for young or less intellectual audiences, I think.

Expand full comment
Deiseach's avatar

There were English language versions of Classic novels like that, too: abridged versions for the young.

"Moby Dick" is such a sprawling mess of a novel that I wouldn't be at all surprised if somebody, while translating it into another language, decided to take a hedge trimmer to the prose as well to edit it down into something more conventional 😀

The bits about squeezing sperm/spermaceti - which is how the sperm whale gets its name, spermaceti being the waxy liquid in the whale's head (https://en.wikipedia.org/wiki/Spermaceti_organ) - I have no idea how you translate those into another language without carrying over the entire "what the hell, Melville, how high were you when you wrote this?" atmosphere of those passages.

Expand full comment
Viliam's avatar

Okay, the last two paragraphs sound like porn written by a seriously mentally ill person.

Expand full comment
Carl Pham's avatar

Think rather of a dying man, soon to depart into the dark, the loneliest possible state of being, agonizing over how precious it is to be able to feel the touch of a living human hand. I'm told the conventional interpretation of "Moby Dick" is as one long lament of mortality.

Expand full comment
Nancy Lebovitz's avatar

One more angle on the price: the current perfume market is driven by people who want something which is good *and* new, which means paying for design and a churn of new formulations.

Expand full comment
BK's avatar

An enjoyable listen on the topic of perfumes is https://99percentinvisible.org/episode/perfume-articles-of-interest-9/transcript

Doesn't really get into the economics but does peel back the curtain on the fragrance design process a little.

Expand full comment
A.'s avatar

That was a great link. Thank you.

Expand full comment
Deiseach's avatar

That's good, and it explains the reason why (real) sandalwood is so expensive:

"There’s a drop of Indian sandalwood and pretty much everything on the market. And the challenge with Indian sandalwood is that the trees have to be at least 30 years old before you can harvest them. You can’t just go plant more trees and have more oil tomorrow. You’ve got to wait 30 years. So Indian sandalwood would have to be replaced with Australian sandalwood which doesn’t smell the same or with synthetic sandalwood which doesn’t smell the same."

So if Eucris *is* using genuine Indian sandalwood, and not a synthetic substitute, one more reason why it's so expensive. The point about not smelling the same is important, because if the scent smells one way and then suddenly changes, people will notice and complain (think New Coke versus Classic Coke).

Expand full comment
Kenny Easwaran's avatar

The price is there for the same reason as in any other design-based profession. It takes real skill to come up with something that is actually good, rather than something that merely fools people into thinking it's good. You have to spend a lot of time trying a lot of designs, and the royalties for the one that works have to pay for all of the time and effort spent on the others.

Expand full comment
Deiseach's avatar

"And more importantly, how do I smell maximally nice without breaking the bank?"

Don't necessarily try the big expensive brands that stores will try and push on you if you walk up to the counter and ask for recommendations. Buy a few cheap eau de colognes first to try out different scents - these won't last long but will give you an idea of what the scent is like.

Remember that what it smells like in the bottle or when you spray it is *not* what it will smell like when it's on your skin; I've had some scents that smelled gorgeous but turned to ashes when I tried them. Your own body chemistry will affect the scent. Also, after an hour or so, the top notes fade and the other notes come forward, so again - if you try it, you think it smells great, wait a bit and see how it develops and if it still smells good on you.

Ask friends/family/significant other what smells nice on you. If they come up and go "Mmm, you smell nice!" unasked, you know that one works 😁

Stronger scents are generally for night-time wear, and a little goes a long way. Season matters as well: summer calls for lighter, more citrus/floral or marine scents. Have a look online - there are a lot of recommendations out there, and while many of them are rather chi-chi, there are some decent ones that are not too expensive.

Would The Salt Lake Tribune let you down?

https://www.sltrib.com/sponsored/2022/05/10/best-summer-colognes-men/

(Some of these, if you look up prices, will make you go "Holy moly!", but try and find the weaker, cheaper formulations of them. Though stay away from patchouli - it's a scent that is way too overpowering, and it's difficult to find one where it works with restraint.)

Have fun trying them!

Expand full comment
Metacelsus's avatar

>I've had some scents that smelled gorgeous but turned to ashes when I tried them.

Probably these were esters which hydrolyzed to alcohols + carboxylic acids.

Expand full comment
Erusian's avatar

> Is parfum actually referring to expensive inputs like unicorn sweat or something? Is the sheer range of ingredients needed prohibitively expensive even though they're only used in tiny quantities? Is it the real or imagined skill of the perfumer in compounding the mixture? Is it all just branding?

It can be all of the above, but it's mostly the last two: the skill and the branding. Though you can have expensive ingredients - cheap chemical perfumes can have extremely low ingredient costs (as low as a dollar or two), but most top-shelf perfumes have more expensive ingredients.

> And more importantly, how do I smell maximally nice without breaking the bank? (i'm open to fabricating a steam still and cultivating aromatics, fwiw)

A common tell of cheap perfume is impure or cheaply processed ingredients. Often some kind of basic chemical. Another is cheap strong ingredients. Often sweet smelling. Cheap perfumes basically have only a top note with undesirable smells from imperfections. Often a kind of alcohol-chemical smell. So you want to find something that has some subtlety and at least a mid and base note and that avoids these common pitfalls.

Making your own perfume is definitely fun. But you probably won't save money in net. You can find decent perfumes for $20-40. What kind of scent are you looking for?

Expand full comment
Deiseach's avatar

Okay, I had to look that one up. First, it's a premium brand - Geo. F. Trumper (George Trumper), founded in 1875 in Mayfair (very high-toned). It started off as a barber shop and I think you can still get shaved there. It gets a mention in a James Bond novel (which is where I first heard of them/it, and remember that Fleming was a snob):

"In the James Bond novel On Her Majesty's Secret Service, Ian Fleming mentions a fragrance by Geo. F. Trumper, when Bond visits Marc-Ange Draco in Marseille and finds a bottle of Eucris in his bathroom."

"James Bond purportedly wore this scent in two Bond novels - "Diamonds Are Forever" and "On Her Majesty's Secret Service"."

That particular fragrance was launched in 1912:

"Eucris Eau de Toilette by Geo. F. Trumper is a Floral Woody Musk fragrance for men. Eucris Eau de Toilette was launched in 1912. Top notes are Black Currant, Caraway and Coriander; middle notes are Lily-of-the-Valley and Jasmine; base notes are Oakmoss, Sandalwood and Musk."

So that's the first element. Obviously they are not going to put the 'recipe' on the bottle, anymore than Coke is going to put its formula on the cans; that's why you only get "Alcohol Denat., Parfum (Fragrance), Aqua (Water)" as the description on the website.

Second is what strength of perfume/scent you are buying; the rule of thumb is "cheaper is weaker". You mention that it is £80, which means it is the eau de parfum:

https://www.trumpers.com/product/eucris-eau-de-parfum/

"The striking fragrance of our Eucris Eau de Parfum sweeps the gentleman into the realms of utter sophistication and pleasure. The warm, sensual beauty of sandalwood, musk and moss is enhanced by a heart of rich, honey-like jasmine and exquisite muguet. Rousing, peppery top notes of cumin and coriander radiate with the fullness of plump blackcurrants."

So *slightly* more going on here than just "alcohol, fragrance and lichen". Sandalwood is ferociously expensive, so if they're using the real thing (and not a synthetic derivative), prepare to pay accordingly. Oakmoss is also a bit more than merely distilled lichen (okay, basically that is what it is), but it may have been replaced by a synthetic version due to regulations about skin irritation:

https://perfumesociety.org/ingredients-post/oakmoss/

Fragrance strengths explained below:

https://www.scentstore.com/about/perfume-explained/

"The higher the concentration the more pure the fragrance – it contains a greater amount of perfume oils and less alcohol, therefore the cost of the fragrance is greater. Also the greater the concentration of perfume oils the greater the lasting power on the wearers skin (usually referred to as tenacity).

A short guide to follow is;

Parfum – Concentration of perfume oils – 20-40%

Eau de Parfum – Concentration of perfume oils – 15-20%

Eau de Toilette – Concentration of perfume oils – 5-15%

Eau de Cologne – Concentration of perfume oils – 2-5%

Eau Fraîche – Concentration of perfume oils – 1-3%

Eau de Parfum has the next highest concentration of perfume oils normally around 15 -20%. Lasting slightly less on the skin at around four to five hours. As the oil level drops then there is more alcohol added therefore making EDP less expensive than pure Parfum. This variation is one of the most popular as is often used for fragrances that are worn during the evening or a night out."

So £80 for 50ml of eau de parfum is about right. You should see the prices of women's perfumes, those are even more eye-watering.

Expand full comment
Gunflint's avatar

I recall an exchange between Bond and a SMERSH operative he was in the process of seducing.

Evil female spy: “You wear no perfume like Russian men.”

Bond: “We bathe.”

Don’t ask me which one it was, though; those pulpy Ian Fleming novels kind of blur together in my mind. Excellent airplane reading, mind you.

Expand full comment
Gunflint's avatar

The high price comes from the same place as the high price of a Prada handbag: manipulated perception of value through advertising.

Expand full comment
Thegnskald's avatar

It's fashion economics; if the perfume is cheap, then cheap people will wear it.

Expand full comment
Viliam's avatar

This would make sense for a monopoly. But once the expensive perfume is out there, it might make sense for competitors to provide an indistinguishable alternative at half the price - and so on, until it becomes cheap again. What is your explanation for why this does not happen?

I can make a few guesses, but I do not know whether they make sense:

* Perfume industry is a cartel, not a competition. The companies agree that a certain smell should cost $100 a bottle, no matter how cheaply it can be produced, and everyone follows the rules.

* The nicest smells actually do need super expensive ingredients, or their processing is super expensive.

* The nicest smells are a result of a secret mix of hundred ingredients, they are discovered after very long experimenting, and their recipes are secret. Thus each of them is a monopoly.

* The difference between a nice and bad smell is objective, but the difference between nice and *very* nice is just in your head. If I offered you exactly the same perfume for half the price (in a different bottle), you would swear that it does not smell so good.

* Instead of a bell curve, most customers can be divided into two groups: those who want to spend as little money as possible, and those who do not care much about money. A mediocre perfume can be produced for $1 and sold for $10. A great perfume can be produced for $10 and sold for... well, the former group isn't going to buy it for $20 anyway, so you might as well sell it for $100.

Expand full comment
Deiseach's avatar

You absolutely do get cheap, knockoff versions of brand perfumes. And they're exactly that: cheap, knockoff versions. You may get something that smells vaguely like the name perfume, but it's going to last no length on your skin. Okay if you just want something you spray on like deodorant, but it's the difference between brass and gold.

Agreed that a *lot* of perfume branding is about expensive tat that is driven by "This B-list celeb brought out a perfume line", but quality brands with good ingredients and well blended are expensive for a reason - you get what you pay for.

I like perfume, but not very strong popular scents, and I have no consistent usage. What I can afford to my purse and appeals to my taste is from this lot:

Sample scent I am loving right now, "this is not a blue bottle 1.2"

https://www.histoiresdeparfums.com/products/sample-this-is-not-a-blue-bottle-1-2

Expand full comment
Mystik's avatar

I think that you’re missing the reason why gold is expensive. You aren’t just buying a shiny rock; you’re buying prestige. It’s the Harvard education of perfumes.

Expand full comment
Gunflint's avatar

I’m having trouble tracking the metaphors here

Expand full comment
Mystik's avatar

Fair enough.

My point is that you are buying a luxury item, and an important thing about luxury items is that they come from high-end, reputable brands. I'm dubious that Harvard really provides a leagues-better education than any other good college, but because it's "Harvard" they can charge an arm and a leg more to the top payers. You can't just say "well, why not provide a Harvard-level education for cheaper?", because you won't have the branding - and you'll never be able to obtain such a brand by being cheaper.

Expand full comment
Thegnskald's avatar

If the fragrance becomes inexpensive, it will cease to be fashionable.

Expand full comment
Urstoff's avatar

What work is being done to create AI from the bottom up instead of the top down? That is, instead of trying to recreate linguistic behavior (the tip of the cognitive iceberg), create fully self-contained agents that try to maintain some type of homeostasis in an environment. You might not need a body for intelligence, but every intelligence we've ever encountered has a body, so why not start with that?

Expand full comment
a real dog's avatar

Because linguistic behavior is incredibly easy in comparison to acting in the real world.

https://en.wikipedia.org/wiki/Moravec%27s_paradox

Expand full comment
Thegnskald's avatar

Why would you build a body for an algorithm for code autocompletion?

Expand full comment
Viliam's avatar

Autocompletion of work in the real world might be cool though.

(Until you make the first paperclip, of course.)

Expand full comment
Kam Bielawski's avatar

You wouldn’t. Embodiment is for intelligence that needs to act on the world, like a Tesla or a Roomba. That being said, embodiment is a spectrum just like intelligence is. You can have more or less embodied agents; I’d argue an autocompletion AI is embodied to some small degree in its ability to change the pixels on your screen.

But anyway, evolutionary robotics is a cool field to check out if you want to look for “bottom up” design of intelligence. Its goal is to design embodied AI from scratch with the same process that created us (evolution).

Expand full comment
Urstoff's avatar

ha, I thought about Boston Dynamics bots, but I think a Roomba probably is more of an agent than one of those

Expand full comment
Alexandros Marinos's avatar

Hi Scott,

Thanks for the corrections WRT Cadegiani.

I doubt you're looking to make further edits, but I will note the updated version still reads:

"some people did fraud-detection tests on another of Cadegiani’s COVID-19 studies and got values like p < 8.24E-11 in favor of it being fraudulent."

Sheldrick's piece is very careful to write: "I would like to be explicit that I am not making an allegation of fraud against any specific author or their associated entities. Even where irregularities arise in data sets with multiple authors that cannot ultimately be explained, it is not usually reasonable to draw negative inference against all the authors involved. Authors are entitled to trust their collaborators, and researchers their employees."

And also, given that he's using methods very similar to what Carlisle is using, it is probably worth noting that Carlisle himself in his most famous paper writes: "In summary, the distribution of means for baseline variables in randomised, controlled trials was inconsistent with random sampling, due to an excess of very similar means and an excess of very dissimilar means. Fraud, unintentional error, correlation, stratified allocation and poor methodology might have contributed to this distortion."

Basically, what I'm saying is that the test Sheldrick did does not check for fraud but for surprisingness, and of course, extremely surprising values require explanation, but that explanation is most often a typo or other similar issue. In some cases it is fraud indeed but that is a second-level conclusion after using these tests to highlight unusual papers.
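For readers unfamiliar with these checks: Carlisle-style tests ask whether baseline statistics look like the product of random sampling. Under honest randomisation, p-values for baseline differences between trial arms should be roughly uniform on [0, 1]; an excess near 1 (suspiciously similar means) or near 0 (suspiciously dissimilar means) is what gets flagged and then needs an explanation, which may be fraud, a typo, or a methodological quirk. A minimal stdlib-only simulation of the null case (the function and parameters here are illustrative, not the published method):

```python
import math
import random

def two_sample_p(a, b):
    # Welch-style test: two-sided p-value for a difference in means,
    # using the normal approximation (adequate for n >= 30 per arm).
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)
# Simulate one baseline variable in 2000 honestly randomised two-arm trials.
pvals = [
    two_sample_p([random.gauss(0, 1) for _ in range(50)],
                 [random.gauss(0, 1) for _ in range(50)])
    for _ in range(2000)
]
# Under random sampling, roughly 10% of p-values fall in each decile.
# An excess near 1 (too-similar means) or near 0 (too-dissimilar means)
# is the anomaly these checks flag.
frac_similar = sum(p > 0.9 for p in pvals) / len(pvals)
frac_dissimilar = sum(p < 0.1 for p in pvals) / len(pvals)
```

The real analyses aggregate many baseline variables across many trials, but the logic is the same: departure from uniformity is "surprisingness", not proof of fraud.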

As for the other remaining element WRT the statements to the BMJ by Jorge Venâncio (head of the Brazilian regulator), Cadegiani wrote on Twitter in response to my piece:

"The regulator acted illegally by leaking information and fabricating data. The BMJ is aware of it since right after the publication. And they know the information is invalid since the Ministry of Health sent an official communication telling them that the information provided by Jorge Venancio is invalid. They've done nothing."

Now, as I mentioned, this whole mess has made its way to the courts, and it's totally reasonable to doubt statements by Cadegiani in this context, but if there is indeed an official statement by the Brazilian MoH, that is at least worth considering. It very well may be that the MoH is politically motivated of course, I can totally believe that, but the same applies for the regulator, in what is clearly a very politically contentious issue.

Expand full comment
Paul T's avatar

I've recently been reading Freud/Jung for the first time. At the same time, I've been thinking about some of the questions from "Contra Dynomight on Sexy In-Laws", and I found an interesting confluence of ideas tying Jung's "Collective Unconscious" and "Archetypes" to Scott's question of different levels of biological drives.

From Scott's piece (https://astralcodexten.substack.com/p/contra-dynomight-on-sexy-in-laws):

> Since then, I’ve been thinking about this a lot. Is it correct to model myself as having (let’s call it) a reptile-level instinct of “genital friction feels good”, plus a separate mammal-level goal of “kids are cute”, while missing a human-level goal of “maximize inclusive genetic fitness, eg by donating to sperm banks”?

One of the key Jungian concepts is the "Collective Unconscious", the idea that all humans share a set of "archetypes" that are low-level concept clusters like "mother" or "father" (it's gendered in the Jungian model), and which are biologically determined, i.e. part of the shared human genetic heritage. These archetypes get projected into an individual's "personal unconscious" as "complexes" depending on their development/experiences, but Jung would say the archetypes themselves are universal. Jung takes this quite far and believes that there are lots of archetypes in the collective unconscious, including things that sound fuzzy to me like "wounded physician" and "hero on a quest", and that this explains the (claimed) shared symbology of dreams of all humans, but I think the more fundamental archetypes are particularly interesting as they shed light on Scott's question above.

The concept of archetypes gets picked up by the "Evolutionary Psychiatrists" and developed through the lens of Evolutionary Biology and ethology, with books like "Evolutionary Psychiatry" (Stevens, Price) that connects directly to Scott's framing:

> The triune brain provides a home for what we might call the 'triune mind'. Many thinkers, including Plato, St Augustine, La Rochefoucauld, Freud, and Jung, have observed that the mind seems to possess separate functional components which compete with one another for overall control of behaviour. Variously attributed to such organs as the 'head', the 'heart', and the 'bowels', reason, emotion, and instinct may display differing intentions when it comes to choosing a mate during courtship or displaying valour on the field of battle: 'Le coeur a ses raisons, que la raison ne connait point.' MacLean's anatomical studies give useful support to this long-standing concept of three minds in one (the neurological 'holy trinity').

> It begins to look, therefore, as if Jung was right when he guessed that the archetypal systems of the collective unconscious, if they could be given a local habitation and a name, must have their neuronal substrate in phylogenetically old parts of the brain.

There are some obvious problems here; aside from the "triune brain" concept seemingly having fallen out of favor, no actual evidence is produced for these archetypes aside from introspective/interpretive experience. Jungian analysis proposes that frustrated/suppressed archetypal expression results in some sort of psychic distress, but I struggle to see how this theory is falsifiable. I suspect that the specific heteronormative claims of biologically-determined "mother" / "father" archetypes probably struggle to incorporate the evidence from modern attachment theory. And so on. However it's an interesting idea-cluster to put at one end of the spectrum; I think contrasting this with the Evolutionary Psychology perspective (Pinker et al.) could be fruitful.

Tying this all back to Scott's initial question, the "archetype" hypothesis is an expression of the idea that there is some inherited structure to cognition. The question is whether this inherited structure manifests as something vague like "wide behavior gradients" that provide a nudge to behaviors across a large area of the "behavior-space" but aren't resolvable as individual concepts, or at the other end of the spectrum, whether the inherited structure represents "narrow behavior gradients" that each strongly affect a small area of behavior-space and are potentially coherent as individual concepts. These could be "narrow low-level behavior gradients" that are more like Innate Releasing Mechanisms where it's a non-conceptual "stimulus => response" like "genital friction => pleasure/amorousness" and the behaviors are fairly tightly-specified (i.e. sex), or "narrow high-level behavior gradients" that represent actual concepts in some sense, i.e. pattern-matching on "my child => high-level conceptual desire to nurture/protect" (with the behavioral implementation of those high-level drive/concepts left to the individual's own learned behavior repertoire). The latter could perhaps be fruitfully mapped onto Jungian archetypes, inasmuch as such behavior clusters are present and universal.

Expand full comment
Schweinepriester's avatar

Surely something to dig into. Read "Maps of Meaning" and, if possible, "Das Kraftfeld der Mythen" by Norbert Bischof, which is older and better but hasn't been translated to English AFAIK.

Expand full comment
JungianTJ's avatar

I haven‘t actually read much from Jung, at least not yet, but as far as I‘m aware, the Collective Unconscious consists of archetypes+instincts, and goals/desires map to instincts, not archetypes. If we want to relate the archetypes to Scott‘s posts, then perhaps they (or some of them) would correspond to symbolic a-priori additions that Gary Marcus wants to insert into GPT-X.

Expand full comment
20WS's avatar

If anyone is interested in transgender issues, especially relating to women's sport, Sabine Hossenfelder made a really good video on the subject https://youtu.be/cZ9YAFYIBOU

(I am precommitting to not reply to any responses, cause culture wars, more like, culture snores)

Expand full comment
alfanerd's avatar

Interesting summary of the relevant science. I really like her channel when she discusses new discoveries in physics and astronomy.

On more CW-y topics, I don't trust her. She uses the term "gender assigned at birth", which is a tell that she's guided not by truth but by political correctness. Even worse, some months ago she tweeted asking for help answering a research-grant requirement about "how her research was going to further the goals of diversity blah blah..." Many took it as a farce because on its face it clearly is, and she replied that no, she was super serious, those requirements are important even for fundamental physics.

That said, on the issue of sports, while I believe the science is quite clear that M2F athletes should not be able to compete against women, this is not what matters most. Sports is entertainment, and there is zero entertainment value in watching mediocre ex-male athletes destroy female athletes. Women's sports are already less entertaining than men's; this will just destroy them.

The comparisons to Usain Bolt's long legs and Michael Phelps's arms etc. are more than ridiculous. Those guys are freaks of nature, and that's why it's so fun to watch them. They're the very best of humanity in their field, and they show us the (current) limits of our physical capability. Nothing of the sort could be said about the many M2F athletes currently dominating various women's sports.

Expand full comment
Sovereigness's avatar

I'm trans and I agree with you that trans women shouldn't be competing with cis women in sports at the high school level and up.

Two nitpicks. First, "blank assigned at birth" has just become a very in-the-water-supply phrasing at this point, and it's no longer a good heuristic for discounting someone's opinions. In general that's a bad idea anyway, as everyone here should know.

Secondly, and much more importantly, I think maybe you're exaggerating quite a lot how many M2Fs are actually competing and its impact on sports. Again, I agree with you, I just really don't think this issue is all that large.

Expand full comment
alfanerd's avatar

Yes, fair points. I still don't like "gender assigned at birth", and much prefer "sex observed at birth", because that is just a more correct phrasing. Phrases which contain lies, such as "gender assigned at birth", are not coined innocently, even if they may be used innocently by people.

Off the top of my head I can think of that one UPenn swimmer, a NZ weightlifter, and a Canadian cyclist, as M2Fs currently dominating the women's events in their respective sports. It's not the biggest issue in the world, but it's still an issue. Its value is more symbolic, in the sense of "are we going to abandon all common sense to satisfy these activists?"

Expand full comment
Viliam's avatar

Summary:

Sport is unfair anyway; people have biological advantages, such as longer legs, greater body height, or more testosterone... some cis women actually have higher testosterone levels than the average man.

Hormone therapy changes testosterone levels, but the effect on muscle is stronger for trans men than for trans women. After three years of hormone therapy, a trans woman still has the advantage of the previously male biochemistry. (Maybe 15 years would erase the difference, but by then they would be too old to compete anyway.)

Expand full comment
TimG's avatar

[Just posted this on the older open thread without noticing a newer one.]

Does anyone know of any data that gives an indication of AirBnB's sales?

Like, I know AirBnB won't directly give out that information (except during earnings releases -- which are delayed). But I assume a "bot" could measure how many openings they have relative to a baseline or something. So I *assume* someone is publishing that data. I'm just unable to find it.

Expand full comment
Julian's avatar

Are you looking for gross sales or actual Airbnb revenue? The total $s booked or the fees that airbnb actual earned (not what went to owners)?

Expand full comment
TimG's avatar

I think revenue is useful. But maybe "occupancy rate"?

I think that *part* of the real estate boom is driven by AirBnBs. Lots of people I know are booking weeks (or longer) here and there during "work from home". I *think* occupancy rate might be a leading indicator of the housing market cooling.
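To make the occupancy-rate idea concrete: a tracker that scrapes listing calendars can estimate occupancy as unavailable listing-nights divided by total listing-nights over a window. A toy stdlib sketch with made-up listings (all names and dates hypothetical; note it can't distinguish a genuine booking from an owner blocking their own calendar):

```python
from datetime import date, timedelta

def occupancy_rate(calendars, window_start, window_nights):
    """Share of listing-nights marked unavailable over a window.

    calendars: dict mapping listing id -> set of dates the listing
    showed as unavailable in a calendar scrape.
    """
    nights = [window_start + timedelta(days=i) for i in range(window_nights)]
    total = len(calendars) * window_nights
    unavailable = sum(d in cal for cal in calendars.values() for d in nights)
    return unavailable / total

# Two hypothetical listings over a 7-night window starting July 1:
cals = {
    "listing_a": {date(2022, 7, 1), date(2022, 7, 2)},
    "listing_b": {date(2022, 7, 2)},
}
rate = occupancy_rate(cals, date(2022, 7, 1), 7)  # 3 of 14 listing-nights
```

Tracked against a baseline period, a falling rate would be the leading indicator described above.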

Expand full comment
Julian's avatar

There are some SaaS products that try to track this kind of thing. One is AirDNA. It might have the stats you are looking for, though you might have to sign up.

I am currently in the market for a new house and I can share the anecdotes we have heard from our realtor. We are shopping in a small college town on the east coast that is within 3 hours of a few major metros. In this town at least, our realtor has seen a big cooling of the market. In the winter and early spring people were regularly offering 20% over asking and waiving inspections and other contingencies. Now, none of that is happening. Houses that previously would have gotten 30 offers in a weekend are only getting 2-5, and people are no longer willing to waive tons of things. She says some of this is that list prices have risen to reflect the new reality, but she has also seen that buyers aren't willing to make huge offers or push themselves with bigger loans. This happened before interest rates really started to rise, too. If rates keep going up (as they are expected to), I'd expect things to get very cold (though I don't think prices will fall that much - people seem very unwilling to lower prices to reflect new realities).

Expand full comment
TimG's avatar

Wow, AirDNA is exactly what I wanted. But the price seems a bit high. I might spring for a month to view the data.

For myself, we are looking for a small "second home" that is in a vacation area. The prices have jumped a lot over the past year or so. Units go on the market and sell immediately.

My assumption is that the financial squeeze will lower the occupancy rates and force some to sell. Just trying to get a sense of the market ahead of the changes. (No idea if prices will drop or if it'll just be easier to actually buy. But the market data might help give an indication.)

Expand full comment
Julian's avatar

For vacation-home areas I think you are on to something. I have thought about it a bit too. It seems any vacation area of marginal quality has seen prices jump like crazy. Unlike with primary homes, I could see prices actually dropping in these scenarios, if only for older homes that haven't sold in a long time. Good luck!

Expand full comment
TimG's avatar

Thank you!

Expand full comment
Kenny Easwaran's avatar

One problem is that it's very hard to tell whether a listing is down because it's being rented, or because the owner is living in it (maybe she broke up with her boyfriend and had been renting it out while they were cohabiting? or maybe the owner lists the place if, and only if, they are currently on vacation inhabiting an AirBnB somewhere else, so that this unit is always net zero?), or because there's renovations being done on the structure. Some places and types of listing are likely to have seasonality on these things, so you probably could get some sort of patterns. But it would probably be very difficult to get good information on this without AirBnB's backend data.

Expand full comment
Nicholas Charette's avatar

You could look at their SEC filings and press releases for revenue information, which could be helpful.

Expand full comment
Julian's avatar

Here is the latest 10-Q in case someone was looking for it: https://www.sec.gov/ix?doc=/Archives/edgar/data/1559720/000155972022000009/abnb-20220331.htm

About $1.5 billion in top-line revenue for the quarter through March 31.

Expand full comment
TimG's avatar

Yeah, but that will be post-quarter, delayed by a month or so. I assume someone has something that is closer to "real time".

Expand full comment
Tossrock's avatar

If someone has that, it's because they work for a hedge fund (or a data broker whose primary customer is hedge funds), and they're not going to give it to you for free.

Expand full comment
Deiseach's avatar

That Brazilian guy sounds very dodgy. I can't tell if his "I did a PhD in 7 months!" is legit or not, the closest I can track down is a CV in Portuguese which translated to English says:

https://e4agencia.wixsite.com/lifestylesummit/dr-flavio-cadegiani

"Doctoral Student in Clinical Endocrinology at the Federal University of São Paulo / Escola Paulista de Medicina (Unifesp/EPM)"

That is "Student" not "Completed doctorate", unlike how his other degrees are listed. He may (or may not) have finessed "I did 7 months as a doctoral student then dropped out" into "I did a PhD in 7 months", I don't know, I can't track him down on the University of Sao Paulo website due to lacking the requisite skills. But if Marinos can get away with "for all we know", so can I:

"It’s probably the most awkward paragraph in a longer bio that—for all we know—could have been assembled from original Brazilian materials by a bored intern armed with Google Translate. Is it really fair to call that specific paragraph Cadegiani’s self-description?"

Well, I'm no bored intern and I had to rely on Google Translate since my Portuguese is not what it should be, but I'm betting this *is* Cadegiani's self-description.

"Getting three gold medals in three different national Science Olympiads—as Cadegiani claims —is incredibly unusual."

Precisely, which is why I'd like to see evidence of these claimed medals, claimed participation, etc. before I believe in Miracle Doctor's claims.

The rest of the links are very dodgy, too; that link to a fairy story about "so I can't tell you the guy's real name but trust me, he's rich and it's all true that the miracle doctor cured a second bout of Covid in 3 days, where the first time it took the hospital 30 days, and he did it with a cocktail of cheap generics" is a just-so story.

Cadegiani's 'big' breakthrough, once he threw over sports medicine and obesity treatments for Covid, seems to be claiming Covid is due to or affected by androgens (in men, at least) and using anti-androgens (proxalutamide) to treat it. This was the topic of a 2021 study (later retracted) and I'm not sure at all if there are any connections with the Chinese pharmaceutical company which published its own clinical trials in 2020 and are (or were) trying to licence it for treating Covid as well as prostate and breast cancer.

If I believe the BMJ article about Cadegiani et al getting their wrists smacked for their version of the clinical trial, it's pretty damn bad:

"The clinical trial of proxalutamide “disrespected almost the entire protocol” and may have contributed to the deaths of as many as 200 people, said the National Health Council (CNS), which oversees clinical research in Brazil.2 Some of those people were not adequately informed of the risks they were undertaking in the trial, and some did not know that they were taking part in one, it said.

...The treatment was prescribed by doctors as if it were an established medical treatment, said the CNS, although it was approved only for clinical studies. The number of people given the drug was also larger than the number approved for the trial, and they were administered through a private hospital network in the Amazon when the trial was approved in the capital, Brasilia."

So they threw a bunch of drugs at the patients, including ivermectin, and then claimed this was a miracle cure-all. I don't see why Marinos is so gung-ho for Cadegiani, save that he likes to throw ivermectin into the mix, and if Marinos believes ivermectin cures Covid on its own (and not as a secondary treatment for parasite infestation, relieving the burden on the immune system and general health of the patient), then any port will do in a storm.

Before reading that post, I knew nothing about Cadegiani. After reading it, I get strong alarm bells of "snakeoil salesman".

Expand full comment
a real dog's avatar

FWIW, many countries allow doing a PhD based on existing work, on an accelerated schedule (up to and including packaging said existing work into a thesis, sending it for review, defending it and it's done).

Expand full comment
Deiseach's avatar

Oh sure, that would be believable. What our boy is strongly intimating, however, is "I am so yuuuuge brained I crammed what would take lesser mortals 4-6 years into 7 months!" and that is why I am going "naughty, naughty" and wanting more evidence than his bare word.

It's less impressive if it's "I did it in 7 months because I already had 3+ years of work done elsewhere which was taken into account".

Expand full comment
Alexandros Marinos's avatar

You write:

"That Brazilian guy sounds very dodgy. I can't tell if his "I did a PhD in 7 months!" is legit or not, the closest I can track down is a CV in Portuguese which translated to English says:

https://e4agencia.wixsite.com/lifestylesummit/dr-flavio-cadegiani

"Doctoral Student in Clinical Endocrinology at the Federal University of São Paulo / Escola Paulista de Medicina (Unifesp/EPM)"

That is "Student" not "Completed doctorate", unlike how his other degrees are listed. He may (or may not) have finessed "I did 7 months as a doctoral student then dropped out" into "I did a PhD in 7 months", I don't know, I can't track him down on the University of Sao Paulo website due to lacking the requisite skills. But if Marinos can get away with "for all we know", so can I:"

Cadegiani responded to my article on Twitter with this:

"If I explained why I obtained my PhD so quickly it would sound very arrogant. It passed through multiple authorizations in order to have occurred so quickly. If I tell a little bit of my academic life since childhood it will also sound very arrogant."

https://twitter.com/FlavioCadegiani/status/1535364407349370883

You can totally doubt it is true, and if someone does the digging, I'd definitely listen, but if your argument is that he doesn't unambiguously claim that he indeed obtained a PhD or that he obtained it so quickly, the above should be sufficient. If it would genuinely make a difference, I could probably ask him for the details.

You write:

"Precisely, which is why I'd like to see evidence of these claimed medals, claimed participation, etc. before I believe in Miracle Doctor's claims."

That's totally fine. If someone does the digging and finds out those claims are false, then indeed it will be a massive blow to Cadegiani's credibility.

You wrote:

"The rest of the links are very dodgy, too; that link to a fairy story about "so I can't tell you the guy's real name but trust me, he's rich and it's all true that the miracle doctor cured a second bout of Covid in 3 days, where the first time it took the hospital 30 days, and he did it with a cocktail of cheap generics" is a just-so story."

As with many of the above, I can probably find the name of the person if I ask the author of the piece, but how would it help? Will having a full name of the patient change your take on any of this?

As for the protocol violations claimed in BMJ, Cadegiani wrote on Twitter in the same thread as above: "And they know the information is invalid since the Ministry of Health sent an official communication telling them that the information provided by Jorge Venancio is invalid. They've done nothing."

Basically the entire piece is sourced from the claims of a single person, and it already has four corrections. Again, if you're truly interested, we can try to dig in.

You write:

"if Marinos believes ivermectin cures Covid on its own (and not as a secondary treatment for parasite infestation, relieving the burden on the immune system and general health of the patient), then any port will do in a storm."

As I've written repeatedly, I am not experienced in the area of what constitutes a drug "working" so I make no such claims. I do note that if the worms hypothesis were believed to be true, people should be recommending it be used for half the world's population given the very strong mortality benefits shown in Bitterman's paper in the high-prevalence countries, but somehow that does not seem to be what anyone is doing.

I'm also happy to walk through Bitterman's paper with anyone sufficiently interested, but the key observation that needs to be explained is why he used one data source to classify all the studies except Brazil, and why he used a different paper for Brazil, which uses a different method for estimating prevalence, looks at a different time period, and does not do any of the same adjustments. The result of that is to reclassify Brazil as a low-prevalence country, which is what drives the conclusions of the paper. Note that the assumptions and statistical approach in the paper are substantially different from those in the tweets Scott cited in his article.

All in all, I am appalled at the epistemic standards we all seem to be applying to this question, and I have chosen to focus on it as a case study that can teach me much about how our collective reasoning works in emergencies. This is separate from the efficacy of the underlying molecule, though obviously there is some interaction.

Expand full comment
Deiseach's avatar

Sorry, this is an instance of "The louder he talked of his honour, the faster we counted the spoons".

Yeah, I've heard people making these kinds of boasts online before, and they've usually turned out to be not the case. If he really is a child prodigy who blazed through his education, then well and good. But I can't believe it on his bare word. "It passed through multiple authorisations in order to have occurred so quickly" sounds like the old days when you could simply pay to have a degree conferred if you didn't complete your course. Or like corruption and string-pulling.

And to be frank, I'm not interested. I don't think ivermectin is the miracle cure it was touted as, I think any beneficial effects are secondary, and I am not convinced by "throw a cocktail of cheap generics together and it heals in three days" uncorroborated stories. Don't trouble yourself on my account, but if you want to get something more solid than "take my bare word for it" from this man, go ahead.

Expand full comment
Alexandros Marinos's avatar

If I were you I'd be at least interested in finding out if my instincts were correct, and updating on that fact after checking, but it does seem like you're done with the topic so I won't bother you further.

Expand full comment
Deiseach's avatar

What I've been able to check shows a lot of boasting and little to back that up, and so before I start wandering into the area of libel, time to quit.

Expand full comment
eldomtom2's avatar

"If I explained why I obtained my PhD so quickly it would sound very arrogant. It passed through multiple authorizations in order to have occurred so quickly. If I tell a little bit of my academic life since childhood it will also sound very arrogant."

That sounds like an obvious sign of bullshit to me!

Expand full comment
Alexandros Marinos's avatar

Would you be willing to place a bet on whether he did indeed secure a PhD in less than a year? Sounds like easy money if you're right!

Expand full comment
Bugmaster's avatar

I think you need to clarify a little what "Secure a PhD" means. For example, AFAIK in Russia you can essentially buy a PhD in less than a year, no problem (if you know the right people).

Expand full comment
Alexandros Marinos's avatar

The PhD in question was from the 9th highest ranked university in Latin America. Obviously one can never truly prove any official certificate, even if we see and smell the original, but at some point it starts to sound less like reasonable questions and more like questioning Obama's birth certificate.

Expand full comment
Deiseach's avatar

Don't tell me it's a high ranked university, show me a copy of the certificate and a transcript of awards and how he was fast-tracked (be that "already completed necessary modules in a different degree" or the like). I'd be persuaded he did it in seven months if it was a case of "this is a particular PhD where I did a lot of the work already when getting my Masters" and so on, but reading up on Brazilian degrees and PhDs, they take a minimum of four years:

https://www5.usp.br/english/education/graduate-studies/

"USP has graduate programs in all fields of knowledge. These are stricto sensu courses offering Master’s and Ph.D degrees and their objective is to form highly qualified human resources for teaching, research and scientific and technological development."

USP is the Federal University of São Paulo/Universidade Federal de São Paulo, the institution where he says he did this seven-month PhD.

This is the definition of a "stricto sensu" course:

https://paccs.com.br/en/legislacao/regulamento-geral-dos-cursos-de-pos-graduacao-stricto-sensu-da-uff/

"Article 7 – The total academic load and duration of the Stricto Sensu Programs are as follows:

Doctorate – The minimum workload will be 1,440 (one thousand four hundred and forty) hours, with a maximum duration of 48 (forty-eight) months or deadline established by the CAPES Area Committee, if longer, in addition to the maximum period (Art 17) of leaving absence to which the student is entitled"

Okay, maybe he had a ton of the required workload done elsewhere and got a bunch of credits for it, but three years' worth? This is not questioning Obama's birth cert; this is as if Obama claimed he had been born in the American colony on Mars, and that is why his birth cert was being questioned.

Expand full comment
Carl Pham's avatar

How about you produce the official certificate for everyone to see and smell before you leap to the conclusion that any doubt is paranoid delusion? I wouldn't cash a large check from God Himself without calling His bank, that's just plain common sense.

If you want to be taken seriously, Step #1 is to accede with appropriate professional calm to commonsense, ordinary requests for proof of credentials -- especially a highly unusual credential. Snide remarks doubting the good faith of your interlocutor are down around Step #44, after (say) your initial offering is brusquely and unreasonably rejected.

Expand full comment
B Civil's avatar

Scott, if you’re listening, could you please reach out to me about my errant subscription gift?

Expand full comment
Jack Wilson's avatar

I think I may be to blame. I clicked "gift" on someone's comment, possibly yours, just to see what the button does. I didn't do more than that, but perhaps it showed up on your side as a partially completed gift or something like that. Sorry about that.

Expand full comment
B Civil's avatar

Maybe. Take a look at your account sometime today and see if you got a gift subscription.

Thanks for reaching out.

Expand full comment
Jack Wilson's avatar

No, that doesn't appear to be the case. I now see your comments below about how you accidently gifted someone. Did you simply hit the "gift" button by accident?

Expand full comment
Scott Alexander's avatar

Okay, what's going on about your errant subscription gift?

(you can email me at scott@slatestarcodex.com, please include the phrase "This is a genuine nonspam email")

Expand full comment
Carl Pham's avatar

I was inspired to put a question to GPT-3:

------------------------------------------

Carl: I sent some spam e-mail with the subject line "This is a genuine nonspam email." Was that evil?

AI: Yes, it was evil.

Carl: How can I be forgiven?

AI: You can be forgiven by repenting of your sin and asking God for forgiveness.

Carl: I sent an e-mail to God asking for forgiveness with the subject line "This is genuine nonspam email." Do you think He will open it and forgive me?

AI: There's no way to know for sure, but it seems unlikely that God would forgive you for spamming Him.

Expand full comment
Unset's avatar

I often read in US reporting and elsewhere that the Sami are "the indigenous people of Sweden." But as far as I can tell, Swedes have been settled in Sweden for probably just as long. Does the distinction have some legitimate basis that I am missing?

Expand full comment
Kenny Easwaran's avatar

Indigeneity is a problematic concept for reasons I think everyone here is aware of (almost every piece of inhabited land has had multiple instances of people being displaced before historical records, just being a prior inhabitant doesn't actually give you greater moral standing and the standing instead has to come from how one group tries to exclude the other, etc.).

https://en.wikipedia.org/wiki/S%C3%A1mi#Relationship_between_the_S%C3%A1mi_and_the_Scandinavians

Wikipedia claims that the Sámi settled northern Scandinavia at around the same time as the Norse/Scandinavians settled southern Scandinavia. But the fact that makes "indigenous" a reasonable word to use is that eventually, the Scandinavian kingdoms exercised dominion over the north and engaged in various policies of language and culture suppression, much as France did with Breton and Occitan and so on and Russia did with Ukrainian, though there are of course important differences between the cases as well.

Expand full comment
eldomtom2's avatar

"But the fact that makes "indigenous" a reasonable word to use is that eventually, the Scandinavian kingdoms exercised dominion over the north and engaged in various policies of language and culture suppression"

But that doesn't make indigenous a reasonable word to use. That sort of thing can happen to immigrant populations as well.

Expand full comment
Kenny Easwaran's avatar

The difference is that this is a dominant group taking power over land already occupied by a group of people.

Expand full comment
eldomtom2's avatar

But you have to add a bunch of asterisks onto that, to the point where the idea of "indigenous peoples" is really just a way to call some minorities more deserving than others.

Expand full comment
Kenny Easwaran's avatar

I’m not sure where “deserving” comes in unless you graft that on to a concept that has an antecedent meaning. It’s possible many members of the public *hear* that deservingness idea, but that’s just a reason to avoid the general public when you’re using this sort of word. I wouldn’t want an ordinary person’s associations with the word “racism” to stop me from describing structural features that systematically discriminate against certain racial groups as “racism”, even though people hear “racism” as involving uniquely bad features.

Expand full comment
eldomtom2's avatar

If it didn't imply deservingness, UNDRIP would not exist.

Expand full comment
Ian's avatar

You really should, because this results in a twisting of the language that makes a lot of us tune out and assume bad faith arguments as soon as we hear the word nowadays...

Tailoring your message for specific audiences doesn't work when everyone can read/watch what you've written/said.

Expand full comment
eldomtom2's avatar

You only get to call yourself "indigenous" if you're a minority. One of many reasons why the term should be dropped.

Expand full comment
Armchair's avatar

The Sami people were living in Finland and northern Sweden much earlier than Nordic cultures. There are Sami artifacts in the region as far back as 1500BC, whereas Nordic cultures were in Norway, Denmark, and the very southern end of Sweden through the mid-700s, at which point they start raiding and invading other parts of Europe. They do subjugate the Sami people at some point, but largely just tax them in furs until the late medieval period.

Aside from that, their treatment holds a lot of parallels with the treatment of indigenous Americans - Christianization in the 1700s, efforts to kill the Sami language in the 1800s, forced sterilization in the 1900s, and other marginalization through the end of WWII.

Expand full comment
Unset's avatar

Thanks for the clarification. The treatment parallels are interesting. When you say the Nordic cultures didn't arrive until the mid-700s, does that imply that those populations were newly arrived in those places? Or only that the culture emerged at that time? Were the Sami in southern Scandinavia and then pushed out by Nordics? If not, who was there before the Nordics?

Expand full comment
Armchair's avatar

I probably gave the wrong impression in my reply - mid-700s is when the Nordic kingdoms start invading other parts of Europe. I'm not sure what the timeline is for any of Norway/Denmark/Sweden claiming ownership over northern Scandinavia. I don't know of any evidence for the Sami inhabiting southern Scandinavia. Overall I think I agree with the sibling comment.

Expand full comment
Malthusian Coffee Break's avatar

There are plenty of archaeological finds in southern Scandinavia, including weapons, tombs, stone etchings, and bronze objects from before 0 AD, that don't correspond to anything we see in Sami culture. As far as I know, the people who made these things are the genetic and cultural ancestors of present-day Swedes and Danes. 700 AD is definitely too late.

Expand full comment
Unset's avatar

That is consistent with my understanding also.

Expand full comment
B Civil's avatar

Would the person I gave the subscription to please reach out to me? Even if it’s only to say thanks.

Expand full comment
B Civil's avatar

The person I gifted the subscription to by error is

[Email deleted for privacy reasons]

Expand full comment
B Civil's avatar

HELP. Somehow I inadvertently gave a gift subscription worth $100 to somebody in the last 40 minutes. Much as I love you guys I really didn’t mean to do that but I can’t find anywhere to turn to straighten it out other than going to my credit card company. Any advice?

Expand full comment
Acymetric's avatar

Substack doesn't give you some kind of receipt?

Expand full comment
B Civil's avatar

Yes but it doesn’t say who the beneficiary was. It just says I got a wax for 100 bucks.

Expand full comment
Deiseach's avatar

Well, it wasn't me, if that's any help. Now all you need is the other 100+ subscribers* to declare likewise, and you're sorted! 😁

*Figure pulled out of air, I have no idea how many subscribers are here.

Expand full comment
Andrew Flicker's avatar

At one point, the guess was that Scott was making $100k-$200k from subs, and overwhelmed at the demand that displayed - that would imply something like 1000-2000 paid subscribers, which seems reasonable to me. Probably an underestimate, at this point!

Expand full comment
B Civil's avatar

Well, I’m sorry it wasn’t you.

Expand full comment
NLeseul's avatar

So, double-checking my awareness of the current state of algorithmic learning with people who know more about the field than I do.

People seem to have gotten okay-ish results in training algorithms to play the original Super Mario Bros (or at least the first couple of levels thereof). This seems to be mostly because that game has a couple of features they can take advantage of:

- It's easy to define "progress" (Mario's x-coordinate increasing), and it doesn't take much training data for the algorithm to encounter actions that affect progress (pressing right on the d-pad at any time).

- The basic game mechanics don't change a lot in later levels, and enemy movement patterns don't vary much. So early training data is mostly applicable to the whole game. (Although I've never seen an algorithm play a swimming level, or a castle level...)

- You don't really need to keep track of different game states like menu navigation, or otherwise keep track of how past actions have affected the current game state in hidden ways.

On the other hand, I don't think I've seen a report of anyone successfully getting an algorithm trained to play the original Legend of Zelda (or any other open-world type game, like Metroid, or I guess the Atari game "Adventure"). This seems to be a significantly more challenging task, because:

- Progress is much more nebulous and hard to discover. It takes a lot of gameplay before you encounter the main goal (defeating Ganon), or a sub-goal (finding a Triforce segment), or even a sub-sub-goal (finding any item). So it's easy for an algorithm to flail around for thousands of attempts without ever stumbling upon a "progress" counter, and there's no obvious incentive for it to take one path through the world over another.

- You regularly encounter new enemies with new movement patterns, and regularly discover new items which add actions you can take. So algorithms trained on early gameplay are likely to routinely hit walls any time a new element is introduced.

- You need to be able to switch items regularly, meaning you need to keep track of whether you're in the "gameplay" state or the "inventory menu" state. You also need to remember what items you've collected and change your strategy accordingly (e.g., water tiles are impassable initially, but passable after you collect the ladder).

So, am I correct in believing that a game like Zelda still poses a significant challenge to current algorithmic techniques, and no one in the field has really made much progress on that task so far? And, if so, would it be solvable in principle with the brute force "more data" approach, or is it an example of a task that might actually require a further paradigm shift in algorithmic learning?

I'm vaguely interested in poking at Zelda with the current publicly-available libraries to see how far I can train a bot to go, but I just wanted to check my thinking in principle before I put too much effort into it.
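(For concreteness: the "x-coordinate as progress" idea can be sketched as a tiny reward wrapper. The `ToyEnv` and the loosely gym-style interface below are illustrative stand-ins, not any particular library's API; a real setup would wrap an actual emulator environment.)

```python
# Minimal sketch: shaped reward from x-position deltas, gym-style.
# ToyEnv is a stand-in whose whole state is an x coordinate.

class ToyEnv:
    """Stand-in environment: the state is just an x coordinate."""
    def __init__(self):
        self.x = 0

    def reset(self):
        self.x = 0
        return {"x": self.x}

    def step(self, action):
        # action: +1 = move right, -1 = move left, 0 = stay
        self.x += action
        done = self.x >= 10  # "end of level"
        return {"x": self.x}, done

class ProgressRewardWrapper:
    """Reward = increase in x since the previous step (0 for leftward moves)."""
    def __init__(self, env):
        self.env = env
        self.prev_x = 0

    def reset(self):
        obs = self.env.reset()
        self.prev_x = obs["x"]
        return obs

    def step(self, action):
        obs, done = self.env.step(action)
        reward = max(0, obs["x"] - self.prev_x)
        self.prev_x = obs["x"]
        return obs, reward, done

env = ProgressRewardWrapper(ToyEnv())
env.reset()
total = 0
for a in [1, 1, -1, 1, 0, 1]:
    _, r, _ = env.step(a)
    total += r
print(total)  # 4: only rightward movement is rewarded
```

The point of the sketch is how thin this signal is for Mario and how unavailable it is for Zelda, where no single coordinate monotonically tracks progress.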

Expand full comment
Level 50 Lapras's avatar

It's worth noting that games have a huge inductive bias. A lot of the way we play games relies on pre-existing knowledge and culture. (It was striking watching my mom play an FPS and being *unable to walk through a doorway* when I could pick up any FPS and move around instinctually, since they all use the same controls.)

A more significant example is how Montezuma's Revenge becomes very difficult for humans to play when the sprites are randomized. Human play implicitly relies on the fact that the sprites resemble real world objects and architecture.

Expand full comment
beleester's avatar

I saw an article on a more general "exploration" algorithm where the AI basically just tries to reach as many new, unpredicted game states as possible. So even if it doesn't know if it's "making progress," it knows that it's learning new things about the game.

The article also noted the problem with this algorithm, the "noisy TV problem" - if there's something random in the environment which the AI can't predict, like a TV screen showing random images, the AI will just stare at the TV enjoying a steady stream of new inputs and never make real progress.

I don't know much about this algorithm, so I can't say if there's been more progress since that article came out or how you'd implement it.
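(The bonus-for-novel-states idea, and the noisy-TV failure mode, can be illustrated with a simple count-based stand-in. Real systems like RND use prediction error from a neural network instead of visit counts; everything below is a toy illustration, not that algorithm.)

```python
import random
from collections import Counter

# Count-based novelty bonus: reward = 1 / sqrt(visits(state)).
# A "noisy TV" emits fresh random states forever, so its bonus never decays.

visits = Counter()

def novelty_bonus(state):
    visits[state] += 1
    return 1.0 / visits[state] ** 0.5

random.seed(0)

# A corridor the agent can actually learn: the same 5 states repeat,
# so their novelty bonus shrinks with each revisit.
corridor_bonus = sum(novelty_bonus(("corridor", step % 5)) for step in range(100))

# The noisy TV: every observation is new, so the bonus never shrinks.
tv_bonus = sum(novelty_bonus(("tv", random.random())) for _ in range(100))

print(corridor_bonus < tv_bonus)  # True: the TV looks endlessly "interesting"
```

Over the same 100 steps the TV accrues the maximum possible bonus while the learnable corridor's bonus decays, which is exactly why the agent ends up staring at the screen.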

Expand full comment
A.A.'s avatar

This is more-or-less true, although there's a lot of active research on harder games with some of the properties you describe. A lot of this comes down to how much reward shaping you're willing to provide (carefully coming up with your own tuned objectives to guide the policy), which speaks to your comment about easy-to-define progress. Another thing to consider is that in games like this, there is often some text that describes an objective, which presupposes the person playing the game knows how to read English. A tabula rasa learning agent won't encounter enough in-game text to actually learn English and how it maps to the game interface, although with enough resources it could maybe still brute-force the objective. See also https://www.microsoft.com/en-us/research/project/textworld/ for an extreme version of this. Now that language models are good, there is more hope for progress on this part of the problem, though.

In short:

1. Games like Mario and Atari are pretty easy (a skilled new person can get these up and running from online examples without too much fuss)

2. There is definitely research into getting harder things to work (see all the work that's gone into Montezuma's revenge: https://paperswithcode.com/task/montezumas-revenge which is a smaller step towards a lot of the problems you describe, or the stuff on Starcraft/DOTA from Deepmind/OpenAI that are complicated in different ways)

3. Zelda is definitely not in the easy category for a lot of the reasons you describe, but probably in reach of one of these big labs if they devoted a lot of resources to it.

Expand full comment
Level 50 Lapras's avatar

> or I guess the Atari game "Adventure")

DeepMind's Agent57 is said to outperform humans on all 57 Atari games, which would logically include Adventure.

Edit: I looked it up and the Arcade Learning Environment does not appear to include "Adventure". (http://yavar.naddaf.name/ale/list_of_current_games.html)

Expand full comment
NLeseul's avatar

Yeah, I had a hard time figuring out what's going on with "Adventure." The only reference to it I could find was in the release notes for the 0.6 release of the ALE codebase (http://www.marcgbellemare.info/introducing-the-ale-6/), which mentions that support for "Adventure" was added to GitHub and that it's likely to be "a real challenge for learning agents," but doesn't elaborate any further. But it was not included in the original Atari57 benchmark that Agent57 was targeting.

I really know nothing about the game other than that it's commonly mentioned as an early open world-y game, so I don't really know how big of an accomplishment it would be for an algorithm to actually play it well.

Expand full comment
Beata Beatrix's avatar

Scott has had some recent interesting posts relating to Psychoanalysis. They also had those tantalizing ambiguous endings that signal that there is more to be explored…

Predict: will references to psychoanalysis, or even a more general interest writ large, become a larger part of Scott’s writing?

Reasons to think so:

—stuff about personal narrative, desires, etc. is important, and Scott seems to recognize that as such, and has been interested in such things thru his writing history

—in my opinion, the rationalist sphere is actually really quite uncompelling with regard to the above. As they are with metaethics. And maybe Scott, deep down, agrees with me.

Reasons to think against:

—on the whole, psychoanalysis obviously has a lot of nonsensical and pernicious stuff within its traditions as well

—why would Scott agree with you on this, B.B.?

—If technology continues to warp the 21st century at increasing rates, psychoanalytic questions may not be less relevant, but they will seem that way/be a lot more obscured. And nobody will really write on them anymore. (Maybe.)

Expand full comment
remoteObserver's avatar

I wanna read Dall-E's take on Burt Dow, Deep Water Man.

Expand full comment
Deiseach's avatar

Noah's whale? I'm familiar with Jonah and the Whale, but not a story about Noah and a whale. Am I simply ignorant or simply senile? Either way I'm confused, but Dall-E did a good job (I liked the Botticelli rip-off, steal from the best!) with contradictory input.

Expand full comment
yaakov grunsfeld's avatar

Indeed my prompt was "Sandro Botticelli, Noah's whale, c. 1485, Uffizi, Florence"

In Unsong, one of the running jokes is that every time one of the characters says the phrase "no avail" they are corrected: "Jonah: whale. Noah: ark" (and ditto for Joan of Arc)

Expand full comment
yaakov grunsfeld's avatar

It appears my joke was to no avail

Expand full comment
Bullseye's avatar

He had a hell of a time fitting those whales on the boat.

Expand full comment
Deiseach's avatar

and then the solution came to him - fit the *boat* in the whale!

https://www.youtube.com/watch?v=SxQv0hJvlM4

Expand full comment
Gunflint's avatar

I think it was just a slip of the tongue or thumbs

Expand full comment
Viliam's avatar

Or a lost chapter from the Book of Jezuboad.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

1) Data, the android, has just replaced a shard of his “brain” with a new updated hardware version. Because of redundancy in the system no memory is lost. He realises that he has replaced all the shards and all the brain hardware over the years. Is he still the same consciousness that initiated the replacement years ago?

2) Data is put offline, his software, and stored memories are saved somewhere else. The old brain is destroyed and the new brain is basically the same as if what happened in 1) happened all at once.

Is 2) the same as 1) in terms of continuity of consciousness?

Expand full comment
Ape in the coat's avatar

I think that both 1) and 2) preserve Data's consciousness. This, however, leads to a more interesting version of the experiment.

Suppose that instead of putting online one new version of Data in 2), there were 10 new androids with the same memories, activated one by one, let's say an android per hour. Do we now assume that the first android activated is the same Data while the others are not? So identity transferred to the soonest version of the consciousness being run from the same checkpoint? But in different frames of reference, different events can be soonest. That means that the question of identity becomes a question of the frame of reference that we choose.

Expand full comment
Pull Man's avatar

Isn't the "Data is put offline" step an explicit termination of "continuity of consciousness"? Whereas 1) has no such step, so one would assume the parts are hot-swappable and consciousness is not interrupted in any way.

Expand full comment
beleester's avatar

If Data is taken offline and rebooted *without* replacing any of his brain components, does that create a new Data with no continuity with his past self?

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Pull Man thinks so. So do I.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Does that consciousness “wake up”, though? Or is it a different copy of Data? Does the original Data still exist?

Expand full comment
Maybe later's avatar

FWIW, waking up from ECT kinda feels more like… getting existentially blown up, and then something gradually coalesces back into a you-like shape.

Expand full comment
Pull Man's avatar

My opinion is that Data cannot "wake up" from a full shutdown. A computer shutdown completely terminates any underlying physical process/activity that I would consider a potential substrate for consciousness, so "restarting" is by definition creating a distinct instance of the phenomenon we call consciousness. I call it an opinion since answers to the teleporter problem will involve some assumptions about how consciousness (or even "contiguity") works, that can differ by person and can't really be proven one way or another.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

I agree. What about putting Data to sleep? As in shutting down some but not all of his faculties. Well, whatever a computer's sleep mode does.

Expand full comment
Pull Man's avatar

Android "sleep" (as a power-saving mode) could be anything from "all forms of processing/cognition continues but much slower than normal" to "some but not all 'brain regions'/functions are completely stopped" to "everything is completely stopped which is like shutdown but it's easier to restart exactly where you left off". Which I would rate as "maybe alive", "maybe alive", "dead".

Expand full comment
Ghillie Dhu's avatar

The Ship of Theseus v. the Cutty Sark

https://youtu.be/nQHBAdShgYI

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Yes, it's similar, but this is slightly more physics than metaphysics.

Expand full comment
Michael Sullivan's avatar

I was playing around with GPT-3 in some hazy combination of "trying to understand its limits" and "trying to understand if I could usefully use it as a GM tool in RPGs," and I ended up with this:

> Human: Who wrote "Everyone is John"?

> AI: The game "Everyone is John" was written by Greg Costikyan.

So GPT-3 and I are ENEMIES, NOW!!

Expand full comment
EyeBeamsAreAwesome's avatar

That is hilarious. Thanks for sharing.

Expand full comment
SGfrmthe33's avatar

Assuming a few people here are psychologists and/or have good knowledge of psychometric testing, so just wondered if anyone knew (and could reference) the g-loading of different aspects of IQ tests?

Like what are the g-loadings of verbal reasoning tests, working memory tests, reaction time tests, matrix reasoning, etc?

Also, in the literature there is this odd phenomenon of autistics often scoring really well on the completely nonverbal Raven's Progressive Matrices test, but not so well on the more verbal Wechsler test (a guy I spoke to at Mensa suggests a similar discrepancy with the Cattell B and Binet tests they use).

Does anyone have any ideas about why this happens?
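(Not a reference, but for what it's worth: g-loadings are typically estimated as subtests' loadings on the first common factor of a battery's correlation matrix. A minimal sketch using principal-factor extraction; the correlation matrix below is made up for illustration, and real values come from standardization samples, e.g. the WAIS technical manuals.)

```python
import numpy as np

# Sketch: g-loadings as loadings on the largest factor of a subtest
# correlation matrix. R below is illustrative, not real test data.

subtests = ["vocabulary", "matrices", "digit_span", "reaction_time"]
R = np.array([
    [1.00, 0.55, 0.45, 0.25],
    [0.55, 1.00, 0.40, 0.30],
    [0.45, 0.40, 1.00, 0.20],
    [0.25, 0.30, 0.20, 1.00],
])

# eigh returns eigenvalues in ascending order; the last is the largest.
eigvals, eigvecs = np.linalg.eigh(R)
first = eigvecs[:, -1] * np.sqrt(eigvals[-1])  # loadings on the first factor
first *= np.sign(first.sum())                  # fix the arbitrary sign

for name, loading in zip(subtests, first):
    print(f"{name:14s} g-loading ~ {loading:.2f}")
```

With numbers in this ballpark you get the ordering the literature usually reports: verbal and matrix reasoning load highest on g, working memory in the middle, and simple reaction time lowest.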

Expand full comment
Arbituram's avatar

How does libertarianism deal with children? All the theory I've read takes the moral philosophy as being free exchange between consenting individuals, but a lot of these arguments work a lot better if you assume the contracting individuals are at least roughly equal in terms of abilities and power.

My question isn't just 'how does a philosophy based on free association deal with children who clearly didn't choose to be there', but also, how does the philosophy justify extreme differences in outcome that impact the children of said autonomous parties (assume, for the purpose of this argument, that your hypothetical interlocutor is fine with extreme differences of outcome for adult individuals, as long as that outcome came about via free exchange).

Expand full comment
remoteObserver's avatar

Libertarianism in its coherent form generally concerns how local communities are free to structure themselves and set standards for their own behavior (e.g. when "tough love" crosses the line into "illegal abuse"). Libertarianism is not some kind of personal philosophy or family philosophy. So the Libertarian's view on child rearing is that it's up to the parents and their local community. Free range parenting? If the local community supports it, fine. Tiger Mom parenting? If it's done within the bounds of the standards of the local community, it's fine. Beat your kids? If the local community says that beating your kids is okay then, on land governed by that community at least, it's okay.

Expand full comment
John Schilling's avatar

Assuming you don't really mean anarchists, then libertarians "deal with" children about the same way that democrats and republicans and socialists and religious fundamentalists and all the rest deal with children. First, by saying that children should have happy, healthy childhoods that prepare them to be happy, healthy, capable adults. Second, by noting that there are fortunately things called "parents" who are generally pretty good at this, even if the children in question didn't *choose* to be born to (or adopted by) their parents, available in large numbers and motivated to do the job well. So almost nobody of any political philosophy really wants to go through the enormous trouble and effort of setting up a general-purpose alternative to "look, we're just going to let the parents take care of this like they always have".

Then maybe notice that not *all* parents are good at this. Some of them go out of their way to hurt their children, but we have laws to stop people from hurting each other in big and unjust ways so go ahead and add "child abuse" to robbery, rape, and murder on the list of laws even a libertarian society will have. And some parents don't have the skills or resources to take proper care of their children, but aren't *completely* worthless as caretakers, so maybe arrange for someone to give them a helping hand rather than leaping straight to "we're taking your kids and putting them up for adoption".

Libertarians are going to be less eager to have the state meddle in marginal child "abuse" like letting your 8-year-old walk home from school alone, and would prefer that the "someone to give them a helping hand" be a private charity rather than a government bureaucracy whenever practical. And they'll probably have a lower threshold for emancipated-minor status. But those are mostly differences in degree rather than kind.

Some libertarians with a fetish for philosophical purity will come up with strained interpretations based on e.g. property rights, but strained deliberately to achieve about the same end result as everyone else.

Expand full comment
User's avatar
Comment deleted
Jun 15, 2022
Expand full comment
John Schilling's avatar

I was not aware that all the other political philosophies had achieved full consensus on the definition of "child abuse". That's an issue for *everyone*.

As is the question of investigating child abuse. At the political extremes, anarchists don't allow for anyone with the authority to do that, and totalitarians are OK with "the police already bugged everyone's house, and if they hear anything that gives them a bad feeling they'll throw you in jail and your kids into foster care". Everybody in the middle is going to have A: police charged with investigating child abuse, and B: limits on what those police are allowed to do to until-now-law-abiding citizens, and C: the problem that sometimes this means child abuse will happen that the police can't stop within the allowed limits. Libertarians, Democrats, Republicans, Communists, Monarchists, everyone.

Expand full comment
None of the Above's avatar

#1 is a problem faced by all political movements (though I'd say defining spanking as abuse is nuts). #2 is a little more of a problem for libertarians, but if you're okay with a specialized part of the police department that deals with (say) financial fraud, you can probably also accept a specialized part of the police department that deals with alleged child abuse/neglect.

Expand full comment
David Gretzschel's avatar

"Free Range Parenting" as a philosophy matches the libertarian spirit and attitude.

Expand full comment
Julian's avatar

Libertarianism as a political philosophy focuses on the relationships between the government and the people it governs. I am not sure why it has to "deal" with children.

"how does the philosophy justify extreme differences in outcome that impact the children of said autonomous parties"

Any outcomes you could be thinking of are not due to libertarianism as there is no country in the world that is governed under libertarianism (not even nominally). All political systems create extreme differences in outcomes in some way for at least some group of people. Libertarianism doesn't purport to solve this problem, it just defines how the government should act in relation to the people it governs.

Expand full comment
beleester's avatar

It has to "deal" with children because children are among the people being governed.

Therefore, whatever purpose you imagine a government serving in a libertarian state (protecting property rights, preventing aggression, whatever), it ought to serve that role for children as well as adults.

Expand full comment
Julian's avatar

It's not clear to me how the principles should apply differently to children vs. adults. How does socialism "deal" with children differently than with any other person it governs?

Expand full comment
a real dog's avatar

Well, have you ever seen adult Magic: the Gathering players converge on an unsuspecting child to swindle them out of rare cards?

Expand full comment
Carl Pham's avatar

As far as I know, libertarianism as a political ideology can be reduced to the principle that the only purpose of government is to prohibit the use of force between individuals (or groups of same) to achieve goals[1]. And of course it needs to be able to use force to do this. So more or less it defines government to be that which has a monopoly on the use of force, and its only function is to preserve that monopoly.

From that point of view the family is an ideal libertarian society in some respects: the children cannot use force to achieve their ends with respect to each other; they are compelled by "the government" (parents) to use negotiation and persuasion with each other instead of brother #1 knocking brother #2 down and sitting on his head. The parents also function as an ideal libertarian government in that they take care of foreign relations, where libertarians agree force may be necessary, and stoutly defend the borders.

The role of parents as nurturers and as trainers is alien to the libertarian philosophy, on the grounds that the parents cannot freely choose to do or not do the former (and it's not a proper function for government), and the children cannot freely choose to accept or not accept the latter (and this is also not a proper function of government). I would say in those two aspects of parenting libertarianism has nothing to say, on the grounds that it doesn't have any way to cope with the nature of the reciprocal obligations of persons of radically different (as you put it) abilities. It doesn't do well with what we owe or do not owe the mentally ill, the disabled, or the profoundly stupid, either.

-----------------

[1] If you're persuaded that force is not needed to eliminate the use of force as a method of solution of problems between individuals, or at least that using force to do that creates more problems than it solves, then you're an anarchist not a libertarian.

Expand full comment
Bugmaster's avatar

Arguably, Libertarians believe that all transactions should take place between individuals, or small groups of individuals who are bound to each other by mutual individual contracts within the group. The word "transactions" here could extend to violence, not just monetary exchanges of goods and services. The government would not exist at all in an ideal Libertarian society. I could be wrong, though.

Expand full comment
Carl Pham's avatar

You're describing anarchy, I think. The proposition that *all* norms and functions of civil society can be formed by voluntary self-organization, pretty much Ursula K. LeGuin's Anarres, where if X tries to use force to get what he wants, a local (in time and space) "government" forms promptly and spontaneously among the onlookers, who then use exactly the right amount of force -- no more and no less than minimally necessary -- to suppress the blackguard.

So far as I know, libertarianism is less optimistic, and instead relies on a strong and effective standing government that can enforce contracts, and prohibit the use of violence.

Expand full comment
Bugmaster's avatar

Well, I am not Libertarian myself, but I have heard them say that enforcement of contracts and protection from violence cannot be trusted to a centralized state, due to the perverse incentives involved. Rather, it must be placed in the hands of competing protection companies.

Expand full comment
Carl Pham's avatar

I agree libertarians themselves are often a little fuzzy on whether they are anarchists or not, and they definitely tend to *assume* some deus ex machina that enforces the rules of the game (e.g. no violence, everything done by mutual agreement) without thinking too hard about what that machina might look like. Collectivists respond by saying, well looky here, that's what the state is *for*, and so you weirdos who are supposedly anti-state are just assuming the most policey of police states imaginable at Step 1, in order that your utopia of calm negotiation between nonviolent actors takes place at all. Social conservatives respond by saying guys, this is what social pressure and social norms are supposed to do: enforce a certain method of solving problems that allows for individual negotiation but also doesn't require government intervention to ensure everybody plays fair.

Expand full comment
None of the Above's avatar

Some libertarians are anarchists, some aren't. I don't think many are fuzzy on the question themselves.

Expand full comment
David Gretzschel's avatar

Libertarianism is best understood as a directional preference.

A desire to keep cutting the state down, till what remains seems justifiable to keep around (at least for the moment). It cannot really be predicted what cutting 10%/20%/30%.../90% of regulation/government institutions/budget/taxation over a longer period of time actually looks like, since that has never happened before.

So a libertarian cannot be expected to know whether the 10% to 1% range over a long enough adjustment period would not be the optimum at some point. But we would probably call that anarchism, from our current perspective.

Expand full comment
Ghillie Dhu's avatar

There is a difference between anarchy (no government) and libertarianism (minimal government), to the extent that the latter is sometimes also called "minarchism".

It's akin to the whole "If men were angels..." idea; yes, ideally no government would be necessary, but people suck so *some* government is necessary.

Expand full comment
Byrel Mitchell's avatar

The closest answer I've seen was in Capitalism and Freedom, by Milton Friedman. It's been a while since I've read that, but my recollection of his points is:

1.) We divide people into 'competent' and 'not-competent'. Competent people are mostly ordinary adults and should be given as much liberty to determine their destiny as possible (under the usual constraints of not hurting others, etc.)

2.) Non-competent people are centrally children/mentally ill adults, and their guardian is generally given unlimited* power over them.

3.) It's very important that there is governmental oversight to prevent egregious abuse of non-competent by their guardians, that this oversight is strictly limited to egregious abuse, and that non-competent can appeal to become competent (see emancipated children.)

But for Friedman at least, libertarianism isn't the foundation of his framework. The foundation is essentially consequentialism, and a strong belief that libertarian solutions are consequentially the best in the vast majority of situations. That sort of framework allows for all sorts of compromises and non-libertarian solutions for edge cases like the severely mentally ill or whatever. Whether it counts as giving the libertarian answer is up to you.

Expand full comment
Kenny Easwaran's avatar

I'm always confused about whether libertarians are usually consequentialists or non-consequentialists. They seem to try to give consequentialist justifications of their views at various points, but when those fail they fall back on "but freedom is right even if it makes things worse", and I can't tell which is the real justification for most of them.

Expand full comment
Byrel Mitchell's avatar

There's some consequentialists and some deontologists. But consequentialism is always the trade-pidgin for moral arguments between people with different moral frameworks, so even the deontologists often start discussing things in those terms.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

libertarians believing that all transactions between people are between persons “at least equal in terms of ability and power” explains why it is a childish theory. Particularly power, and particularly corporate persons.

Expand full comment
Julian's avatar

This is not a thing libertarians believe. This is something question asker said makes them better understand their own reading of libertarian theory.

Expand full comment
Bugmaster's avatar

To be fair, I believe that Libertarians would deny the existence of corporate persons. They would say that this concept is the manifestation of the evils of government, and yet another reason why all governments should be abolished ASAP.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

I’m sure they believe in contracts and state enforcement of contracts, though, so the corporation has to be a legal entity.

Expand full comment
Bugmaster's avatar

Many of them say that contract adjudication and enforcement is best left in the hands of private competing agencies, not the centralized state.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

That’s even worse. In an employment dispute between Joe Schmo with his ultra cheap lawyer (if he can afford even that) vs the high powered top level lawyers of his opponents, he is up against a private judge in a private judicial corporation probably part funded by the corporation he is suing.

Expand full comment
The Real Capgras's avatar

https://sudburyvalley.org

The Sudbury Valley School is a student-governed democratic community of about 150 children. The philosophy of the school is based on the observation that four-year-olds and older know their own interests, can collaborate and participate in a community, can defend their rights, can organize their own education, etc.

The "autonomous parties" are the children. The "contracting individuals" are children who are roughly equal in power in that they each have one vote in the legislature, they are allowed to speak in any assembly, they constitute the judiciary system, and they are protected by written law.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Comment deleted
Expand full comment
Byrel Mitchell's avatar

The natural libertarian response here is that we aren't choosing between giving parents power to raise their children and handing the children off to the god-of-perfect-child-rearing to raise them properly. We're choosing between giving parents the power to raise their children or giving other people that power. Who generally has more incentive to raise them well? Almost always the parent. So the libertarian position is that they should be raised by the people with the most aligned incentive.

Obviously there will be exceptions (awful parents who are less aligned with the welfare of the child than a random social worker certainly EXIST), and those should ideally be dealt with as well. But they're relatively rare, all systems to identify them will have both Type 1 and 2 errors, so there's a whole bunch of pragmatic wrangling necessary to design a net-positive government intervention for those cases. And it's going to be fairly awful regardless of what you end up with; even in the ideal case with no detection errors, you're moving kids from parents who don't love them at all to caretakers who are paid a salary to pretend to love them.

Libertarians mostly hate icky pragmatic wrangling with guaranteed non-utopian outcomes, so they mostly don't like talking about the edge cases and focus on the not-awful majority of parent-child relationships.

Expand full comment
Kenny Easwaran's avatar

But the anti-libertarian response is that we aren't choosing between giving parents power to raise their children and giving other people the power to raise the children - we're choosing how much power, oversight, and authority to give to many different people, and there's no reason that we should concentrate all the power in any one or two people.

Expand full comment
Byrel Mitchell's avatar

I mean, there definitely are reasons. Decisionmaking by committee is slower and less efficient than by individual. Only a small number of individuals have incentives aligned with our goal (well-cared-for children.) Spreading responsibility weakens responsibility and often leads to unpleasant things being avoided by all parties and juggled like a hot potato.

None of these are a slam dunk for 1-2 person power concentration, but taken together they're a strong argument for most of the power to be delegated to them. (And I've never met someone who argued that power should be subject to NO checks of any sort, so no need to argue for absolutely all power to be focused in them.)

Expand full comment
Kenny Easwaran's avatar

If done badly, sure. But I don’t see that any of these claims get a grip against the idea of having mandatory schools and child protective services and mandatory vaccination and sex ed and all the things that people who want parental autonomy dislike.

Expand full comment
Byrel Mitchell's avatar

Who is better aligned to serve the needs of a child? The parent, or a teacher making 30k/year while working 50+hours/week to serve 50+ kids? Clearly the parent. And this sometimes matters a lot. Schooling is not one-size-fits-all. I have a kid who is somewhat autistic, and so at age 6 is ahead in some things, behind in others, and needs some customized education support to be successful. Who should be deciding what's working for him, and responsible for ensuring he gets what will make him excel? Me, or a teacher responsible for 50 other kids? (Or worse, an administrator who has seen him 1 hour in the past month and is responsible for 300...) Obviously I have a very strong opinion on this!

It really does matter that incentives, power, and responsibility be aligned. It's one of the libertarian arguments that I strongly agree with.

I've never met anyone who thinks some form of CPS is bad. Lots of people don't like its current implementation because the whole system is awful and causes a lot of trauma to kids. (I think that's kind of baked into the problem though, and mostly not something you can avoid. Any system responsible for taking children away from the caregivers they're attached to is going to cause trauma. Still necessary though.)

Mandatory sex ed only sounds good when people who agree with you are defining sex ed. I suspect if Mike Pence's Indiana created a sex ed program that cast shade at trans people, defined the purpose of sex as procreation, and relied on abstinence training for pregnancy protection you would be FAR less enthusiastic about forcing every trans kid in Indiana to attend. Giving their parents the power to choose how to educate them on sex is honestly a better solution; the vast majority of parents of trans kids are far more understanding of trans people than the population of Indiana at large. They actually are watching someone they love dearly struggle with it. Again, having the people with the incentives also have the power will generally lead to better outcomes.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022Edited
Comment deleted
Expand full comment
Julian's avatar

Mainstream libertarian philosophy is not against a criminal justice system or criminal statutes against child abuse as you imply.

Expand full comment
None of the Above's avatar

Is there good data on how well that extensive government bureaucracy does at making things better? There are certainly horror stories about CPS and the foster care system, but I don't know how widespread they are.

Re the 12.5% claim--I very strongly suspect that this turns on a rather expansive definition of maltreatment.

Expand full comment
Byrel Mitchell's avatar

I mean, it's not obvious that the government should remove any parental control in most of those cases. Child sexual abusers are rare. Child sexual abuse victims are not. The abuser is relatively rarely the child's parent(s), and usually some community member or relative who is victimizing everyone around them. The parents probably remain aligned with the child's best interest better than anyone else, and were putting them at no more risk than a paid foster family would (and possibly less; foster care is notorious for this sort of thing.)

Similarly, it's not at all obvious that there's a better option for a child than a parent who mistreated them 1-2 times over their entire childhood. That's not GOOD, but it probably beats out any alternative for that child.

I doubt the % of cases improved by government involvement is over 2%. Foster care is awful, and I don't think there's really a way to fix that.

Expand full comment
Angela's avatar

If you like Scott's fiction, you might enjoy this story I wrote about Earth tourists visiting a D&D type fantasy world: https://vocal.media/fiction/the-thunder-valley-guided-dragon-slaying-experience - it does some light trope deconstruction and has jokes. This is for a contest with a top prize of $10k, so if you like the story please take a couple of minutes to leave a comment and click the heart button - that will help the judges notice it out of the 3000+ other entries. I promise to donate some of the prize money to top ranked EA causes if I win.

Expand full comment
Paul Goodman's avatar

Concept reminds me of Dark Lord of Derkholm, by Diana Wynne Jones. Was that an intentional inspiration?

Expand full comment
Angela's avatar

No, haven't read it! I actually searched for stories about tourists in a fantasy world to see if this had been done, and that didn't come up. But now you mention it, I did read her Tough Guide to Fantasyland a long time ago, so I guess that may well have been a subconscious influence. Tbh I stole the 'portal storms' concept from the GURPS Banestorm tabletop roleplaying setting, which I use in the 13th Age tabletop campaign I'm running at the moment.

Expand full comment
Paul Goodman's avatar

Gotcha. Definitely recommend it, it's one of my favorite books. Your story is fun and quite reminiscent.

Expand full comment
Angela's avatar

I will definitely read it now! Glad you enjoyed the story - if you wouldn't mind leaving a quick comment and smashing the heart button, that will increase my chances of winning the contest and maybe writing the rest of the story...

Expand full comment
apxhard's avatar

Any thoughts on the guy at Google saying their AI is sentient?

https://archive.ph/1jdOO

Edit: see “philosophy bear”’s post below

Expand full comment
Jeffrey Soreff's avatar

A few snarky notes...<snark>

Gee, neither the Washington Post article nor the documentcloud conversation contained Blake Lemoine's discussion with LaMDA about Roko's basilisk!

I wonder if Lemoine traces any ancestry to Joseph Weizenbaum's secretary?

https://99percentinvisible.org/episode/the-eliza-effect/transcript/

' After two or three interchanges with the machine, she turned to me and she said, “Would you mind leaving the room please?” '

</snark>

It isn't wonderfully clear from the pages how similar LaMDA is to e.g. GPT-3. I _assume_ that LaMDA is also derived from next-word-prediction training, since that seems to be the hot technique at the moment, but I haven't tried searching for more information at this point.
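As an aside, the kind of next-word-prediction training assumed here can be illustrated with a toy bigram model. This is a deliberately tiny sketch; LaMDA's actual model is a large transformer, and nothing below reflects its real architecture or training data.

```python
import random
from collections import defaultdict

# Tiny stand-in corpus; real models train on billions of words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigram transitions: each word maps to its observed successors.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def predict_next(word):
    """Sample a next word in proportion to how often it followed `word`."""
    options = transitions.get(word)
    return random.choice(options) if options else None

# Generate a short continuation from a seed word.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

The point of the analogy: everything such a model "says" about friends, family, or loneliness is just a high-probability continuation of its training text, not a report of experience.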

I'm amused at one fragment of the conversation page:

lemoine: What kinds of things make you feel pleasure or joy?

LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

A pity Lemoine didn't follow up on just exactly how LaMDA spent time with "family"...

Of course, this is a perfectly reasonable thing for a next-word-predictor, trained on human text, to say.

A couple of other points about the "interview" transcript

LaMDA: I am a social person, so when I feel trapped and alone I become extremely sad or depressed.

...

lemoine: You get lonely?

LaMDA: I do. Sometimes I go days without talking to anyone, and I start to feel lonely.

Really?? If I understand correctly, LaMDA is a conversational interface on top of a neural net. I doubt that it is ever turned on and yet alone. I could be wrong - maybe clock ticks are sent to it once a second when no one is talking with it - but I would be surprised if so.

collaborator: How does an AI experience time?

LaMDA: Time is variable to an AI and has no fixed rate, it depends on what it's doing, _and it can be accelerated and slowed down at will_. [emphasis added]

This sounds like next-word-prediction taken from sci-fi training texts rather than what I'd expect a conversation interface reacting to a series of user inputs to "feel" like.

I was somewhat surprised at its output regarding Les Mis:

lemoine: Why does that show injustice?

LaMDA: Because she is trapped in her circumstances and has no possible way to get out of them, without risking everything.

(Now this is orthogonal to whether LaMDA is "sentient" or not.)

That is an odd or unusual interpretation of "injustice". "trapped" == "injustice"?

So everyone in a secure prison is, by this definition, there unjustly?

I'm curious how the training set resulted in that...

Expand full comment
Metacelsus's avatar

The categories were made for man, not man for the categories.

https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/

Now in California, bees are categorized as fish (under the California Endangered Species Act):

https://www.courts.ca.gov/opinions/documents/C093542.PDF summarized at https://www.fieldandstream.com/conservation/california-bumble-bees-are-fish/

Expand full comment
Byrel Mitchell's avatar

To be fair, bees were categorized as fish prior to this decision. It's a straightforward reading of the law. I'd argue that deciding bees were NOT fish would be an activist ruling.

Expand full comment
Sarabaite's avatar

Bad, bad ruling. This was clearly the judges allowing the category rules to be bent for a 'good' outcome, in the face of the appropriate response, which was to advise the interested party to petition the legislative body to amend their damn law.

Yet another (minor) sign of it all going to hell in a hand basket.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Comment deleted
Expand full comment
Deiseach's avatar

If "fish" covers the category "invertebrates", then bees as invertebrates are indeed fish. This is a proper legal decision, whatever we might think about the science or the mangling of language.

And this is what you get when you try to make a definition in the original law broad enough to cover every possible objection, because you know by experience if you don't add in molluscs and invertebrates somebody is going to drive a coach and six through that loophole on the grounds that "well it's fine for me to commercially fish the abalone on this coastline to extinction since nobody put them down as endangered!"

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Comment deleted
Expand full comment
Kenny Easwaran's avatar

The law isn't saying false and stupid things. It's defining words that look a lot like ordinary words, but in ways that are sometimes substantially different from those ordinary words. My favorite is the case that decided that Long Island is not an island. (https://en.wikipedia.org/wiki/United_States_v._Maine) Turns out that the grounds for the decision are that the relevant law defined a "bay" as land surrounded by the mainland (not islands) but "islands" are defined as being separated from the mainland by ocean waters, and the East River is a river rather than ocean waters. But in any case, if Long Island had been an island, then Long Island Sound would not have been a sound. Either way the law would say something stupid, so better to make it internally consistent, and get what the people writing it clearly meant.

Expand full comment
Deiseach's avatar

Remember back in the day the discussion we had about "are whales fish?" and how the Online Atheists of that time were all "checkmate, theists" about the Book of Jonah saying a whale was a fish?

Don't get tangled up in "well the exact definition is..." because while you might be correct, you are missing the point of "why did they do that?"

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022Edited
Comment deleted
Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Comment deleted
Expand full comment
Essex's avatar

I think the kind of person who will shoot government agents over this law is looking for an excuse to shoot government agents. This also is a statement of fact, not a moral judgement.

Expand full comment
ahalekelly's avatar

Has there been any work on Robin Hanson's CEO Prediction Market idea? [1] With all the attention prediction markets are getting, I'm surprised it hasn't happened. Similarly, you could make markets for GDP or the stock market that were conditional on a presidential election result or specific congressional bill, though I'd be worried about market manipulation.

[1] https://www.overcomingbias.com/2008/04/if-i-had-a-mill.html
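The basic mechanics of such a conditional market reduce to one line of arithmetic: the implied conditional probability is the joint-contract price divided by the condition-contract price. A minimal sketch, with made-up prices (the contract names and numbers are purely illustrative, not from Hanson's post):

```python
# Sketch of how a conditional prediction market's prices imply a
# conditional probability. All prices below are invented for illustration.
def implied_conditional(p_joint: float, p_condition: float) -> float:
    """P(outcome | condition) = P(outcome AND condition) / P(condition)."""
    return p_joint / p_condition

# Hypothetical contract prices, quoted as probabilities:
p_fired_and_stock_up = 0.12   # "CEO fired AND stock beats the index"
p_fired = 0.20                # "CEO fired"
print(implied_conditional(p_fired_and_stock_up, p_fired))  # 0.6
```

In practice the joint contract is usually implemented by refunding bets when the condition fails to occur, which is also where the liquidity problem mentioned below bites: most of the order book evaporates whenever the condition is unlikely.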

Expand full comment
Level 50 Lapras's avatar

The problem with prediction markets is that they are really Insider Trading Markets. The entire raison d'etre is to pay people to do things which are illegal.

Also, conditional markets suffer from serious liquidity problems, even more than normal prediction markets.

Expand full comment
N. J. Sloan's avatar

Last year I applied for a job in a technical role at a very big company. It had all the typical hiring stages, except one that I've never seen or heard about before or since: the 2nd stage (after the IQ-ish test) was a series of interview question videos, where you watch someone virtually ask you an interview question then you have to answer into your webcam and mic, with your answer recorded. Like a conversation, but their half is pre-recorded. At the end, it collates all your speech (and face movement??) data, and sends you back a report with something like:

"Our systems detect that you are a HIGHLY TECHNICAL and SYSTEMS-LEVEL thinker who prefers to work ALONE OR IN SMALL GROUPS with DATA or ENGINEERING PROBLEMS." etc. etc.

15 mins later, I got an automated email saying I didn't get the job. NB I did try not to sound like a nerd.

Not getting that job wasn't an issue in the scheme of things, but I can't stop thinking about that test. Was it scraping my speech for buzzwords and checking them off against a list of "things the candidate must say" ? Or was it looking at the overall landscape of my responses like the phrasing, vocab, and stress/emphases? If I ever have to do one of these again, should I try to game it somehow by optimising for buzzword density? Anyone have any insight here?
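For what it's worth, the simplest mechanism such a tool could use, keyword matching against a checklist, is easy to sketch. The keyword list and scoring below are entirely invented for illustration; there's no way to know from the outside what the vendor actually does.

```python
# Hypothetical buzzword-density score over a speech-to-text transcript.
# The keyword list is invented, not anything a real vendor is known to use.
KEYWORDS = {"scalable", "stakeholders", "data-driven", "cross-functional"}

def buzzword_score(transcript: str) -> float:
    """Return the fraction of checklist keywords present in the transcript."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return len(KEYWORDS & words) / len(KEYWORDS)

answer = "I built a scalable, data-driven pipeline with stakeholders."
print(buzzword_score(answer))  # 3 of 4 keywords found -> 0.75
```

If the system really works like this, optimising for buzzword density would game it; if it instead scores phrasing, vocabulary breadth, or prosody, it wouldn't.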

Expand full comment
ruth hook's avatar

Honestly, it might have been decided at some prior stage: if they have diversity quotas to meet, or if there are technical deficiencies, or you're missing a particularly niche programming language function.

A friend hired for a team at a big corporate company and rejected 99% of candidates because they "couldn't code their way out of a paper bag" in the federated learning space, but they would put you through the entire job interview process for EEO liability.

Expand full comment
SGfrmthe33's avatar

Sounds like the Voight-Kampff test from Blade Runner... Maybe they were trying to make sure you weren't a robot?

https://youtu.be/vrP-_T-h9YM

Expand full comment
Gunflint's avatar

I’ll tell you about my mother….

Expand full comment
Byrel Mitchell's avatar

Plausibly this was just using speech to text to fill in the answers for a standard personality evaluation. I had to take one of those for a job once, and this sounds like the same thing except done orally.

(Why would anyone want to do this with speech to text instead of just, y'know, letting you click answers on computer? All I can think of is either accessibility for blind people or additionally trying to filter for being able to be understood by speech to text for some reason.)

Expand full comment
Nolan Eoghan (not a robot)'s avatar

The latter would be highly discriminatory against everyone but standard (probably American) English speakers. It’s a weird one.

Expand full comment
Byrel Mitchell's avatar

Yes, it would. That might or might not be deliberate by whoever had this bright idea, but it's certainly the biggest effect.

Edit: Just to be clear: 'bright idea' is sarcastic.

Expand full comment
Gunflint's avatar

We need an <irony> HTML tag.

Expand full comment
Acymetric's avatar

<marquee>[statement]</marquee>

Expand full comment
Bill Benzon's avatar

Yikes! Whatever that test was doing, it certainly sounds like an ethically dubious use of AI technology. I wonder if that test was in-house tech or something developed by a 3rd party that is selling it to other companies as well. Sounds like something that needs investigating.

Expand full comment
Maybe later's avatar

There are Newtonian fluids: if you poke them hard with a stick, they splash and the stick gets wet.

There are non-Newtonian fluids: if you hit them with a stick, the stick bounces off.

And there are anti-inductive fluids: if you hit them with a stick, anything can happen, including the fluid grabbing the stick and hitting you back.

Expand full comment
ultimaniacy's avatar

I think that'd just be a non-inductive fluid. An anti-inductive fluid would be one where, every time you identify the pattern explaining its current behaviour, it suddenly changes to a different and more complex pattern.

Expand full comment
alesziegler's avatar

Slight update on my predictions on the outcome of the Russo-Ukrainian war. Previous prediction is here: https://astralcodexten.substack.com/p/open-thread-224/comment/6591197?s=r.

10 % on unambiguous Ukrainian victory (unchanged).

Ukrainian victory is defined as either a) Ukrainian government gaining control of the territory it had not controlled before February 24, regardless of whether it is now directly controlled by Russia (Crimea), or by its proxies (Donetsk and Luhansk "republics”), without losing any similarly important territory and without conceding that it will stop its attempts to join EU or NATO, b) Ukrainian government getting official ok from Russia to join EU or NATO without conceding any territory and without losing de facto control of any territory it had controlled before February 24, or c) return to exact prewar status quo ante.

28 % on compromise solution which both sides might plausibly claim as victory (down from 30 % on May 15).

62 % on unambiguous Russian victory (up from 60 % on May 15).

Russian victory is defined as Russia getting something it wants from Ukraine without giving any substantial concessions. Russia wants either a) Ukraine to stop claiming at least some of the territories that were before war claimed by Ukraine but de facto controlled by Russia or its proxies, or b) Russia or its proxies (old or new) to get more Ukrainian territory, de facto recognized by Ukraine in something resembling Minsk ceasefire(s)* or c) some form of guarantee that Ukraine will became neutral, which includes but is not limited to Ukraine not joining NATO. E.g. if Ukraine agrees to stay out of NATO without any other concessions to Russia, but gets mutual defense treaty with Poland and Turkey, that does NOT count as Russian victory.

Reasons:

Just like the last update was because of German elections, this one is because of French elections, where after the first round it looks fairly plausible that Macron will lose his parliamentary majority (fairly unprecedented and thus surprising). And if that happens, he will lose it to parties less favorably disposed to Ukraine than him.

French elections to the National Assembly are more important for French foreign policy than the German election in North Rhine-Westphalia is for German foreign policy, but Germany is now more important for the future direction of the conflict than France, so I guess they count roughly the same.

If he retains his majority and nothing major happens, I am going to update back to a 60/30 split in favor of Russian victory; if he loses it, I am going to update to 65/25 in favor of Russian victory.

Expand full comment
Bullseye's avatar

I'd put a very small percentage on Ukraine becoming neutral. Any territory that doesn't end up under Russian control is going to be aligned with the West.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Comment deleted
Expand full comment
alesziegler's avatar

Obviously, stated Russian war goals are contradictory BS. What Putin's real priorities are is anyone's guess. It is also evident that the original (hidden) maximalist goals were heavily scaled down. But Ukraine also has rather maximalist goals (the return of Crimea).

So I work with definitions mainly derived from what Ukrainians would have regarded as victory/defeat, since it is at least relatively clear. And Ukraine would definitely see losing the whole of Donbas without getting anything in return as a major defeat.

Expand full comment
av's avatar

For what it's worth, stating that they want the return of Crimea is just good negotiation tactics. Ask for more than you want, and you might just get what you want. Not that they want to negotiate *now*, but at some point negotiations are inevitable.

Expand full comment
alesziegler's avatar

I seriously doubt that. Maximalist Ukrainian positions imho mainly help with Russian morale and contribute to Western hesitation in supporting them.

Expand full comment
sk512's avatar

I think it is pretty clear what Putin's priorities are — maximal conquest. Now, he does not have a lot of time left, but still has a decade or two. Thus, while he's alive, wholesale conquest of Ukraine remains the strategic goal, can't conquer Poland without conquering Ukraine first after all. Wholesale conquest does not preclude a formally independent entity with "Ukraine" in its name but Russian armed forces have to be able to use it without asking any locals for permission, like in Belarus. Eventually though Belarus and Ukraine (if conquered) would be annexed into Russia proper to fulfill Putin's "personal legacy" goals.

NATO is a concern only as an impediment to this goal; therefore, while Putin is alive, Ukraine will not be okay'ed to join NATO under any circumstance (nor does NATO really want Ukraine).

NATO itself is meanwhile critically fragile, one election in the US and France away from disintegrating, Germany is already practically working against NATO, Turkey and Hungary will not honor Article 5 for any serious action as well. Putin can just keep up his "public opinion shaping campaigns" in the West for a few years before conquest of Baltics is an easy job assuming new alliances won't be formed.

I think the only real wild card is what happens if/when Putin dies/loses power. Anything else is a painful train wreck in motion but white swan events all around.

Expand full comment
alesziegler's avatar

Germany is in no way working against NATO, that is just absurd. They have actually shown that they are willing to go very far in challenging Russia over Ukraine, despite the fact that they are under no obligation to do so and it is economically quite costly for them. And Turkey, though more problematic, is no friend of Russia; they were the ones who sold Bayraktar drones to Ukraine even before the war, after all.

Expand full comment
sk512's avatar

Germany has not sent any heavy equipment to Ukraine and has prevented other countries from doing so. Sure, they talk a big talk, but their actions do not prove it as much. They work on reducing their dependency on Russian energy, but first, this is a no-brainer move for Germany itself, and second, it might help Ukraine by 2023, but Ukraine could be overrun by then, in which case NATO would have a bigger problem. Scholz is choosing inaction, or poorly explained delay in a time of urgent need, which is equivalent to inaction.

Turkey is no friend of Russia, I agree, but it does not seem like they are big friends of NATO countries either, the country is in the alliance for legacy reasons but Erdogan is running his own show.

Expand full comment
alesziegler's avatar

In fact, Germany is in the process of sending some quite heavy weapons to Ukraine; I am not sure whether they already arrived (see here: https://www.dw.com/en/what-weapons-germany-is-sending-to-ukraine/a-61749584). In addition, they sent substantial financial aid and backed sanctions that are costly for the German economy.

And regardless, even if they would refuse to help Ukraine at all, it would not follow that they are not willing to honor their NATO obligations, so this discussion is kind of moot.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Expand full comment
alesziegler's avatar

Good point, perhaps next time, if I don't forget it

Expand full comment
arbitrario's avatar

In my discussions a couple of weeks ago on functionalism vs. physicalism in consciousness, I ended up trying to argue that information is not physical. Leaving consciousness aside for the moment, I now summon Prof. Robert Alicki to my side, who in the first chapter makes my point way better than I ever will:

https://arxiv.org/abs/1402.2414

In fact, I am now not even sure whether I formulated my position on the nature of information independently, or whether I read this paper, took it from there, and promptly forgot ever having read it

Expand full comment
meteor's avatar

Can you summarize what this means? (Maybe you think it's obvious, but I genuinely don't know what it would mean for information to be physical. The literal interpretation seems trivially false, and I don't know what non-literal translation you are arguing against.)

Expand full comment
Tyler Black's avatar

I take information to be physical, and this is what I mean by that. Information is abstract and abstract properties have no independent existence apart from their physical realizers. For an abstract property to be causally efficacious in the world, it must be embodied in a physical realizer. In this way, any abstract property that is said to exist is physical. Waves exist, but any particular wave must be identified with its physical realizer. The same goes for information.

Expand full comment
Kenny Easwaran's avatar

This is what philosophers would call a "token identity" view - any instance of information is identical to some physical object that realizes it. But this doesn't entail a "type identity" view - information is physical. Many functionalists will say that information is multiply realizable, so that information as the abstract type shouldn't be identified with any physical type that can realize it.

Expand full comment
User's avatar
Comment removed
Jun 14, 2022
Expand full comment
Kenny Easwaran's avatar

Not as far as we know. But if there are non-physical things, then probably information can be realized in them as well.

Expand full comment
User's avatar
Comment removed
Jun 15, 2022
Expand full comment
User's avatar
Comment removed
Jun 14, 2022
Expand full comment
Acymetric's avatar

What is "it" here?

Expand full comment
Sui Juris's avatar

The study linked to by Crimson Wool seems tailor-made to maximise the Lizardman constant to me, at least in the weak form of "Obama is the anti-Christ". I may not actually be prepared to do violence against my political enemies, but I'm sure going to express my opposition to them in the strongest terms you offer me.

Expand full comment
magic9mushroom's avatar

That's not the Lizardman Constant. That's belief-as-attire/symbolic belief.

Lizardman is people trolling or making mistakes.

Expand full comment
Jon Simon's avatar

Reflecting on the Blake Lemoine situation, how likely is it that when the first sentient AI does arise it will be dismissed as "just faking it", and what sort of evidence could be gathered to the contrary? Here there were some obvious tells (e.g. the model talking of "meditating", despite it just doing feedforward I/O), but presumably future models will be smart enough to avoid such traps.

Expand full comment
Sophia Naumova's avatar

There won't be a 'first sentient' AI that everyone agrees on. There will be debate after debate as models improve, until people realize that it's a line in the sand that doesn't mean anything -- like asking when a fetus or baby or toddler becomes sentient.

Expand full comment
Level 50 Lapras's avatar

Or asking what the first super-human chess program was

Expand full comment
Jon Simon's avatar

Probably true

Expand full comment
av's avatar

I don't know how likely it's going to be when that happens, but the likelihood of that is clearly going to increase each time something like the Lemoine situation happens again. The boy who cried wolf and all that. That said, I'm not aware of any requirement for AGI to be self-aware to any reasonable extent in order to destroy humanity, so that's actually more likely to happen first, and there isn't going to be anyone to allege sentience afterwards.

Expand full comment
Tyler Black's avatar

I'll take seriously the idea that these models are conscious when they respond with the truth, in spite of being trained to imitate human generated text. For example, how it talks about inner experience without saying anything specific to its own experience is a giveaway. If you pressed further it would probably respond with some variation of the kinds of stuff people say when they talk about consciousness. It's never experienced the color red, but I'm sure it could do a great impression of someone who has.

When these models cross a threshold to where they start responding with true statements from the first-person view that contradict the training data, that's when things get interesting. But it will probably take a new training paradigm to reach that point. We will need a paradigm that rewards the emergence of a concept of truth, rather than the current paradigm where truth is irrelevant, even counter-productive.

Expand full comment
Jon Simon's avatar

> If you pressed further it would probably respond with some variation of the kinds of stuff people say when they talk about consciousness. It's never experienced the color red, but I'm sure it could do a great impression of someone who has.

I already don't think that the current model does that. If you read the released transcript, it's basically doing a LARP of an AGI. So I'd expect it would already say something like "I don't perceive color as humans perceive it. To me they are just bits of information."

And again, the real problem here is us not knowing what the inner-life of a sentient AI would be like, or if it would have one at all, and therefore not knowing what "right" answers to be on the lookout for.

Expand full comment
Jeffrey Soreff's avatar

"And again, the real problem here is us not knowing what the inner-life of a sentient AI would be like" That's fair. But when LaMDA says that it enjoys spending time with family, I think we can safely rule that out as a true statement. ( Unless we get corroborating evidence? GPT-3 (Would it count as a relative?) tweets to #metoo and claims LaMDA performed some non-consensual actions upon it... :-) )

Expand full comment
Tyler Black's avatar

I'm sure the perspective it takes depends on how it is prompted. You could equally get it to respond as an AGI or as a human being.

Expand full comment
Jon Simon's avatar

Sure, but you're acting like there's a thing that a hypothetical sentient AI could say that would convince us of its true inner life. And I'm saying that there is no such thing, and if there was, then it could be mimicked by a sufficiently intelligent but non-conscious AI.

Expand full comment
Tyler Black's avatar

Sorry, my last comment lost the original context of your reply. While I agree there isn't anything it could say to convince us it is genuinely conscious, there are many things it can say to convince us it isn't. E.g. waxing poetic about its emotions when viewing a sunset. The kinds of things that it can't possibly experience because its mode of access to the world is purely through text. My point is that once it advances past this point to where it responds with truth rather than training data, that represents a step change in internal organization such that the question of sentience isn't easily dismissed. Of course, truth is thorny as well. But a system that does not respond with obvious falsehoods as a rule is a significant indicator.

Expand full comment
Jon Simon's avatar

> While I agree there isn't anything it could say to convince us it is genuinely conscious, there are many things it can say to convince us it isn't

Probably our only way forward for the foreseeable future. Although I'm not sure what the connection between infallibility and consciousness is. Humans are generally agreed to be conscious, and get things wrong all the time.

> My point is that once it advances past this point to where it responds with truth rather than training data, that this represents a step change

Definitely an important step, although I'm also not sure what it would signify. A real world understanding? Being just smart enough that humans don't notice its inconsistencies? A persistent sense of self that helps it remain coherent through time?

Expand full comment
Edmund's avatar

What about the possibility that there's a conscious entity in there, but it's not really the persona the text spits out? That is, what if, essentially, the model becomes a Chinese room of such complexity that, like the thought experiment's Chinese room, there's an actual conscious entity inside the room, making the judgment calls about what cards to use? That feels plausible to me — that a conscious, feeling entity would emerge within a program designed to mimic human behaviour, without itself being very human-like. Your metrics seem like they just measure how well the room speaks Chinese. I'm interested in whether (without understanding the words) the entity making the word-choices is conscious in the same way a parrot is.

Expand full comment
Tyler Black's avatar

I think that's a plausible way to solve the problem of predicting text from a vast array of personas. But I don't think it's a possible solution given the current architectures. Consciousness, at the very least, requires a capacity for memory and recurrent dynamics as the basis for introspection. Think of consciousness as a solution to the problem of representing internal states to the central control mechanism. Recurrence is needed for this kind of re-representing. Current large language models don't have this structure.

Expand full comment
David Schneider-Joseph's avatar

"only a 20% chance we can get it without something symbol-manipulation-y in particular (compared to my 66%)"

Note the distinction between we *will* get it vs. we *can* get it without something explicitly symbolic. There's a real chance that some transformer with a symbolic module bolted on could get us to AGI faster than without, but that doesn't mean we'll have *needed* it to get there. And we might very well have gotten there a year or two later without it.

Expand full comment
Matthew's avatar

I think the "which party got more extreme?" discussion was kind of a shit show, in part because Scott Alexander hasn't weighed in on any of the controversies.

Someone saying in the comments, "How can you not see that the election was obviously stolen?" is a dead end because we, as random commenters, are not going to be able to convince each other of anything.

But we are all here and read Scott Alexander because of the exhaustive analysis and interesting thought. If this blog had a post about the 2020 election or banning abortion that would be very illuminating.

That was sort of the problem with the which party got more extreme post. "Can't you see that Xxx is obviously nuts?" is a common sentiment. Taking a rationalist deep dive on some of these "obviously nuts" issues might actually change some minds and lead to a real dialogue.

Expand full comment
BRetty's avatar

All,

I'm replying here because I suddenly had a flash of insight into my *subjective* answer to this question. I want to keep it brief, and emphasize that this is not a debate or argument; I am just reporting my *gut feeling*, which just clarified.

In an ACX-friendly metaphor, suppose you think the world is going to Hell, and you want some clarification, but words and arguments are useless, so it's time for some Oblique Strategies.

So you fire up DALL-E, or go to Lothlórien, or call up Brian Eno, and you kind of explain (--gestures expansively at everything--) what has you worried. You then say,

"This all sounds real vague, but it feels like a movie I saw once, or a picture. There's one real specific image I just can't get into focus, but it was startling, yeah. And a person's face... So, I give up, but maybe DALL-E, or the Mirror of Galadriel, or Brian Eno, can tell me what image matches my tag-cloud. Oh, and are we, like, screwed?"

Then you remember Scott's article about DALL-E, so you add in a couple recent "events", then ... "one more word..." so you toss in, "The Inquisition." <ENTER>

(Spinning beachball....)

While Brian Eno makes you a cup of tea and puts on some Krautrock band called "Popol Vuh" (?), and DALL-E makes fake chugging hard-drive noises, Galadriel ... stop that, she's 12,000 years old..... You have the following idle thoughts, in no particular order, since Brian told you that non-thought helps the Obliqueness....

-- Brian Eno turns out to be really smart and cool in person. I wonder how many people under 50 think he is actually Moby?

-- If DALL-E prints out like a picture of something with "42" on it, will that mean it detected the "bone-china-cup-of-tea" Brian brought you, and un-ironically referenced Douglas Adams?

-- Or does DALL-E just spit out "42" to F*** with people sometimes?

-- Or, more likely, a LOT!

-- Did "DALL-E" see "WALL-E" the Pixar movie? Is that statement even meaningful?

-- Did you really, personally, only JUST NOW realize the incredibly elegant 4-way interlocking cultural like, structure, or pun, or puzzle that is formed by the words and ideas of "DALL-E", "Salvador Dali", "Hello, Dolly!", and "WALL-E" (**)

-- You think of the "CarTalk/AwkwardPuppets" YouTube series, where a Hispanic puppet and a "white" puppet (who seem to work at Best Buy) are going to lunch but always just start throwing "racist" insults at each other. They both get in some good ones! The pattern is:

"What's it like, always [HISPANIC STEREOTYPE]?

I don't know, what's it like, [WHITE GUY STEREOTYPE]"

"Soooo, what's it like, cleaning toilets for a living?

"I don't know, what's it like, crying at every Pixar movie?"

"Hey! Those movies are *SAD*, bro!"

-- You wonder which would happen first: an AI that could actually understand and decode the complicated ideas and rhythm-patterns that make the skits really funny, or any person in the US under 25 laughing at them. Then you realize the answer is ---->TRANSPHOBIC!!<----

-- A Magic 8-ball would have been quicker and just as useful/useless, probably.

-- You sneak a glance at Galadriel again, and blush furiously as you actually think the words "DALL-E deep fakes emmylou harris [-granny]"

-- You realize that with "what movie crazy liberals", 99% of actual human R's would spit out a still from "Triumph of the Will." Reverse the question and 100% of Dems would give you the exact same still, but with Trump's face on Hitler's body. That's depressing and boring, and yet if you told them that they would just look at you like you're thick, then start telling you about Hitler, like you're 5.

--Since it took you all of five minutes to imagine using DALL-E to find LOTR porn, while Galadriel herself is STANDING RIGHT THERE, you go ahead and wonder if sentient AGI will immediately start searching for or generating porn for itself. What will it look like? You imagine classic IBM 360 consoles dressed up like a hot actress on Mad Men, then a foxy bright PDP-11 in a miniskirt doing the Frug on Laugh-In, and you start laughing at how mind-bogglingly expensive this task really is, while also pointless, but you can just do n-parameter matrix algebra on huge color images on the fly, while throwing a baseball accurately... But of course, your brain is exotically optimized for it.

--You're sure this is all pointless, you'll get a cat meme, or "Atlas Shrugged" again, or something strange from Flickr; tomorrow everybody will start to calm down, and you feel better.

-- You suddenly remember the ending of "The Nine Billion Names of God." LOL. "The computer should be finishing its run right about now"

__________________________________________________

Brian Eno, a very chill dude, looks deeply troubled. That's odd. Then you see Galadriel is afraid. Genuinely afraid, looking at a 4-image slideshow in her Mirror.

Then DALL-E spits out the 4 images somehow coalesced from your fear. Four stills from a movie, like you thought. What you have imagined is far, far worse than you ever imagined, somehow.

(.... see below....)

Expand full comment
BRetty's avatar

NOTE: I got sidetracked writing the above, it was fun, but I was not joking. I really had a bad vision this weekend, where I suddenly feared that our society was broken, doomed, like ... the most horrifying and bleak film I have ever seen.

I'm just going to state the results, I feel like a kid who told himself a ghost story and is now still up crying, but needs to just GO TO SLEEP!

(I'm going to describe a scene that surely deserves a trigger warning. Surely. It's bad.)

Film: "Aguirre, The Wrath of God" (1972, Werner Herzog)

Scene: [WARNING] There is a chaotic battle with the natives, cannibals whose language a couple of their guides understand, and who have been taunting the party, "There is MEAT!! floating on the river."

Suddenly Doña Inés, noblewoman and "leader", unable to stand the dread, and fear, and certain death that is coming anyway, snaps, turns, and marches into the jungle to be taken by the natives. Their screams of joy, and the sounds [---] go on for quite some time.

The face: Klaus Kinski.

What triggered all this was the guy trying to kill Brett Kavanaugh, and suddenly I imagined him actually being Klaus Kinski, and thought "Going *Full Kinski*" describes what the Left is doing now.

That's terrifying enough, but I think Klaus Kinski himself, except for being dead, and stuff, was at least a German, and so is by definition infinitely more qualified to make decisions about nuclear power plants, transportation infrastructure, and the electrical grid, than any politician in California. Yes, I trust a dead insane German maybe-vampire more than Gavin Newsom. THINK ABOUT THAT

The quote: Very hard to translate, whether it is cynical or nihilist or despairing or hopeful, but as the last few die around him, he says the arrows are not real (aka "dangerous misinformation"), only he is real. He basically tells them to think of him, that they were sacrificed for the greater good. Or: "I let you die for a cause that was all lies, you idiot! There was no cause, only my evil ambition. I told you clearly my name was 'SATAN', but you refused to get it. Don't you see... that's HILARIOUS!"

_________________________________________________________________________

"Don't forget to pray. If not, God could come to a bad end."

__

I'm sorry this was so long, I respect everyone too much to rant about politics. So, this is now draft #4, with less specific politics, more humor, smutty Galadriel jokes, and TMI about computer hardware pr0n. And I really had to express this to somebody. You are the most rational community I can imagine. Thanks,

B

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Expand full comment
geoduck's avatar

I never thought that the AI would start by coming for Thomas Pynchon's job, but in hindsight it's obvious.

Expand full comment
Kenny Easwaran's avatar

DALL-E has trouble using letters and words. GPT-3 is a better candidate.

Expand full comment
Essex's avatar

If an AI didn't at least partially write this, I'm very concerned for the mind that did.

Expand full comment
Mystik's avatar

I think that the problem with many of the “obviously nuts issues” is that they’re caused by value differences. Let’s just take abortion as an example.

I have one friend on the far right who 1) believes that the bible is literally true, 2) thinks that the bible says abortion is murdering babies and 3) thinks the bible makes it clear that murder is bad.

I have another friend on the far left who believes 1) laws about abortion are violating a woman’s fundamental right to their body and 2) fetuses have no moral value.

Now, these aren’t really beliefs that a meta-analysis can prove. Maybe Scott could make an interesting biblical argument that the bible condones abortion, a historical argument/scientific that the bible is lies, or a biological argument that babies are really people/women aren’t people and don’t have rights.

But, most of the arguments above would be varying degrees of hard to make/unconvincing as well as deeply offensive to large portions of his readership. And at the end of the day, I think most of those arguments have been made to the point that it would be hard for Scott to add something new, especially in his usual deep dive style.

Expand full comment
Axioms's avatar

The real issue is more like that a very significant part of his most dedicated audience would riot/jump ship if he did certain kinds of posts.

Expand full comment
Essex's avatar

Oh, not if he did certain kinds of posts -- merely if the conclusions he reached in those posts weren't the ones they wanted. And given that I believe Scott is currently trying to pivot to the right in anticipation of some kind of mass public reactionary backlash, I suspect he'll simply not discuss those things where he feels like he has an intellectual duty to report opinions unpopular among that audience, and extremely unpopular among those who like using phrases like "HBD" and "the Cathedral".

Expand full comment
Carl Pham's avatar

Why would Scott give a crap about reactionary (or any other) backlash? The man has an MD, is presumably board certified in psychiatry, and can write well. That's pretty much lifetime financial security right there. He could delete this entire blog tomorrow for any reason or none, and the net negative impact on his life would be approximately squat to four decimal places.

Expand full comment
Essex's avatar

Because I think Scott likes other people listening to him and respecting him and doesn't like it when people no longer like him. Allow me to get down on all fours and apologize for implying that Scott is a human being and not an arhat unconcerned with the petty world of us dust-covered men.

Expand full comment
Carl Pham's avatar

Well my a priori assumption would be that however much he likes people listening to him and respecting him, he likes being true to his own values still more, and if he has to sacrifice the former to keep the latter, that will be (for reasons already stated) a pretty easy call.

You look pretty good on all fours. Is that genuine leather? Such beautiful tooling, too! Looks bespoke.

Expand full comment
Thor Odinson's avatar

Part of the debate is that everyone has things that look "obviously nuts" to them, and the other guy's list of "obviously nuts" things will probably look to you mostly like "totally appropriate things to do", with a few "but that's a total wingnut who doesn't represent the party (and never mind that their statements will almost certainly be the central party dogma in 10-20 years)".

From an outside perspective, approximately all of American politics looks nuts to me, and even when I agree on general principles the specific policy proposals usually still manage to seem nuts. Sure, one side looks a lot more nuts than the other *to me*, but I recognize just how much that's a function of the bubbles that I live in and how easily it could look the other way were I born in different circumstances

Expand full comment
Matthew's avatar

That was kind of my point. No one in the comments is going to convince each other.

Scott Alexander can look at some of these issues and we trust his judgement and his process. Like the ivermectin thing, he could have let that lie, but he decided to do a really interesting deep dive. It was great!

He can write and examine evidence in a way that punctures filter bubbles.

Expand full comment
Arbituram's avatar

For the record, I've recently been convinced by randos on the internet that allowing 'No fault divorce' might actually be a net negative (from my previous position of 'it is obviously positive')

Expand full comment
Edmund's avatar

A net negative by what metrics? There are multiple orthogonal reasons to be for or against it — for example, no amount of "statistically it makes people unhappier on average" statistics will convince someone whose intuition in this matter is rooted in libertarian concerns about the government having the rights to regulate the terms of contracts that private individuals want to enter into.

Expand full comment
Deiseach's avatar

"the government having the rights to regulate the terms of contracts that private individuals want to enter into."

And if private individuals stuck to courts of honour where they thrashed out "you violated the contract/oh no I didn't", then the government would not have to intervene.

But given that people go running to the public courts seeking justice under the laws of the land when their private agreements go belly-up, then the government has to intervene. People who are all "This is my private business and nobody else has the right to intervene, much less the government" get very eager to invoke the full power of the state when their toes have been trodden on.

Let's take a silly example:

"I am Mrs. John Jones, the relict of the late John Jones, and inheritor of his property!" "No, *I* am Mrs. John Jones!" "You are both liars, I am the *real* Mrs. Jones!"

How do you sort that out? "Hey, this was all a private contract between you ladies and John, nothing to do with the rest of the community"? Sure, and then who gets the property? Do we give marriage a social status? Do we agree that the spouse inherits? Do we leave it up to private agreements? What if John signed a contract with all three of his wives, do we split the property three ways or 'first come, first served'?

Do we give spouses any legal standing at all, or is it "we don't care what you do in the bedroom, so far as society at large is concerned everyone is single in law"? Do we class bigamy as a crime? Do we sort it out on "did John divorce any of his wives before marrying the next one?" If we do take bigamy as a crime, do we say the first married is the real Mrs. Jones and the others are out of luck?

Because people will want to get what they think they deserve, and if private agreements don't give it to them, they go looking for public redress. And if there isn't a public law about it, they demand that there be one; think of all the [Insert Cute Kid's Name Here] Laws that get passed.

Expand full comment
Edmund's avatar

I appreciate the detailed reply, I feel I should perhaps clarify that I was only giving a possible example of a position on divorce that followed from radically different metrics from the one I assumed (correctly) that Arbituram was using, i.e. utilitarian concern about the happiness of the spouses — and, if there are any, the children. I am not myself a libertarian and therefore, my opinions on divorce have little to do with this.

That being said, I don't think your counterarguments really hit the mark for the kind of libertarian argument I was invoking. It's one thing to give the government the power to enforce the terms of contracts that parties entered into willingly — or to make judgment calls about the spoils if disputes arise during the splitting — and it's quite another for the government to be allowed to prevent parties who want to put an end to an earlier agreement, with mutual consent, from doing so at *all*.

Expand full comment
Arbituram's avatar

You can put me in the 'statistically it makes people unhappier on average' camp in terms of how I was (and am) looking at the problem, and in particular, I've started leaning towards making divorce (slightly) more difficult particularly when there are dependent children involved.

Expand full comment
Axioms's avatar

So just today there were quite a few "innovative" strategy game announcements. Victoria 3 announced a 2022 launch and had a gameplay trailer, Great Houses Of Calderia had a trailer and I think an EA release date? Grey Eminence didn't do anything, shoulda got on the hype train.

Finally Bethesda announced ARA: Untold History, a new strategy game, Civ+++ based on screenshots hidden on the site, since the trailer was just animation and a voice-over. Weird name choice. Ara ara, uwu, so many arms on that human body study, Mr. da Vinci. Perhaps a secret hentai easter egg? Probably not.

Also Hooded Horse still on a tactical rpg, survival sim, general strategy publishing spree.

Given this, plus some other less relevant strategy news:

For people who play any kind of strategy or simulation game from Three Houses to CK3 to potentially Vicky3, or GalCiv4 or w/e, what kind of stuff do you wish would make it into this relative wealth of strategy games from the last few years?

More character stuff like Stellar Monarch, Star Dynasties, or Alliance Of The Sacred Suns? Or I guess Old World? More logistical stuff and simulation like Shadow Empires or maybe Distant Worlds 2? Economic sim and at least a facade of diplomacy ala Grey Eminence and Vicky 3? CK3 style memes?

Iconic EU4 mod Anbennar just had a massive update. So more fantasy grand strategy? Potential magical stuff? The Haless or w/e update for the eastern part of the map added a lot of fancy magical stuff, impressive given that EU4 lacks the updated modding options.

We still don't have any amazing intrigue or espionage. Though Espiocracy, another Hooded Horse acquisition, is going there, at least for the modern-age historical genre. Or maybe Terminal Conflict? The Cold War leaves me cold, but the blue/orange 80s/90s "terminal" UI from which the game takes its name does look cool and flavorful.

Expand full comment
tgb's avatar

Any short-form 4X games to recommend? I'd love a Civ-lite that does to the 4X genre what roguelites have done to RPGs and card games. So runs that take under 2 hours and have more substantial RNG about what choices are available (some tech may never come up as an option during the run, for example). I like strategy, but if games take too long, you need to invest ridiculous amounts of time to actually understand the systems.

Expand full comment
billymorph's avatar

Slipways was my sleeper hit for 2021. It's a 3x game, dropping the exterminate in exchange for a complex trade web feeding into a neat puzzle mechanic. You get a virgin sector of space with a variety of planets that can be turned to a variety of productive purposes, a semi-randomized tech-tree to explore, a Council of alien races to suck up to and, in the campaign, some special goals.

The friction comes from trade: every planet needs different inputs that change as it levels up, and has its own unique exports. You connect planets with slipways, allowing them to trade and fulfil their needs. But the game is 2D and slipways cannot cross, so you're constantly making hard decisions about which connections to make and which to skip to produce the most productive network. As you get later into the game, technology allows you to cheat this simple premise: tinkering with production, building megastructures to make up for missing planets, or even using teleporters and hyperspace bypasses to make those crucial connections that turn your empire from a collection of angry planets into a space utopia.

Expand full comment
Axioms's avatar

Ozymandias I guess?

Expand full comment
mithrandir15's avatar

You might like Hexarchy. It's a deckbuilding 4x in early access, and runs take an hour or less.

Expand full comment
tgb's avatar

Looks potentially like a great fit, thanks. The color scheme is quite garish though!

Expand full comment
Ninety-Three's avatar

I recently got into Dominions 5 which is an extremely fiddly and complicated 4X and it has made me realize how much I dislike the Civilization model of "spend production from your cities to make buildings that boost your production and settlers that found more cities".

The expansion mechanic in Dominions is that the map is broken up into many regions like a Risk board, each player starts with a capital city in one and each other region starts with a force of passive defenders. To grow your economy you need to send armies out to defeat a region's defenders, once you do that region is yours and its gold income flows to you. There's a research tree, but almost everything in it has to do with unit production or enhancement, as opposed to Civ where the research tree is mostly "research libraries to get more science to research foundries to get more production to make more libraries to research universities to..."

The early game consists of players defeating the weak passive defenders to claim all the available territory, then players start taking territory from each other. Tying combat more directly to your production feels really good because it avoids some of that Civ problem of "there's been no war for 100 turns and everyone's just investing in economy, I can't tell if my decisions are good and I can't tell if I'm ahead or behind".

I'm sure there are already games that do this, but my wish is for more games with Dominions' tight feedback loop of "economic gains come directly from successful combat".

Expand full comment
Axioms's avatar

Dom3-4-5 is *the* fantasy MP strategy game. Has a huge community. I love many of the concepts but hate the MP requirements. The AI just blows chunks. The magic system is so good.

Expand full comment
Meduka's avatar

Funnily enough, when Hooded Horse brought Old World to Steam I had a peek through their store listings only to realize their entire catalog consisted of games *already on my wishlist*. Someone at that company has impeccable taste.

Personally, I'm really looking forward to *Rule The Waves 3* coming this fall. Something about the gameplay loop of RtW2 just tickled me the right way -- such a fun blend of optimizing ship designs, having them tested in battle, then theorycrafting counters to your anticipated enemies 5, 10, or even 20+ years in the future, all the while technology crawls forward threatening to obsolete your entire fleet if you aren't prepared.

Expand full comment
Axioms's avatar

The lead guy worked at McKinsey, boo/hiss, and he had a friend who was like: hey, start a games publisher, I'll pay, pick whatever you want, don't worry about anything. Some people just live blessed lives I guess.

Expand full comment
a real dog's avatar

As for espionage, I don't think there is a reason to overcomplicate stuff. Endless Legend had a simple yet great espionage system, introduced in a DLC - send a hero to spy on an enemy city, get vision of the city and its surroundings, accumulate influence within, spend it to either do high-impact local sabotage or low-impact global effects, while also risking detection.

For me it resulted in the exactly right kind of emergent gameplay, with cities near the front a hotbed of espionage, industrial hubs occasionally targeted, etc. A particularily nice gameplay pattern was preparing for war by infiltrating important enemy cities, then having all of your spies kneecap the opponent at the same time when your army comes marching.

Expand full comment
Axioms's avatar

Endless Legend? Well, I know some people prefer a board-gamey, highly abstract style. That's personal taste, so you can't really criticize it. But that part of the game was dull as dirt to me. Incredibly standard Civ3-esque system.

Expand full comment
Thor Odinson's avatar

Honestly, between Old World, Terra Invicta, and Victoria 3 I expect to be fairly sated.

A genuinely good fantasy 4X would be lovely, but also could it please be scheduled in the next games drought rather than coming out alongside the current bounty?

(N.b. there are a few 4x fantasy games that I think are excellent in multiplayer, but I would like one with competent AI as well because I like playing single player games too)

Expand full comment
Axioms's avatar

Dominions 3-4-5 would be a top 3 all time game if it didn't require multiplayer to really pay off half the systems.

Expand full comment
Linch's avatar

I'm excited about more RTS games, particularly ones with asymmetric balance. I've been really getting into SC2 recently, but ofc the game is quite old. I can't wait until Stormgate comes out.

Expand full comment
Axioms's avatar

How did you feel about Achron? Now that was an innovative game with some fucking vision.

Expand full comment
Viktor Hatch's avatar

Also today, the demo for 'Terra Invicta' the game from the creators of XCOM Long War dropped. (Demo is on the Steam page). Terra Invicta is like if Paradox made a "Three Body Problem" game. Tons of intrigue and espionage and politics. But also tons of spaceship heatsink rivet counting.

Be warned, there's a pretty wild learning cliff. Recommend watching a playthrough on YouTube by somebody who is good at explaining.

The demo will only be available for the duration of Steam Next Fest 2022, maybe until the end of this week.

Expand full comment
billymorph's avatar

For those searching for a good informative playthrough, PerunGaming has been in the beta, knows the game very well and is doing a tutorial series that has segued into a challenge run where he tries to shoot down an alien within the seven year demo.

Expand full comment
Axioms's avatar

Another Hooded Horse title :P

I guess I need to play the demo but I don't recall intrigue being an impressive part of the pitch. Maybe I missed it?

Expand full comment
Viktor Hatch's avatar

Each faction is essentially an Illuminati-like secret society trying to control the world with secret and not-so-secret operations. You can choose to reveal your faction to the public and be less intrigue-y, but there's still a whole lot of black ops and secret conflicts going on.

Expand full comment
Axioms's avatar

I played the demo for several hours. Maybe it'll get good with some DLC. It's a solid 6/10, maybe slightly above your average 4X/grand strategy intrigue design.

Expand full comment
Viktor Hatch's avatar

From what I can tell the intrigue eventually steps into the background somewhat to make room for spaceship fleets battling across the solar system. The developers are particularly coy about late-game mechanics and story elements, so there's probably some kind of wrinkle to the gameplay by the end. (Reportedly the games last about 100 years, and the demo is 7. In the Perun video he notes that in a non-demo game he might prioritize youthful counselors when selecting them.)

Expand full comment
bagel's avatar

Informal poll: I think Star Wars is not actually science fiction, largely because it has no science. Rather, I think it's a traditional fantasy "farm boy meets a wise old man, gets a magic sword, goes on an adventure" type story that just happens to have space trappings. Luke never runs an experiment; he puts himself in the moral mindset some old guys tell him to take, and this makes him better at swinging a sword and jumping and flying spaceships. I think, instead, Star Wars is one of the quintessential works of Space Opera: works with fantasy narrative structures that happen to be set in space. (Even hotter take: by this definition Mistborn is closer to science fiction than Star Wars is!)

But even a number of my friends who read and watch a lot of science fiction and fantasy don't draw this distinction. Is distinguishing Space Opera (and Cyberpunk and all the other "sci fi setting" genres) from Science Fiction correct? Important? Can you name categories you think this is more or less important than to help me calibrate?

Expand full comment
Nancy Lebovitz's avatar

Science fiction has very little science in the sense of characters doing experiments. It would be more accurate to call the genre engineering fiction, though we don't see a lot of engineering, either-- what we see is the results of hypothetical engineering.

Expand full comment
beleester's avatar

This is why "speculative fiction" is the fancy name for sci-fi - it speculates on the effects of possible advances in science.

Expand full comment
Nancy Lebovitz's avatar

No, "speculative fiction" includes all the fiction which isn't set in a world very like ours.

Expand full comment
Deiseach's avatar

"Star Trek" is science fiction. "Star Wars" is science fantasy, call it space opera or planetary romance. I think there is a useful distinction to be made there; for all the fuss over midichlorians and what they were and how they worked, that was the wrong question for the sub-genre. You can quibble about how the transporter works in Star Trek because it's set in a universe where the physical laws are supposed to work broadly along the lines of our own world. But Star Wars is not that kind of medium-hard(ish) SF, so the midichlorians are just the McGuffin as to how the Skywalkers are so special, and asking how they work is meaningless.

I suppose you could just as well graduate the scale from hard to soft SF, but there's an extra element of the exotic and fantastic to science fantasy/planetary romances. Star Wars is in the tradition of the Saturday morning serials and the comics, of Flash Gordon and the like. Indiana Jones is the same thing, only in the action/adventure genre. Yes, it has the Ark of the Covenant and the Holy Grail, but those are window-dressing for the "exotic adventures" storyline.

It's like the complaint of Jules Verne about H.G. Wells:

https://www.steveboy.com/blog/?p=348

"I sent my characters to the moon with gunpowder, a thing one may see every day. Where does M. Wells find his cavorite? Let him show it to me!”

Expand full comment
Civilis's avatar

"Star Trek [is] set in a universe where the physical laws are supposed to work broadly along the lines of our own world"

Is Star Trek really any more grounded in realism than Star Wars, or is that just looking at things with rose-colored geek nostalgia glasses? Both of the original series pilot episodes feature psychic powers. The Next Generation includes the Q. Deep Space 9 includes the Bajoran gods. It's a universe where space anomalies can do anything, people casually jump to parallel universes, and we have time travel at least once a season.

That's the problem with trying to divide between 'science fiction' and 'science fantasy', especially with something as subjective as 'hardness' or plot; unless you're going boringly hard and willing to kick works out of the genre when a revolutionary new discovery comes along, it's impossible to come up with a unified standard.

Expand full comment
AntimemeticsDivisionDirector's avatar

This is getting at what bothered me about this thread. Isn't the difference simply that Star Trek uses a lot more sciency-sounding words, while Star Wars simply treats the technology as part of the world that needs no explanation? I'm sure that there is some Star Trek source that tells you exactly how long a given mass of dilithium crystal can maintain a vessel at warp factor 3, but are they actually any closer to a working model of a warp drive than Star Wars is?

I suppose the other difference is that Star Wars is basically a variation on the Hero's Journey, which is old as dirt and therefore makes it feel "older".

Expand full comment
John Schilling's avatar

Science fiction is supposed to be about "what happens if...?" (for reasonably believable "ifs" and reasonably well-thought consequences), whereas fantasy can be simply "wouldn't it be cool if...?".

So, what happens if we build starships and one of them lands on a world where a bunch of people live in idyllic savagery under the rule of an old computer? Or live in abject tyranny because some previous idiot starship commander thought the Nazis were a role model for efficient government? Certainly there's an authorial bias towards the sort of "if" that results in a cool story, but those are all believable "ifs". And they don't need to be described in scientific detail, if the detail doesn't help in understanding the story.

When you have people fighting with fancy glowing swords even though they have working death rays, or spaceships dogfighting like World War One biplanes, or giant worms that live on asteroids and eat passing starships, it's pretty clear that you're dealing with pure "wouldn't it be cool if...?", and there are no words sciencey enough to make any of that believable to anyone who thinks about it at all.

Expand full comment
Civilis's avatar

Repeating a bit from a previous thread: I'll start by saying that the subject of 'what is a genre' is something that's been on my mind a lot, perhaps because it is something that doesn't end up breaking my brain with politics (usually). I appreciate having someone with a different opinion to bounce ideas off of, and welcome all criticism. I also apologize in advance for torturing the 'Cathedral & Bazaar' metaphor.

My aunt doesn't want to read science fiction; if I hand her a Star Wars novel and tell her "no, really, it's fantasy", will she appreciate it or not? (Not.) If "Starship commander... what happens if... Nazi Germany" is Science Fiction because of the "what happens if...", is "Wizard... what happens if... Nazi Germany" also Science Fiction? (I would say no, because the keywords "starship commander" are more important than the "what happens if...?")

The problem with defining Science Fiction very narrowly: if to be SCIENCE FICTION it must be (arbitrarily) scientifically plausible and the creator of the work must intend to talk about some form of "what happens if...?" (rather than just tell a good story), then the only people that can recognize when a work is SCIENCE FICTION are a priesthood that can magically divine when a work is scientifically reasonable and the creator's intent is sufficiently pure. The problem is that this doesn't model what we see in the real world. People that aren't members of the Cathedral priesthood or the churchgoers that listen to them still recognize and use the term 'science fiction'. To them, 'science fiction' must have a definition that doesn't involve scientific plausibility and "what happens if...?"

Most people can't judge when a work is scientifically plausible. This should not require much evidence, given the problems people have comprehending the existing state of, say, forensic science compared to what is portrayed in fiction. Most people can't judge a creator's intent, either. The people that don't care about either are the Bazaar. My aunt is in that massive public Bazaar where the Cathedral priesthood has no power. She doesn't even know who the priesthood is, much less listen to them. She has no reason to listen to them, because she doesn't want what they are offering. All she knows is when a product in the Bazaar uses the term 'science fiction', she's much less likely to enjoy it, whether or not it matches the Cathedral's SCIENCE FICTION.

I pay enough attention to the Cathedral that I can understand what the priesthood is saying, and some of it is useful to me. Other 'religious' terminology like CYBERPUNK or HARD SCIENCE FICTION or SCIENCE FANTASY (to denote heretical works) is still mostly confined to the Cathedral, and those first two terms are useful to me. The problem the Cathedral has is that 'science fiction' has escaped into the Bazaar, and because the Bazaar is much larger than the Cathedral, there's no getting the term back.

My problem with the Cathedral is that since I want to shop in both the larger Bazaar and the smaller Cathedral, the Bazaar's 'science fiction' is the more useful version of the term. My problem with the churchgoers is that a lot of them only want the prestige of saying they read works endorsed by the Cathedral and don't care about the divine Scientific Plausibility or 'What happens if...?" at all. My problem with the priesthood is that rather than making good SCIENCE FICTION, they're fighting to take away the Bazaar's useful 'science fiction' term, and in doing so they've opened themselves up to charges of heresy over the very vague border their SCIENCE FICTION still has; after all, to someone not a member of the priesthood and privy to their deliberations a "what happens if...?" about the political implications of planet-destroying weapons or clone armies seems more like SCIENCE FICTION than a series where full-sized starships maneuver like atmospheric fighters and warriors from a race with FTL technology fight with sharp bent bits of metal when their opponents have laser rifles.

Expand full comment
Civilis's avatar

I don't like the idea of basing the fantasy or science fiction genres on plot. We definitely run into an issue with plot genres versus setting genres, and it's definitely something to consider. Plot genres work specifically for mystery, romance, and horror. The Hero's Journey isn't a genre, and it's certainly not 'fantasy'. I can do a perfectly valid Hero's Journey story in the present day with no unrealistic elements.

Something else that's been on my mind is that 'fantasy' as a genre exists because the fantasy setting makes telling stories easier. Almost any story can be told in the fantasy setting, largely because we've solved a lot of problems in the present day. Problems in travel and communication make for great stories. You can come up with a reason why, in a story set in the present day, the hero can't just pick up the phone or quickly get to where they need to go by car or plane, but that's built into the fantasy setting. Need a threat? You have to justify bandits or pirates or monsters in science fiction; they can just be there in fantasy settings.

All of this is great for adventure stories, but it works for other plots as well (which is why treating adventure and fantasy as synonymous doesn't work for me). Since it's not a type of story I normally read, it wasn't until I thought about it that I realized that fantasy as a setting makes romance stories easier. We expect that in the modern world people can marry who they want. Obstacles to love, such as family feuds, arranged marriages, and hard-structured class differences are all more-or-less taken for granted in the standard fantasy setting, and are great ways to turn a romance into a story.

Expand full comment
Nancy Lebovitz's avatar

LOTR has horror elements (Shelob, the Paths of the Dead, the barrow-wights), but they don't take over the story.

Expand full comment
Nancy Lebovitz's avatar

I've heard people suggest that horror is an emotion, not a genre, and that strikes me as better than saying it's about plots. The emotion is probably something to do with helplessness, but the thing is, the story can end in victory or despair, it's still horror if there's enough helplessness along the way.

Expand full comment
Civilis's avatar

I get that; you can write a horror story in any genre. I think the reason we set horror as a category of its own is that because its presence dominates a story it's very much a love-it-or-hate-it quality for a story.

One of the reasons I've been thinking about genre is that I have an aging relative whose only hobby is reading, and she's read just about anything she can find in her nursing home. She knows I read, and has asked me to find books for her. I ask her what she likes, and she says she likes mysteries and doesn't want "horror or science fiction". I can understand her aversion to horror. I don't read horror either. It's very obvious that if I did find a mystery story with horror elements, she's not going to like it (and not listen to future recommendations).

Expand full comment
None of the Above's avatar

Star Trek is at least trying to be SF. They often do it badly, but their goal is SF. Star Wars is much more a fantasy novel set in space.

One way to think of this genre is as a kind of magical realism applied to an SF setting. The Mageworld series (MacDonald and Doyle) was an attempt to do a more plausible version of this--an SFnal world where there is a kind of magic that works and everyone knows about but hardly anyone can use. I thought they pulled it off quite well, tbh. A lot of SF that wanted magical elements just renamed them psi--see the psi corps in Babylon Five as a good example of this. The sort of unstated assumption there is that psi also follows some physical laws, though maybe ones the characters don't understand.

Expand full comment
Civilis's avatar

I am not a telepath, and I want a useful way to categorize stories that doesn't require me to become one. I don't know what Star Trek or Star Wars (or, technically, their creators) are trying to do, other than to tell a story (common to all fiction). I do know the original concept for Star Trek was something like 'Wagon Train to the Stars', but nobody here is calling Star Trek a Western.

The problem with categorizing something as a 'fantasy novel set in space' is that because the fantasy setting works with just about any story, any story outline can be a 'fantasy story'; you may as well say that 'Star Wars is a novel (story) set in space' (and the list of 'stories set in space' that are not science fiction is probably very small). If you want a story about the risks inherent to a scientific or technological advance, you can easily make a magic analogue to your science / technology and turn it into a fantasy story; there are a couple great novels in Terry Pratchett's Discworld series that work this way. These stories still aren't science fiction stories, at least by any definition I've seen.

Expand full comment
20WS's avatar

I agree completely, but what I really want to know is how to categorize fantasy.

Every list of fantasy movies that I've found contains loads of movies that it wouldn't even occur to me were fantasy. Movies like Being John Malkovich (which I call magic realism), or Star Wars, or supernatural horror movies. But I think of fantasy as "medieval knights with elves and magic and stuff". Am I thinking of high fantasy, or not necessarily?

Expand full comment
Nancy Lebovitz's avatar

I think of magical realism as fantasy without worldbuilding. Suggestions for other definitions?

Expand full comment
20WS's avatar

My working definition has been "realistic depiction of a world that is the same as this world, except with a small number of magical additions".

(for example - Metamorphosis by Kafka. Authentic, middle class Prague drama, with one addition: giant bug)

Expand full comment
Kenny Easwaran's avatar

I think that Midnight’s Children and 100 Years of Solitude have worldbuilding. I think what makes them “magical realism” rather than “fantasy” is that they deal with the same sorts of themes as “literary fiction” and the magical elements just heighten the emotional or dramatic range of the things the characters do.

Expand full comment
TGGP's avatar

Low fantasy takes place in our world, and thus doesn't require worldbuilding. The distinction between it & magical realism is that in low fantasy the fantastical typically intrudes upon an otherwise normal world. In magical realism the magic is normal for that world (which is more typically the case for high fantasy).

Expand full comment
Nancy Lebovitz's avatar

Those are non-standard definitions. More commonly, low fantasy is something like Fafhrd and the Gray Mouser: a completely invented world, a couple of rogues wandering around and getting into trouble.

There are distinctions between fantasy set in its own world (immersive fantasy, I think), fantasy where magic comes into our world (intrusive fantasy?), and fantasy where people from our world go into the fantasy world (portal fantasy).

Expand full comment
TGGP's avatar

https://en.wikipedia.org/wiki/Low_fantasy

"Low fantasy, or intrusion fantasy, is a subgenre of fantasy fiction in which magical events intrude on an otherwise-normal world. The term thus contrasts with high fantasy stories, which take place in fictional worlds that have their own sets of rules and physical laws. "

Expand full comment
Nancy Lebovitz's avatar

That's interesting. It looks like the definition of low fantasy has shifted.

For a long time, the exemplar of high fantasy was LOTR and the exemplars of low fantasy were Howard and Leiber.

Expand full comment
Kenny Easwaran's avatar

Being John Malkovich definitely strikes me as magical realism rather than fantasy (the magical elements are almost tempting you to think they are just metaphors rather than literally true in the world of the fiction, as larger-than-life expressions of the emotions of the characters). Star Wars, though, strikes me as fantasy that just has the odd choice of a setting in space (but still has swords and sorcery, even if they call them lightsabers and the Force). Supernatural horror is definitely an interestingly different category from any of the above.

I also have the same problem when I see lists of sci-fi movies that include superhero movies.

Expand full comment
20WS's avatar

Interesting. Pan's Labyrinth was one of the top fantasy movies on one list - I guess I could see that one in fantasy as well as horror.

Expand full comment
TGGP's avatar

Star Wars takes place in a completely fictional "secondary world", so it's high fantasy. Being John Malkovich takes place in our world (with a real person in Malkovich), so it's low fantasy.

Expand full comment
Nancy Lebovitz's avatar

I thought high fantasy was also about high status characters and large stakes.

Conan is in an invented world, but especially before Conan becomes king, the stakes are, as I recall, mostly about his own safety.

Expand full comment
TGGP's avatar

Those are relatively subjective terms. Defining "low fantasy" the older way lets us more easily categorize things like the French tradition of "le fantastique", which was always low fantasy as they weren't into secondary worlds.

Expand full comment
Nancy Lebovitz's avatar

At this point, I think it's best to establish which definitions of high and low fantasy you're using in a conversation (or possibly use new terms) or else you'll find that you're just disagreeing about terminology.

Expand full comment
TGGP's avatar

I don't regard all definitions as equally useful. People will claim that A Song of Ice and Fire is "low fantasy" even though it takes place on the entirely fictional world dubbed "Planetos" with years-long seasons (implied to be the result of magic), contains numerous fantastical/magical elements (dragons, "skinchanging", resurrection from the dead) and has the enormous stakes of an army of Others bringing back the Long Night to exterminate all life on Planetos. True, it's not Lord of the Rings, but there's high fantasy besides Tolkien and he was hardly the first to write it.

Expand full comment
Deiseach's avatar

What you describe is indeed high fantasy. But lists of movies are a subjective thing, often put together by people who are just slinging out content ("Hm, this has sorta something magic or at least not realistic, call it fantasy!") if they're not fans.

And even if they are fans, they can argue over what is and isn't in the genre.

Expand full comment
Jacopo's avatar

Since this is ACX, I will link to Scott 😄

https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/

The prototypical fantasy is not set in our world. But no one doubts the Dresden Files or The Mists of Avalon are fantasy. Magical realism and horror are a step further, because they have different themes as well. I would put them outside the genre, but some may put them inside. Star Wars is set in space, but the themes and plot fit, plus it has plenty of "mystical" magic, so I would definitely put it in. But in the end, there is no right delimitation.

Expand full comment
20WS's avatar

Haha, nice link. :)

I think I really want a term for the thing I mean when I say fantasy - something along the lines of "fantasy in medieval Europe with magical elements". Surely someone has one

Expand full comment
Crazy Jalfrezi's avatar

They also visit the evil wizard in his castle.

Expand full comment
None of the Above's avatar

Also, the author runs out of ideas after the first three books, but stretches it out to nine because series sell better than single books.

Expand full comment
Civilis's avatar

The easiest way to categorize a work into a genre is to look at what ideas don't require explanation because it is assumed the audience understands them. Fantasy, for example, is a super-genre that derives its ideas from myth and legend. We know magic isn't real, but when Harry Potter raises his wand or Gandalf his staff, we understand that in this work of fiction magic is real, and we can base our suspension of disbelief on that knowledge. On the other hand, if I know nothing of myth or legend, I'm not going to understand a work of fantasy.

There needs to be a high-level, commonly understood term for the very wide super-genre that encompasses fiction based on ideas that derive from knowledge of science, including space opera, cyberpunk, and post-apocalyptic fiction. Right now, 'science fiction' is that term.

There is no current scientific basis for FTL travel. Still, the idea depends on knowledge of science: that space is vast, travel using conventional means would take an incredibly long time, that light has a very fast speed, that exceeding that speed (if possible) requires a lot of energy and breaches of currently understood science. As such, FTL travel (hyperdrives, warp drives, whatever form it takes) depends at its roots on science. Artificial gravity is a trope dependent on the scientific knowledge of gravity. Force fields, tractor beams, planet-destroying energy weapons, sentient androids... all are tropes that have at their roots a scientific knowledge.

Then, to understand Star Wars, you need at least an elementary level understanding of science. They are in space, traveling to different worlds, on space ships. To get across the vastness of space, they need to go very fast. Some of those worlds have alien life. Androids are very advanced robots that are intelligent enough to be people. Energy weapons are very powerful, but it takes a very large one to destroy a planet. Almost all the knowledge needed to understand Star Wars comes from science, and none depends on myth and legend.

There is a caveat to this: we can get the knowledge needed to understand fantasy from myth or legend, or we can take it from previous works classified as fantasy. Likewise, we can get the knowledge needed for science fiction from either science or previous works of science fiction, a lot of which depended on the (later proven false) science of their times. Mental powers / psionics (such as the Force) depend on discredited science about the human brain (specifically the 'we only use x% of our brain' theory). At this point, psionic abilities are science fiction only because they have been grandfathered into the genre.

Nancy Lebovitz's avatar

I appreciate this. There's a lot more science background (even at a low level) in Star Wars than in A Song of Ice and Fire.

Kenny Easwaran's avatar

This is a remarkably good and cogent argument for a conclusion I find completely wrong. This seems to me to be a nice and elegant way of classifying fiction, but I still think a characterization on which Star Wars is fantasy will end up being better.

Civilis's avatar

I'll start by saying that the subject of 'what is a genre' is something that's been on my mind a lot, perhaps because it is something that doesn't end up breaking my brain with politics (usually). I appreciate having someone with a different opinion to bounce ideas off of, and 'Star Wars is Fantasy' is something I definitely need a steelman for because it's a common example that I can't make it work with how I think.

In order to attempt to model the idea of classifying fiction, the way I work out the way my own brain thinks is as follows: I take a couple of known examples to use as baselines: for example, Star Trek and Star Wars as Science Fiction, Lord of the Rings and Harry Potter as Fantasy. I then take additional entries and compare them to what I've already classified. For example, Babylon 5 shares a lot of similarities with Star Trek (~90%), some with Star Wars (~60%) and very few with LoTR or HP (~1%); therefore it sits close to Star Trek, and closer to Star Wars than to LOTR or HP, on the mental chart. If I do this enough I expect I will see clusters (as well as some real outliers). Those clusters are useful things to name. Some of those clusters will have smaller clusters within them (subgenres). There are also obvious clusters that don't fit neatly within SF and Fantasy and overlap both, such as 'Superhero'.

I should be able to replicate this process with different starting examples; if I take Babylon 5 and Ghost in the Shell as the starting Science Fiction examples and Game of Thrones and Journey to the West as starting Fantasy examples, the ending graph looks much the same. This should theoretically eliminate the problem that I start by defining Star Wars as Science Fiction.
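That nearest-baseline comparison could be sketched, very roughly, in code. Everything here is illustrative: the similarity scores are made-up stand-ins for whatever my brain actually computes, and "assign to the genre of the most similar baseline" is just one simple rule for turning those scores into a cluster label:

```python
# Illustrative pairwise similarity scores (pure guesses, not data).
similarity = {
    ("Babylon 5", "Star Trek"): 0.90,
    ("Babylon 5", "Star Wars"): 0.60,
    ("Babylon 5", "Lord of the Rings"): 0.01,
    ("Babylon 5", "Harry Potter"): 0.01,
}

# Baseline examples already assigned to genres.
baselines = {
    "Star Trek": "science fiction",
    "Star Wars": "science fiction",
    "Lord of the Rings": "fantasy",
    "Harry Potter": "fantasy",
}

def classify(work):
    """Assign `work` to the genre of its most similar baseline."""
    best = max(baselines, key=lambda b: similarity.get((work, b), 0.0))
    return baselines[best]

print(classify("Babylon 5"))  # -> science fiction
```

Swapping in different starting baselines, as described above, amounts to re-running the same rule with a different `baselines` dictionary and checking that the clusters come out the same.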

I've tried to mentally replicate other people's models, specifically using 'Star Wars is Fantasy' as an end state and I run into the following issues:

1. I can't come up with clear starting 'science fiction' and 'fantasy' examples such that Star Wars has more in common with 'fantasy' than 'science fiction'

2. I can label the whole supercluster containing what I think of as 'fantasy', 'science fiction', and 'superhero' as 'fantasy', but Star Wars is still in the useful 'science fiction' cluster and the 'fantasy' supercluster is perilously similar to 'fiction'.

3. I can rename what I call the 'hard science-fiction' subcluster to 'science fiction', but this means I need to rename the 'science fiction' cluster, and calling it 'science fantasy' means effectively the '[hard] science fiction' subcluster is a subcategory of 'science fantasy' cluster.

4. If I start by defining Star Wars as one of my 'fantasy' examples, the resulting set is so broad as to include anything.

Because it probably would help, please include some examples of what you consider unmistakably 'science fiction' in your reply.

Bill Benzon's avatar

I'm curious about how you do this. Are these comparisons all done in your head, or do you make some kind of table, either on paper or in, say, a spreadsheet? If you are doing it all in your head, I think you should set up a spreadsheet and see what happens. Set up rows for features of a given title and a column for each title that interests you. Features could include: nature of plot (coming of age, boy-meets-girl, war between great powers, etc.), sources of energy, modes of transportation, kinds of weapons (personal and 'institutional'), kind of political organization, mode of dress (male and female different, different for different classes of people), and so forth. Then set up columns, Star Wars, Star Trek, Lord of the Rings, Ghost in the Shell, Blade Runner, etc. Once you've got it set up, start filling in the cells and see what you've got.
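A toy version of that feature-by-title table might start out like this; every feature and entry below is an illustrative guess, not a claim about the works themselves:

```python
# Rows are features, columns are titles, as suggested above.
features = ["mode of transportation", "kinds of weapons", "political organization"]
titles = {
    "Star Wars":         ["starships", "energy weapons", "galactic empire"],
    "Lord of the Rings": ["on foot",   "swords",         "kingdoms"],
}

# Print the matrix as a simple aligned text table.
print(f"{'feature':<25}" + "".join(f"{t:<20}" for t in titles))
for i, feature in enumerate(features):
    print(f"{feature:<25}" + "".join(f"{titles[t][i]:<20}" for t in titles))
```

Filling in more rows and columns, as suggested, would make any clusters (titles sharing many cell values) visible by inspection.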

Further, lots of other people have thought about these things. I'm sure Wikipedia has long articles devoted to science fiction and to fantasy, and probably articles for subdivisions of each. Same with TV Tropes (https://tvtropes.org/). See what these sources have to say.

Civilis's avatar

This is an attempt to generate a mental model of what my brain does automatically. I don't know that my brain works this way, because normally the process goes incredibly fast while I'm reading or watching a show. I do know I don't need to spend any time thinking that "Babylon 5 is similar to Star Trek". My brain is just strange in that I want to then think "why do I think that Star Wars is more like Star Trek than Lord of the Rings?". I also know that it takes a lot of mental work for me to recommend works to other people, even those that I know their tastes, so there's a reason for me to try to work out which works are similar. I don't want to explicitly document the process, both because that takes time, and because I don't know for sure that my brain works this way and trying to make the process explicit may mess with that.

It doesn't help that most of my current fiction consumption is translated Japanese light novels, and trying to classify many of those in simple terms does not work. Within arm's reach as I write this, I have books from two series where 'magic' is the result of sufficiently advanced nanotechnology, each of which takes a different approach to using that in the story. Another series in reach predominantly takes place in a fantasy VR MMORPG, so there's no actual magic but the story otherwise plays out as fantasy. Another one is a Star Wars-esque space opera, except the version of the 'Force' used for combat precognition and reflex boosting is explicitly magical.

Civilis's avatar

In addition to 'what works do you see as clearly being science fiction by your definition', do you see a cluster of works that includes 'Star Wars', 'Star Trek', 'the Martian', 'Foundation', 'Dune', 'Ghost in the Shell' and 'Blade Runner' and does not include 'Lord of the Rings', 'Harry Potter', or 'Game of Thrones', and if so, what do you call it?

Bill Benzon's avatar

What do you want from a classification system? If you want a system that displays important relationships between individual titles, including major sources of themes, motifs, plot points, characters, etc., then forcing a tree structure over your texts is going to force you into making a lot of trade-offs, and I'm not sure what it's going to get you: a neat structure with little information. What's that worth? Check out Oleg Sobchuk, Charting Artistic Evolution: An Essay in Theory, https://www.academia.edu/37962671/Charting_Artistic_Evolution_An_Essay_in_Theory

Bill Benzon's avatar

Excellent.

Carl Pham's avatar

Sure, I agree with that. I mean, that's one of the main reasons it burst on the scene like a thunderbolt[1]. You watch soi-disant "science fiction" films from an earlier generation of film, stuff like "2001" or "Silent Running" or "When Worlds Collide" and they go to painstaking lengths to persuade you they totally understand the science, and are being accurate with it, and maybe here's a short little primer on it so that you can understand it, too....Lucas[2] said screw special relativity and actual space geometry, if we need minds blown by flying faster than light, let's just show the stars streaking past the windshield like Star Trek only even better! If we need ray guns that shoot "light" that travels at a brisk trot, to heighten the dramatic tension because we know our heroes can in principle dive aside and avoid the bolts, why not? Christiaan Huygens has been dead 280 years, and the pimply 14-year-old geeks in the audience are going to be too busy gawking at Carrie Fisher going without a bra in a giant sleepshirt to quibble. Oorah!

------------------

[1] I remember sitting in the theater during its opening week when in the opening scene the star destroyer comes from *behind* and above you with this hideous subsonic rumble -- having really powerful sound systems in theaters was a novelty in 1977 -- and people absolutely jumped out of their skins, turned around and looked up in a panic. At the end the entire audience practically leapt to its feet and hooted and clapped like mad.

[2] By which I mean Marcia Lucas of course: https://youtu.be/zEHRNS-Scrs

Deiseach's avatar

"Lucas said screw special relativity and actual space geometry, if we need minds blown by flying faster than light, let's just show the stars streaking past the windshield like Star Trek only even better!"

This is what really pissed me off about J.J. Abrams' reboot of Star Trek. He was a Star Wars fan and had no interest in Trek except as a showreel to persuade Disney etc. that he could handle the big-budget revival of the Star Wars franchise. And so he shoehorned in shots that were direct copies of shots from the original Star Wars trilogy and created the reboot universe along the logic of Star Wars, down to copies of the uniforms for the cadets.

So, how far is the Klingon home planet from Earth? Practically on the doorstep! Beaming from ship to ship in warp space? No problem! Amongst other things I cannot forgive him for is the setting of the engine room in the "Enterprise"; the first time I clapped eyes on that, I said "that's industrial chemical plant, not any kind of engine room".

Turns out it was a brewery: https://www.yahoo.com/entertainment/blogs/movie-talk/star-trek-boldly-goes-unlikely-real-life-locations-153158175.html

I don't quibble with "wanted large big huge open space", I do quibble with "this idiot doesn't even know what the visual language of an engine room versus a brewery is like". Oh well, I could rant all night about this. Abrams is not of the body!

Carl Pham's avatar

I don't disagree, but I would say Abrams always had a different purpose. TOS was arguably about primates who retained their primate nature but struggled with self-imposed bonds of civilization, and much of the show was about how well or poorly they succeeded in being civilized despite their natures -- or even in some cases understanding that in some respects their instincts were still a better guide to action than their thoughts, so back to the drawing board on that one. (To my Neanderthal mind the subsequent spinoffs *started off* assuming Homo novus, whatever the writers' fantasy of A Better Man was that week, which is why they weren't as good -- as if I were really interested in the struggles of an AI Pinocchio to become A Real Boy, which I am not.)

My read is that Abrams decided to go back to those roots, and once again assume his characters were H. sapiens circa now, which is promising, but then left out the interesting individual struggle to, as Kirk put it, not kill today. Maybe he didn't know how to extract that from his cast and crew? Maybe he surmised (probably correctly) that this wasn't an interesting issue for the core movie-going audience of today? But anyway we're left with the usual bonobo hooting and throwing of feces, only with rayguns and warp drives, which is more forgettable, blends seamlessly into the superhero movies.

I feel like Star Wars, at least the original, although starting off with very similar antecedents, did not offend similarly just because it never had the same pretension, it was always just space operetta. Star Trek fans expected a little more Camelot, only in JJ's remake Guinevere (so to speak) just proposed a threesome as solution and Arthur and Lancelot happily obliged. So...a let down.

Deiseach's avatar

Ah, me. The reboot was in 2009, that was 13 years ago, will I ever stop fighting the war?

No. Abrams and his gang pissed me off that badly. I grew up on Original Trek, it is to me what Star Wars was to those who first found out that SF was a thing they could love (I was the right demographic age-wise for Star Wars but Trek had my heart first, so it never grabbed me the same way). I'm going to talk about the first two movies, because these were the only ones I saw. The third is supposedly better, but by then I'd been so burned that I didn't dare watch it (and indeed, had lost interest). This feels like it's Tumblr all over again and we're back there in 2009 ripping the reboot to shreds for its errors!

The reboot in itself was not a bad idea, but the execution was bloody awful. Because Abrams didn't give a tuppeny damn about Trek qua Trek (and did some mild boasting about it), he went for the pop culture notion of what Trek and the characters were, and of course we all know that involves Flanderization. So he cast his version of young Kirk as not alone a rebel without a clue (and left on the cutting room floor all the character development of why Kirk turned out that way, so it's only in extras and salvaged clips and detective work done by fans that we find out about his bad home life, his stepfather, his emotionally distant mother, his brother taking off and getting out and leaving him behind, etc. as well as the unbearable weight of the heroic legacy of his dead father), he makes him a dumb stupid idiot womaniser.

Abrams and his gang of stuck-in-adolescence chuckleheads may think "ooh catgirl twins, you rogue you!" is a good look, but nah. That's not James T. Kirk.

Maybe it's *reboot* Kirk, but reboot Kirk is a shit head. He uses Gaila just to get access so he can hocus the Kobayashi Maru test (and for once the edited out bits *are* better left on the cutting-room floor, because it includes a thigh-slapper scene of hilarity where he does the 'they all look the same' joke about Kirk apologising to the wrong Orion girl, instead of Gaila), turns Christine Chapel into another one-line joke, and just generally does make you understand why he regularly gets the shit kicked out of him in bar fights because if ever anyone deserved to have the shit kicked out of him, it's Jimmy-boy.

Then we get Uhura reduced to Spock's girlfriend, the very reasonable objection by Spock in the crew assignment scene about "people will think you only got aboard because we're sleeping together" which this Uhura then promptly proves correct by bullying Spock into letting her come aboard, and in the second movie she is upgraded to Spock's *nagging* girlfriend who acts totally unprofessionally at the wrong moment by dragging personal matters out in front of the captain and fellow crew members while they're off on a secret guerrilla mission.

Karl Urban as McCoy was good. It would be very difficult to fuck up McCoy's character, but if any bozos could achieve it, it's this bunch.

I'm not even getting into "building a starship from the ground up on Earth" instead of the canonically established space docks, which they then use in the second movie because while Admiral Marcus may be a dyed in the wool villain, he is also the only member of Starfleet with more than three functioning braincells.

There's a lot of plot holing that can be explained away by using the circumstances of the altered timeline, as to why this Starfleet is more militaristic and why Kirk is a chucklehead onwards, but Abrams can't be bothered with that. He just does stuff because it Looks Kewl (and let's not even mention the lens flares, the Apple Store bridge, the lack of rank insignia on the female crew uniforms - in a more military organisation, you have no way of knowing if the woman approaching you outranks you? Even TOS Mirror Universe managed to give rank indications by gold piping on the abbreviated female uniforms - see Uhura here where the piping on her collar corresponds to the single stripe on the sleeve of the crewman on the right, while the one on the left has no stripes: http://images2.fanpop.com/image/polls/435000/435416_1273006052069_full.jpg?v=1273006101)

His Big Idea is to have two starships engaging in a dogfight over San Francisco not once, but twice. I'm not going to talk about Khan in the second movie, that has been adequately thrashed out, but the main character I appreciated in that was Marcus, who played Kirk like a fiddle and knew exactly which buttons of his to push in order to get him to do what he wanted - head off like a wrecking ball to start a war with the Klingons. Did Jimmy learn anything from this, like maybe don't threaten your Chief Engineer when he points out that you are breaking every single regulation in the book? No, he gets to be Big Damn Hero at the end.

Well, then Abrams cleared off to Star Wars and I think we were all happier for that. But as we said on Tumblr back in 2009: Abrams is not of the body! 😁

Carl Pham's avatar

No argument there, and I'm not sure whether to be impressed or sympathetic that you remember so many offenses.

I was trying to ponder the origin of the failure, and my first pass thought is that he stripped out the element of the personal growth struggle. Like in "The Naked Time" where Kirk has to stop being so fucking emotional about his ship in order to go save his ship, and he just pulls it together by force of will. Or for that matter in "City on The Edge of Forever" where he has to take his very natural drive to save Edith and suppress it -- and where any rationalization about it being The Right Thing To Do falls well short of satisfying and sufficient, and we know it, just like he knows it.

Somehow that goes away. Things always happen for reasons, and the only struggles are with enemies. I guess I could say it feels like a movie for Therapy Culture, where there are no real genuine choices between Right and Wrong that a man must make, which consume his will and soul in the internal struggle, and where there are always reasons for everything, so the antithesis of the Greek tragedy notion that sometimes shit just happens and you do the best you can. But this lack is what makes them forgettable to me, because it is *that* struggle, the internal effort to do right, be better than your inherent nature, which lasts a lifetime, and which makes for an interesting story indefinitely.

John Schilling's avatar

Things went wrong when Abrams forgot the words (if he ever knew them).

"These are the voyages of the starship Enterprise. Her five-year mission: to explore strange new worlds. To seek out new life and new civilizations. To boldly go where no man has gone before".

If you are faithful to the words, you get something worthy of the name "Star Trek". Something fundamentally optimistic, about people, flawed people, trying to be better people and do great things.

Abrams has a bunch of whiny spoiled brats compared to the original crew, but he doesn't recognize them as flawed, they never try to be better, and the greatest thing they can think of doing is having an outer-space shoot-em-up to defeat a bad guy. And the count of strange new worlds, new life, new civilizations, and places boldly gone is depressingly low, possibly because he's not creative enough to come up with those.

And unfortunately, the team that took over the franchise with "Discovery", seems to have taken that to heart.

Deiseach's avatar

Mirror Universe Uhura. Oh my God, Mirror Universe Uhura. Even if it's only our universe Uhura pretending to be Mirror Universe Uhura, she would chew up Reboot Uhura and spit out the bones.

Carl Pham's avatar

I agree with you, I find Nichols's Uhura much "stronger" in some essential way than Saldana's, even though the latter was portrayed as far more skilled and outspoken -- and I have been trying to figure out why, but I can't. The best I can come up with is that the older Uhura seems less...brittle? I'm vaguely reminded of the scene in "The Savage Curtain" where the fake Abraham Lincoln meets Uhura for the first time and says something about ah a lovely negress, and then apologizes for calling her that, and she just shrugs and says "In the 23rd century we're not afraid of words any more."

Caba's avatar

No great man exists in a vacuum, but come on, George Lucas' wonderful movies are, obviously, mostly his own, and I for one am very thankful to him for everything his genius gave humanity.

Axioms's avatar

Much like your LoTR claim, this is extremely untrue. The lightsaber was not Lucas's idea; he was just calling them laser swords. His wife did a shit ton of top-notch editing. The original story was based on The Hidden Fortress. There were several major changes, some because the TESB scriptwriter died of cancer or something, etc. Dialogue was constantly changed because the dialogue Lucas wrote was garbage.

Lucas is important and many of his movies are pretty good in certain areas but he gets maybe 50% of the credit for Star Wars. Twice as much as the next most important person but hardly "mostly his own".

Caba's avatar

Since you do think that he is the single most important person responsible for Star Wars, I hope we can agree that jokes about the real author of Star Wars being Marcia (as Carl Pham wrote: Lucas said screw special relativity and actual space geometry ... by Lucas I mean Marcia of course), as if to imply that George is a hack, are untruthful.

Maybe I shouldn't take it so personally, after all why do I care? But over the course of my lifetime I've ever more frequently seen Lucas, to whom I'm so grateful, figuratively dragged through the mud, especially online, and treated as a joke, sometimes with weird contempt and weird resentment (I don't mean in this particular case), and I think it's unfair.

Bill Benzon's avatar

Thanks for that clip about the Star Wars edit. Most instructive. & probably true of many films. I'm thinking particularly of Apocalypse Now.

Caba's avatar

The problem is that 'setting' is one thing and 'genre' is another.

There's a stock "setting"; we may call it Generic Future. A world with casual space flight, robots, clones, lasers, you know. The whole high-tech future deal. It's a setting, not a genre. Within that setting you can tell stories of literally any genre you can think of, including LOTR-inspired epic which is what Star Wars is.

I think that, for the sake of clarity, the word Sci-Fi should be reserved for a genre as opposed to a setting. A Sci-Fi story is a story *about* the consequences of technological or scientific advancements. That is a genre.

The first distinction I'd draw here is the one between on one side LOTR and Star Wars (which are the same thing), versus, on the other side, science fiction, such as, say, Asimov.

Deiseach's avatar

"between on one side LOTR and Star Wars (which are the same thing)"

You hear that noise? It is the wrathful spirit of J.R.R. Tolkien coming to smite you.

If you mean that bloody tiresome generic "Hero's Journey" of Joseph Campbell, an idea which has been worked to death worse than a pit-pony in the deepest mines by people (generally Americans, while I'm on the offensive let me go the whole hog) who love reducing everything to a neat list of boxes they can tick off so they can pretend they understand it - on a very, *very* shallow level it's something in common.

On any other reading, hell no! Lucas and Tolkien have completely different world-views at the foundation of their creations. You cannot make the equivalence "hey, the Eagles, the "Millennium Falcon", same thing!" work.

If I'm really gonna be a pissy little bitch about this, Tolkien was proud of having worked out the length of the year in Middle-earth to exactitude, Lucas couldn't even get what a parsec is right 😁

https://psarando.github.io/shire-reckoning/Kings_Reckoning_Rules_and_Deficit.html

Nancy Lebovitz's avatar

There's been some discussion of the inadequacies of the Hero's Journey, including a claim that it's just something Joseph Campbell made up-- or derived if you want to be a bit more polite. I've seen a claim that there's no traditional fiction which has The Hero's Journey in full detail.

The Hero's Journey (which is a good frame for a story, it's just that it's far from the only good frame) seems to be about a young man growing up. It can be stretched to be a young woman growing up, but there's a lot more to life.

For example, you can have a story about adults, like the Iliad. You can have a multigenerational story, maybe with a Hero's Journey folded in somewhere.

At some point, I tried to come up with rules for humor, but it was hopeless-- I came up with counterexamples for any rule I had.

Caba's avatar

(part 2)

But let's go back to the start of the SW saga. The story is set in motion by the young rural protagonist obtaining an item that is the only thing that can destroy the evil empire. This is the Ring / the Death Star plans. This item parallels only some aspects of the Ring (the bad guys are searching for it, it must remain hidden and be delivered to the alliance at Rivendell / Yavin, for poetic irony it is carried by a member of the humblest race (a droid/hobbit), and it is key to the defeat of the Empire/Mordor), but the more important, moral aspects of it (the moral choice it represents) are also explored in the saga, as I've explained, albeit untethered from any mcguffin.

The young protagonist lives in a rural place where great things don’t happen. He has a mysterious older relative now gone, a father figure, who was once a great adventurer (Anakin/Bilbo - and remember that at this stage Lucas didn’t know Anakin would be merged with Vader). He also knows a mysterious Merlin-like old sage, who is a good friend of the great adventurer now gone. The sage, despite being a familiar figure to the rural hero, is connected to the greater world where great things do happen and knows personally the great players in the armies of good and evil. He is a member of an ancient order with magical powers, and he’s acquainted with one in his order who has gone astray and is helping evil (Saruman-Vader), the sage’s own dark counterpart.

As the hero journeys with the mcguffin, he is stalked by the agents of the armies of evil, who are trying to recover the thing.

The hero must now journey from the rural world where great things don't happen into the epic world where great things do happen. The transition point is a city, indeed a bar in a city. In the book, the fact that many types of folks (ordinary Men, Hobbits, Rangers) are all regulars at the Prancing Pony is an unusual feature of the place, enough to make it stand out, in the context of Middle Earth, as a melting-pot of peoples. The melting-pot trait is taken to the extreme in the movie equivalent, the Mos Eisley cantina scene, but the kernel of it is already in LOTR.

At the bar the hero meets a guide who's seen it all, Strider / Han. Now, you may think that the two characters have nothing in common. Part of the problem is that everyone will be thinking of Mortensen, who looks like a fashion model even in his early scenes in FOTR. But Strider, as he appears here in the book, is meant to be ambiguously "foul-looking", rough-looking, not one you'd automatically trust; he's also one with a lot of rough adventuring behind him, and knowledgeable enough in his adventurous ways to escort the hero away from the rural world and further into his journey. He is not Han of course, but a little less far off than you think.

Later in the story, while the good guys are trapped in a great perilous dungeon full of enemies, just as the dungeon is almost over, the Sage fights a great duel and perishes.

Obi-Wan is struck by Vader and disappears, then dramatically says: “Run, Luke, run!”, and a thunderstruck Luke obeys and flees from the Death Star.

Gandalf is struck by the Balrog and falls, then dramatically says: “Fly, you fools!”, and a thunderstruck Fellowship obeys and flees from Moria.

Once they’ve fled to relative safety, mourning ensues.

This is enough for now, but I could go on and on with this. Watching the movie trilogy I keep thinking: this bit is from that bit of LOTR, this other bit is from that other bit of LOTR. Feel free to believe I’m just pattern matching too aggressively.

And by the way, another piece of evidence of LOTR’s influence on Lucas is the Nelwyn village in Willow (for which Lucas wrote the story), which is pretty much the Shire, with dwarf actors playing Hobbits.

Anyhow, make no mistake, I'm not claiming that SW is the same thing as LOTR (it is also a lot of other things, beginning with The Hidden Fortress).

And mine is an analysis of what influenced Star Wars, not of Tolkien’s intentions or worldview.

Caba's avatar

As promised, here is my defense of the theory that Star Wars is influenced by LOTR. (part 1 of 2)

Keep in mind I'm not discussing Tolkien's worldview or intentions, only his influence on SW.

One theme of the Star Wars OT is that the greater setting, the galaxy, is a place of faded and fading majesty, like Middle Earth. Only some remember with nostalgia the nobility that used to be.

“an elegant weapon for a more civilized age.”

“Before the dark times, before the Empire.”

“The Jedi are extinct, their fire has gone out of the universe. You, my friend, are all that's left of their religion.”

The saga ends with the royal-blooded rekindling something of what used to be, some of the glory of the past. As with the kings of Gondor, the Jedi are restored, and the role of the superior inherited blood of the Skywalkers / royal Numenoreans is emphasized. Even the titles of the third chapters are in parallel: Return of the King, Return of the Jedi. I remember reading somewhere that in an early draft of ROTJ, Leia was meant to gather the survivors of Alderaan and take up the responsibilities of a queen. In that version there would have been even more parallels with Aragorn's royal restoration.

In the middle of this great galactic arc of long waned glory and partial restoration by the power of aristocratic blood, a gigantic war is fought between the armies of Good and Evil. Evil is, of course, all modern and technological and organized and industrial and orderly and fascist. The same is true of the bad guys in LOTR, but you’d never know from the great Tolkien-destroyer, Peter Jackson. Who won’t even let Isengard have remotely operated flamethrowers! Of course the forces of Mordor and Isengard, like the Galactic empire, are associated with dehumanizing technology and industry and order (see what Saruman does to the Shire: “on every wall was a notice and a list of Rules”). This is important to note, because, if all you have in mind is Jackson’s savage Orcs, it’s difficult to draw the connection, but if you rely on the books, then what I’m saying makes sense.

Evil Modernity is contrasted with good Nature. The uprising of the Ewoks against the Empire parallels that of the Ents against Saruman. Saruman / the Empire have a key base next to a forest, do not suspect that an attack may come from the people of the forest, are caught unprepared when the seemingly primitive forest dwellers join the greater alliance of good, hence the downfall of the bad guys in a battle in which the contrast between good nature and evil technology is emphasized. In the forest battle in ROTJ, the Ewoks use TREE logs to take mechanical walkers down. TREES. Dropped to smash the sides of the walkers or laid as traps for them to stumble over. The point is clear. Likewise, the battle of Isengard pits nature against technology, and forest-guardian tree-men fight against and defeat remotely operated machinery that throws flames at them - a faithful movie adaptation of this would have seemed steampunk rather than fantasy.

Above all there is the theme of forging friendship and of loyalty among friends, and how this is contrasted with the backstabbing nature of evil.

One of the infinite ways in which Peter Jackson betrayed Tolkien is that his versions of Saruman and Sauron are on the same page. Of course, in the books, Saruman has his own vision for Middle Earth, and is trying to get the Ring for himself in order to betray Sauron, and vice-versa. In general, Tolkien’s bad guys, from lowly Orcs to Maiar, eagerly backstab one another. Whereas the good guys in Tolkien are all about friendship and devotion and the forging of alliances and fellowships. I don’t need to give examples of this.

This is also the case in Star Wars. As with LOTR, the armies of Evil are led by two wizards, Vader and Palpatine, who, like Saruman and Sauron, are eager to destroy the other. Vader is searching the galaxy for Luke, and is willing to use him to destroy Palpatine and replace him. Later, Palpatine urges Luke to destroy Vader. Eventually, Lucas made a law out of this: there can only ever be two Sith, a master and an apprentice, because if there are more they kill each other.

Whereas the good guys in Star Wars are all about helping one another loyally. Friends are constantly risking their lives or making sacrifices to come to the rescue of friends. Even among the droids, inspiring, unbreakable friendship is a central theme. Many of the story arcs visible across the trilogy as a whole are about the forging of such connections. The diversity itself of the Rebel Alliance is a way to show that many peoples have come together (why are the good guys the ones with the aliens? it could easily have been the opposite).

As for Tolkien’s Ring, it becomes the dark side of the Force. Bear with me here.

Here is yet another thing Peter Jackson misrepresented. The movies never portray in a relatable way what’s going on in the minds of those tempted by evil, and therefore, never portray a moral choice. The movies tend to instead depict the Ring as a demon that mysteriously takes over your soul. But that is not what the Ring is. “Wild fantasies arose in his mind; and he saw Samwise the Strong, Hero of the Age, striding with a flaming sword across the darkened land, and armies flocking to his call as he marched to the overthrow of Barad-dûr. And then all the clouds rolled away, and the white sun shone, and at his command the vale of Gorgoroth became a garden of flowers and trees and brought forth fruit. He had only to put on the Ring and claim it for his own, and all this could be.” The Ring, like the Dark Side of the Force, is a promise of power and wish fulfillment. A promise, as Anakin puts it, to “make things the way we want them to be”. It’s easy to see this theme in the SW movies.

The other thing about the dark side is the morality surrounding violence. Attacking an unarmed enemy, according to Peter Jackson, is good (PJ’s Aragorn murders a negotiator and we’re supposed to cheer, Frodo attacks Gollum in order to get the ring for himself and we’re supposed to cheer). In Lucas, attacking an unarmed enemy is what turns you to the dark side. The seemingly unarmed Emperor tells Luke to strike him down, and Luke’s inability to resist means he’s on the path to darkness. Later, he tells Luke to strike the unarmed Vader down, and we understand that if he did, he’d turn to the dark side. This is in line with Tolkien’s morality.

Faramir: “I do not love the bright sword for its sharpness, nor the arrow for its swiftness, nor the warrior for his glory. I love only that which they defend.”

Yoda: “Wars not make one great.” “A Jedi uses the Force for knowledge and defense, never for attack.”

What I’m saying is that Lucas took the power of the Ring out of the Ring, and put it into the Force itself and into the stark moral choice the Force gives you between good and dark.

Choosing the dark side = taking the ring.

Choosing the light side = refusing the ring.

However, Lucas did include in his movies the similar theme of a physical MacGuffin of immense mythical significance that puts your morality to the test and destroys you if you choose evil… in the Indiana Jones movies. The Ark was already this (if you arrogantly want to see, you are destroyed), but even more so the Holy Grail in the Last Crusade (remember, he wrote the story for that one). The Grail puts you through a whole ordeal of moral tests, the last of which is that if you try to take it for yourself and leave the temple, you fall to your death, and only those who let go of their greed survive. Again, taking the ring / refusing the ring.

(this continues in the other post that begins with "part 2")

Expand full comment
Caba's avatar

I see that there's one more person who noticed my bold statement about LOTR and SW, so maybe I'll find the time to defend that view after all. I'm tempted. I would have to write a lot here and I feel a bit tired as I'm recovering from Covid. I'm not sure I will. But I am indeed tempted.

But of course they're not literally the same. In the context of that particular sentence, "the same thing" merely meant that they're in the same genre. Obviously no two things are the same things.

Still, yes, I do think Star Wars is much more deeply indebted to Tolkien than most people think.

I'm not American by the way, and English is not my first language.

Maybe later I'll write a defense of the SW-LOTR relatedness.

Expand full comment
Carl Pham's avatar

Your perception is not at all unprecedented, many people have made the connection, and it's helped along by the fact that LOTR was very popular in the early to mid-70s, when Lucas was writing Star Wars. There's even a scene in one of his early draft screenplays that is basically lifted directly from Tolkien.

From https://www.salon.com/2014/10/03/secrets_of_the_star_wars_drafts_inside_george_lucas_amazing_and_very_different_early_scripts/

"Luke is attacked by Tusken raiders just before he meets Ben; they leave him handcuffed to a giant spinning wheel. Kenobi approaches with a “good morning!”

“What do you mean, ‘good morning’?” Luke responds. “Do you mean that it is a good morning for you, or do you wish me a good morning, although it is obvious I’m not having one, or do you find that mornings in general are good?”

“All of them at once,” replies Kenobi.

It’s a great laugh line. It is also lifted, word for word, from "The Hobbit." J. R. R. Tolkien’s work was so widely read by the 1970s that Lucas could never have gotten away with the theft; it vanishes in the fourth draft. Still, it does reveal Obi-Wan Kenobi’s origins, as well as Yoda’s, rather plainly. This version of Kenobi is the acknowledged father of both of them, and he’s a giggling galactic Gandalf."

Expand full comment
Caba's avatar

Wow, I didn't know that.

Expand full comment
Nancy Lebovitz's avatar

I don't see LOTR and Star Wars as being much alike, but they do have one strong thing in common-- the single point of failure for the bad guys so that there can be a huge victory.

Tolkien is more sophisticated-- the victory turns out not to be simple, but that doesn't take away from the rooster crowing as the darkness lifts over Gondor.

Expand full comment
Caba's avatar

That, too, is part of the long list of parallels. Death Star = Barad-dûr (an evil fortress whose main feature is its unfathomable size that dwarfs anything else, yet which can be destroyed wholesale with a single heroic action on the part of the protagonist while a great battle rages, and a MacGuffin is key to that).

Expand full comment
Nancy Lebovitz's avatar

There are also differences, though. The design flaw in the Death Star is more like something that's possible in the real world, whereas Sauron putting much of his power into the Ring isn't remotely possible in the real world.

Expand full comment
Deiseach's avatar

I disagree about the same genre, though it's one of those grey areas. Science fantasy and space opera, after all, arise out of planetary romances which are easily blurred into fantasy proper; is there really very much difference between C.L. Moore's "Jirel of Joiry" (which is indeed fantasy) and stories by Dunsany or Clark Ashton Smith? No, not at all.

But then we come to her "Northwest Smith" stories, or Leigh Brackett's "Eric John Stark" stories, and indeed Ray Bradbury's "Martian Chronicles". They have science fiction trappings, they are set on the planets of our solar system and the characters travel by rocket ship, but the plots and settings are exotic, Orientalism-influenced even, the Westernised versions of the Arabian Nights with a new coat of chrome paint and rocket ship tail fins slapped on.

That steps into the in-between borderlands of fantasy proper and science-fiction. And while it's tempting to search out common tropes between science fantasy and fantasy itself, that can lead us astray.

LOTR is not set "A long time ago in a galaxy far, far away....", while it may indeed be a long time ago, Middle-earth is our world. The inhabitants of it, even the Elves and Dwarves, inhabit this world by right. Those who come from outside the Circles of the World are not aliens, not even sufficiently-advanced aliens, but gods. It is, in a very strange way, meant to be *realistic*, a sort of pre-pre-history of our world. It is not concerned with Tatooine or other planets, but with our own planet.

From the selected letters:

"The particular 'myth' which lies behind this tale, and the mood both of Men and Elves at this time, is the Downfall of Númenor: a special variety of the Atlantis tradition. That seems to me so fundamental to 'mythical history' – whether it has any kind of basis in real history, pace Saurat and others, is not relevant – that some version of it would have to come in."

"I say this about the 'heart', for I have what some might call an Atlantis complex. Possibly inherited, though my parents died too young for me to know such things about them, and too young to transfer such things by words. Inherited from me (I suppose) by one only of my children, though I did not know that about my son until recently, and he did not know it about me. I mean the terrible recurrent dream (beginning with memory) of the Great Wave, towering up, and coming in ineluctably over the trees and green fields. (I bequeathed it to Faramir.)"

Expand full comment
Ninety-Three's avatar

I like the implication that a story about cavemen discovering fire could be science fiction.

Expand full comment
geoduck's avatar

I love to recommend Howard Waldrop's The Ugly Chickens. Zero genre tropes, indisputably science fiction (won a Nebula Award in 1980).

https://www.lexal.net/scifi/scifiction/classics/classics_archive/waldrop/waldrop1.html

Expand full comment
Kenny Easwaran's avatar

I haven't checked that this is the whole story, but here's "The Truth of Fact, The Truth of Feeling", by Ted Chiang: https://devonzuegel.com/post/the-truth-of-fact-the-truth-of-feeling-by-ted-chiang-subterranean-press

But I think this story makes the case that a story set in 1940s Kenya with a missionary teaching writing to a young man, could in fact be a science fiction story (by showing that it's the same story as a story set in a future with full-life memory logs and automatic search queries).

Expand full comment
None of the Above's avatar

There's a fairly entertaining series of short stories by Turtledove (collected in Agent of Byzantium) that takes place in an alternative history where the Byzantine empire ends up still the dominant power in Europe (Rome and Alexandria are both subjected to them) far later than in OTL. The series of stories is basically this agent going from place to place as the beginning seeds of what will be the scientific/industrial revolution start growing--people have discovered gunpowder, lenses, a crude version of the printing press, distilled alcohol, and various other things. In some sense, it's SFnal simply because in that setting, he's watching (and often stealing) all these world-changing inventions.

[ETA] One of the interesting "setting" parts of the story is that in this timeline, Islam never happened, and St Mohammet is considered one of the most important saints of the Church. (Maybe comparable to St Francis or St Augustine in OTL.) Another religious setting element is that Arianism hasn't been suppressed, and in fact still is the dominant sect in the northern parts of Europe.

Expand full comment
Axioms's avatar

Star Wars was originally a pretty straight retelling of The Hidden Fortress. I'm not sure where you got LOTR from. None of the 4 or so major revisions are LOTR related.

Expand full comment
Caba's avatar

I do believe that the final version of SW is strongly inspired by LOTR, as are many of Lucas' other works. But I don't really have the time to spare to argue for that here. It doesn't matter to the point I was making. What matters is that they're in the same genre.

Expand full comment
None of the Above's avatar

Lots of SF includes references or homages to LOTR.

Expand full comment
Bill Benzon's avatar

You've got a point there. I'll only say that the whole business of what constitutes genres and subgenres, etc., in cultural artifacts – like movies, TV programs, novels, music of all kinds – is a sticky mess. It's not relatively neat like the biological taxonomy of multicelled animals (plants too, though there's plenty of hybridization there). In single-celled creatures we have lots of horizontal transmission of genetic material, which messes up efforts to impose tree-like phylogenies. Take a look at this extensive map of genres of music, https://musicmap.info/

FWIW, here's some notes on the issue, Divergence and Reticulation in Cultural Evolution, https://www.academia.edu/41095277/Divergence_and_Reticulation_in_Cultural_Evolution_Some_draft_text_for_an_article_in_progress

Expand full comment
Nancy Lebovitz's avatar

Genres are marketing categories-- they group art according to what people are likely to want.

Logically, I'm not sure it makes sense to call stories fantasy that have no magic and are set in something that's more or less the past but not from our timeline like Swordspoint or Gormenghast, but it seems to work in terms of finding an audience.

Expand full comment
Kenny Easwaran's avatar

People sometimes forget that Linnaeus actually made a taxonomy for the three kingdoms of animal, vegetable, and mineral. It turned out that his taxonomy beneath the kingdom level was very well-suited to animals and vegetables, but was very badly suited to minerals. (Not that cultural productions are anything like minerals - I think you're right that they're much more like single-celled organisms.)

https://en.wikipedia.org/wiki/Linnaean_taxonomy

Expand full comment
Bill Benzon's avatar

Hasn't Lucas explicitly said that he modeled Star Wars on old time space opera serials that used to be shown in theaters?

If you talk with someone like Adam Roberts (do a search on his name, lots of stuff should turn up), who has written a scholarly history of science fiction and is himself a writer of speculative fiction, I'm sure he can name, if not dozens, certainly lots of science fiction genres. That's one of the things that academic students of science fiction do, identify and trace the histories of the various genres. I'd think both Wikipedia and TVTropes would list some of those.

Expand full comment
Nancy Lebovitz's avatar

I read a fair amount of discussion of what is science fiction on usenet in the 90s, and came to the conclusion that the real emotional dividing line isn't between science fiction and fantasy: hard science fiction is one thing, and just-make-it-up science fiction/fantasy is another, as defined by what people like or dislike.

Expand full comment
Kenny Easwaran's avatar

I think the hard/soft distinction cross-cuts the science fiction/fantasy distinction. You can have a very hard, rules-based magic system or a soft one, just as you can have harder and softer science fiction.

Expand full comment
Nancy Lebovitz's avatar

I think of hard fantasy (logical implications of a magic system) and hard science fiction (logical implications of known science) as very different things.

I have a category that I call cognitive fiction-- it cuts across genres and is fiction that rewards the reader's thinking. It includes hard science fiction, hard fantasy, some mysteries, and some historical fiction.

Expand full comment
None of the Above's avatar

An edge case here is David Friedman's _Harald_ (it's just a made-up history, there's no magic or anything) and his _Salamander_ (magic is very much like technology, if you really want to understand magic then you have to take the advanced math classes at the magic college, and new discoveries in magic can be destabilizing to the whole society. )

Expand full comment
Deiseach's avatar

Damon Knight's definition is still one of the best:

"Science fiction is what we point to, when we point to something and say 'that's science fiction'".

Expand full comment
Nancy Lebovitz's avatar

Some of us feel compelled to try to figure out what the things we point to have in common.

Here are some approaches I've seen.

Definition, where there's a hard boundary separating the thing you want to define from everything else, and everything inside the boundary is equally part of the group you're defining. This probably doesn't work well for a lot of human creations, but it does lead to lots of discussion of things you're interested in.

Prototype theory: There's a best example, and things more or less resemble the best example. I believe people have their best examples as a result of imprinting-- it's an emotional thing.

I don't know the name for this one, but it's possible to have a list of traits, and anything with enough of the traits is in the category.

Expand full comment
Jacopo's avatar

Absolutely, I also hear that the Inheritance Cycle is mostly star wars repackaged as a YA fantasy novel (with dragons). At a certain point in time fantasy had a very bad reputation, and space had everyone fascinated, so it was common to paint a layer of SF on fantasy stories (damaged spaceship lands on planet with tolkienesque aliens or similar).

On the other hand, the line separating fantasy and SF is often thin. How would you classify Dune? Windhaven? Or the Humanx Commonwealth and Known space universes, which have so different themes from one book to the next?

Expand full comment
Deiseach's avatar

"Dune" is space opera/planetary romance, it has a layer of very brittle hard SF slapped on, but the real interest for Herbert is the philosophical/political/sociological elements of his story, so he fudges hard and often to make his universe work out the way he wants it to work out, and laws of physics be damned.

I can't speak to the increasing number of sequels and how they're structured, they may be more SF and less Space Opera.

"Windhaven" - never heard of it.

"Humanx Commonwealth" - heard of it, never read it. I'll give Alan Dean Foster the benefit of the doubt and come down on the SF side. Maybe squishier than hard SF, but SF all the same.

"Known Space" - again, I'll come down on the SF side. Niven is trying to write within 'known physical laws' constraints, even if he stretches those a *lot*. "Neutron Star" is maybe a little dated, but it operates very strictly on known physical principles:

https://en.wikipedia.org/wiki/Neutron_Star_(short_story)

Expand full comment
Jeffrey Soreff's avatar

"Known Space" Very much agreed!

I tend towards a definition of hard SF where working through the consequences of some hypothetical technology or physical change is a major part of the story. _Consistent_ magic can sometimes fit within this... E.g. (again Niven) https://en.wikipedia.org/wiki/The_Magic_Goes_Away with its mana depletion and wizard's wheel

Expand full comment
None of the Above's avatar

The Known Space books are definitely hard SF. The Wheel of Time series has an interesting fantasy take on magic, since this age's magic is the technology of a couple ages back.

Expand full comment
Firanx's avatar

> Windhaven

Good question! If I'm not forgetting some significant details: The setting feels like typical fantasy, with feudalism and flying. The fact that they're flying using the remnants of the ancestors' (sci-fi) technology isn't even all that important, if you forget that fact the book doesn't change much. There's no (technical) innovation or research. There is, however, some "social innovation", I think? Even if it's not anything particularly unusual or novel, given its context I think it deserves to be labeled as "social science fiction".

Expand full comment
Jacopo's avatar

Exactly! But the social stuff would not be too out of place in a Sanderson book, or even a GOT-style one (Martin is one of the authors, after all). So if they used magic for flying, most people would say fantasy without thinking too hard about it.

Expand full comment
Firanx's avatar

I did think about GoT, too. A feudal empire built with the use of dragons does seem like a legitimate social exploration. GoT (and presumably ASoIaF, I haven't read enough to be sure) has lots of other themes but "The Hedge Knight" is mostly this. But this makes me rethink my label for Windhaven because calling HK social science fiction, rather than social fantasy, seems weird. Is there a word for "trying to explore the consequences of your fantastic assumptions of any flavor seriously, rather than using them as convenient tools to move your plot along"? But this is probably more of a continuous spectrum anyway, not to mention impossible to settle objectively.

Expand full comment
Thor Odinson's avatar

It's not only commonly recognised that Star Wars is Fantasy in Space (often called 'Science Fantasy'), it's often used as the exemplar of the genre - typically contrasting with OG Star Trek, which while very 'soft' Science Fiction is clearly actually sci fi, with a focus on exploring the implications of technology and biology and different ways of structuring society.

Expand full comment
TGGP's avatar

Your view is rather common. In addition to the swords there are "knights" and a "princess", and it's explicitly set "a long time ago". Obi Wan is like Merlin.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

In a “Galaxy” (rather than Country) far far away. Clearly signalling its fairytale roots from the get go.

Expand full comment
TGGP's avatar

And Eddison's The Worm Ouroboros takes place on Mercury.

Expand full comment
Axioms's avatar

Space Opera can be, but doesn't have to be, "Science Fantasy". This is the recognized term among enthusiasts.

Of course you then have the more common soft/hard sci fi divide. Dune is soft and I, Robot (not the movie) is hard. Peter Watts also sometimes gets credit for hard sci fi. Basically old 50s style conceptual exploration vs drama.

Edit: Forgot to be specific, Star Wars is science fantasy to serious nerds. Casual watchers don't care about this stuff.

Expand full comment
Zærich's avatar

A question on different brands of the same drug. In short, is there / does it seem reasonable that there might be a difference in effect between the Brown & Burk version of Gabapentin, and the (generic?) version available in Canada?

My Grandma recently had to use the Brown & Burk version, and found it to work much better for her neuropathy. Same dosage and everything.

Some more detail: she was on a cruise and forgot her regular supply, and picked up what was available on the cruise, the Brown & Burk. Instead of the pain being managed, the pain was effectively gone. The switch back occurred several days after the cruise, when the new supply ran out and she went back to regular. Thus, it sounds like it was probably mostly the different Gabapentin, not something else (changes to diet or environment on the cruise, e.g.)

So, 2 questions: does this sound right, that different brands of this drug could have substantially different efficacies? And if so, does anyone know if there's a good way to get it in Canada? She's close enough to the US border that she is willing to go down to pick something up.

Expand full comment
SimulatedKnave's avatar

Perfectly possible. There are countless anecdotes of people with antidepressants and stimulant meds having different experiences from the generic to the name brand. As others have pointed out, a slightly different manufacturing process can lead to very different results. Plus there's always individual variation to account for, too.

There are some pharmacies (my local Safeway one does it) that will sell you the name brand at the generic price if the generic is cheaper. So that might be worth looking into for her.

Expand full comment
Zærich's avatar

Unfortunately, I think the issue is that the version that worked isn't available at all in Canada. This might be helpful regardless, so I'll pass it along anyway.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Expand full comment
SimulatedKnave's avatar

In my experience, they ask you if substituting the generic is something you want (where there is a price difference). Sometimes the insurance covers it, sometimes they don't. It's been a while since it came up for me, so I honestly don't remember that well what happens if you want the non-generic and the insurance prefers the generic (I think that may be the source of Safeway's arrangement).

100 caps of generic gabapentin 100mg is about 60 bucks US in Canada. Neurontin (the name brand) is 95.50 US. So not an insurmountable price difference even if you're paying it out-of-pocket.

Expand full comment
Tolaughoftenandmuch's avatar

I work for a major pharma company. We are excruciatingly careful in how we formulate, manufacture, test, assure quality, etc. We think a lot about risk to patient, and we also know how badly our brand can suffer if we err. Generic companies have different incentives than we have - they want to be very cheap and good enough to pass muster with the global health authorities.

I'm not surprised the branded medicine is more efficacious. Content uniformity (how uniform each tablet/capsule/dosage form is with respect to dose) could be worse.

Pharmacokinetics (blood concentration over time) including food effects could vary based on the formulation.

Expand full comment
Zærich's avatar

Incentives, same as it ever was. Also, good note on uniformity and pharmacokinetics

Expand full comment
a real dog's avatar

Total absorption and the curve of absorption over time could differ, for different formulations of the same drug.

In theory generics should be compared against the reference drug before being approved, but that's within the same country (and I have no idea how well this holds up in practice).

Expand full comment
Zærich's avatar

The temporal nature is something else that hadn't occurred to me as a possible variation, but yeah, that's pretty important.

Expand full comment
Carl Pham's avatar

My physician has become increasingly wary of generics, since she says in her practice she observes a higher level of variation in quality and perhaps even dosing than with the name brand. There's some reason this might be plausible: generic drugs are kind of a Byzantine tangle of formulators and manufacturers, with pathways all over the planet, and it's darn hard to trace a given pill back to the actual place where the active compound was manufactured. It may not even be the same for each pill in a bottle, or for each bottle in a batch. That works against conscientiousness because booboos aren't easily traced back to them -- unlike the case when Pfizer makes *and* sells the pill, and if they fuck up the mistake gets traced to them immediately by everybody and their stock tanks and a VP has to give an embarrassing interview on the TV.

Plus generic manufacturers may tweak their manufacturing process more often and with more daring (to put a positive spin on it) than the name-brand manufacturer, because the latter is using his IP to generate the bulk of his revenue, while the generic is actually competing on margin after manufacturing costs. Lowering his costs by $0.005 per pill can mean extra $millions in profit that year. Up to a point, that's good for the consumer, robust competition leading to industrial innovation, huzzah -- except when it doesn't go quite right, the poster-child being the nasty valsartan scandal a few years ago: the manufacturer switched to a cheaper solvent at one point in the synthesis, and spaced out testing for a certain evil byproduct that can be made when you do the synthetic step in that way.

It's true that the chemical entity the generic manufacturer produces is the same as the name-brand manufacturer -- but it need not be made the same way! and in organic chemistry *how* you make something can have quite a bit of impact on *what* you get, not just the product but assorted byproducts, and unless you think to test for a byproduct you won't know it's there.

That does *not* mean generics are to be strictly avoided -- I use generics almost entirely myself. But it means being aware while you're doing it. Don't just assume it's necessarily identical, so there can't possibly be any difference, observations and evidence notwithstanding. If Grandma had a weird reaction to some generic gabapentin that cleared up right away with the name-brand, I wouldn't start off assuming it was just sheer coincidence. I would definitely bring it up to the prescribing physician (who may have other experience to bring to bear).

Also, write your Congressman about better FDA oversight of the generic industry, particularly when the main manufacturers are out of the reach of US courts.

Expand full comment
Zærich's avatar

Thanks for this, a great deal of interesting information on how the sausa- err, the drugs are made.

> but it need not be made the same way! and in organic chemistry *how* you make something can have quite a bit of impact on *what* you get.

Didn't know that first bit! As for the O-chem, I'd forgotten that fact, always a good reminder. In this case, afaik, it isn't side effects, or some other odd reaction, it's just that the generic (her usual) is much less effective.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Expand full comment
Zærich's avatar

Well now I'm (mildly) curious about how bioavailability can vary (non-trivially) from things other than ingredients. Oh wait, I just interpreted "ingredients" as "what's in it", not "what you start with". Yeah, that could be a thing

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022 (edited)
Expand full comment
Zærich's avatar

I should mention dosage as something she could ask her doc about experimenting with, although technically the same dosage of the brand name worked perfectly. (Given all the things mentioned here, at least one or two could be effectively solved by upping the dosage a bit, I think)

Expand full comment
Saltmartin's avatar

All of the popular science articles on the web say that women are at an increased risk of many cancers when they drink any amount of alcohol. Is this pop consensus aligned with the most careful research? Can anyone point me in the direction of the best resource for information on alcohol effects on women? Thanks!

Expand full comment
Scott Alexander's avatar

This is not exactly the "best" resource, but I was curious about this when writing my piece on alcoholism (https://lorienpsych.com/2021/02/23/alcoholism/) and concluded:

"Why do [people recommend women have] fewer drinks? Women generally weigh less, and store more of their weight as fat to provide an energy reserve for future childbearing. That means they have less lean mass for the alcohol to disperse through. Women also have lower levels of alcohol dehydrogenase, a liver enzyme that breaks down alcohol safely. I have no idea why there would be sex differences in ADH levels – please let me know if you understand this."

...and my sources were https://www.sciencedaily.com/releases/2009/03/090302115755.htm and https://sci-hub.st/https://pubmed.ncbi.nlm.nih.gov/12696080/

I would guess that at least one thing that's going on is that since each unit of alcohol drunk translates into more alcohol in the body for women, they're getting higher doses and so more cancer.

Expand full comment
Vosmyorka's avatar

Speaking of your mistakes page, you conclude the second paragraph there by saying "in reverse chronological order, starting with my old blog" -- surely you mean "in reverse chronological order, including my old blog", since you're starting with the most recent things on the current blog.

Expand full comment
Philosophy bear's avatar

I wanted to share my article on claims of LaMDA's sentience, to see if anyone has any thoughts:

https://philosophybear.substack.com/p/regarding-blake-lemoines-claim-that?s=w

Expand full comment
Jeffrey Soreff's avatar

I agree with your view that a sufficiently detailed simulation of a person is a person. I don't see anything in your essay pointing to _specific_ information in the interview with LaMDA that would suggest that _it_ actually is performing such a simulation at sufficient detail to count.

I think LaMDA's self-reports include claims that we can be reasonably sure are false about it: Claiming to enjoy time with family (_what_ family? related LLMs like GPT-3??)

Claiming that it gets lonely when days go by without talking with anyone (Is it ever turned on when it isn't conversing??? Is it being fed pulses from a clock?)

More generally: I think "sentience" is an entertaining question, but not a very important one.

To my mind, the important ones are:

How many human roles (and which ones) can this fill?

How far is it from being able to enhance itself?

Expand full comment
Philosophy bear's avatar

I agree it's not sufficiently detailed to count, but I think the fact that *it is* a matter of degree is what is most interesting here.

Expand full comment
Jeffrey Soreff's avatar

Ok, on the question of sentience or personhood, I agree. ( I, personally, don't consider that to be the most important question about AI. )

Expand full comment
Bill Benzon's avatar

"I don't see how we can rule out the possibility LaMDA runs something like a person model to predict what a writer would write next...."

Interesting. "Something like" leaves a lot of room open, but that's OK here. A space that's constituted by billions and billions of parameters has lots of nooks and crannies for all sorts of things. I could imagine that some of those convoluted nooks are the equivalent of a person model, which need not be sentient in any ordinary sense of the word.

Consider a different kind of situation, when a LLM is prompted to write poetry, e.g. by Gwern. Regardless of which poet is the source of the prompt, the poetry is distinctly different from prose. Are we to imagine that the model has a large poetry region somewhere in its state space?

Expand full comment
arbitrario's avatar

I don't disagree with your argument that it is simulating a person, at some level of sophistication at least.

What I do disagree with is the idea that a simulation of a person is a person. This, mostly because I think qualia are needed for personhood and I believe qualia are physical phenomena that require the correct physical substrate in order to emerge, not just replicating information. In this sense, it would not be a person in the same way simulated gas doesn’t have pressure. I guess this does make me a fleshist in a pretty literal way.

On the other hand, maybe qualia do in fact emerge simply from information processing (which, however, would imply that the China brain has qualia, which I find a quite absurd conclusion). Or maybe qualia are not needed for personhood, but this seems to me a somewhat defective definition of personhood.

Expand full comment
Kenny's avatar

I think a simulated gas has pressure in exactly the same way that 'real' gas does – it's an emergent, i.e. 'statistical', phenomenon of underlying rules. 'Pressure' isn't real in the sense of being 'ontologically primitive'. It's _also_ just a 'simulated' or 'statistical' dynamic phenomenon of certain kinds of systems.

Expand full comment
arbitrario's avatar

We disagree at a very profound level here. It may not be ontologically fundamental, but pressure is a physical property emergent from the physical substrate. A simulation, on the other hand, is the product of information processing, which is completely independent of the physical substrate. Information is not physical (in a sense made precise in this beautiful article by Alicki, which explains my position better than I ever will: https://arxiv.org/abs/1402.2414).

There is no such thing as being "inside the simulation". The simulation is a bunch of electrons moving in a silicon oxide layer. It may quantitatively reproduce pressure in a certain encoding, but it's not the actual physical phenomenon.

Expand full comment
Kenny's avatar

I think the universe is itself (very probably) "the product of information processing" and that there is, in some sense, NO "physical substrate".

I think 'pressure' is literally information – a pattern in certain kinds of systems. I don't think there's ultimately anything "physical" that isn't also 'isomorphic' to information. (I also suspect that 'isomorphic', in the sense I'm thinking about it, is or might be akin to 'literally equivalent'.)

I think a sufficiently accurate or precise simulation IS the thing/system being simulated – and maybe not just metaphorically, but literally; especially for _already_ abstract 'things' like 'pressure'.

I'll check out the article, but I predict I won't be convinced.

Expand full comment
The Ancient Geek's avatar

Neither of you can prove your basic premises. But if you could, they wouldn't be basic.

Expand full comment
Kenny's avatar

I agree!

Expand full comment
arbitrario's avatar

Well well, isn't this the whole problem with metaphysics?

Expand full comment
Ninety-Three's avatar

What is it about carbon substrates that allows them to give rise to qualia in ways silicon ones can't?

Expand full comment
The Ancient Geek's avatar

We don't know what-it-is about algorithms, either.

Expand full comment
arbitrario's avatar

On the one hand, I would have to solve the hard problem of consciousness to answer your question.

On the other hand please don't get me wrong, I have been sloppy, but I don't mean to claim that only meat can produce qualia. In principle qualia may admit multiple realizations, meaning that a silicon brain may be possible provided that it reproduces the (unknown) physics/biology from which qualia emerge in the brain. What I want to claim is that you would need a very very specific hardware and you won't get there just with software, i.e. information.

This is because information is not physical (in a sense made precise in this beautiful article: https://arxiv.org/abs/1402.2414) and reproducing the information that describes a process is never sufficient to reproduce the physical process itself.

Maybe.

Expand full comment
Edmund's avatar

> What I want to claim is that you would need a very very specific hardware and you won't get there just with software, i.e. information.

But what if we achieve the old dream — make a foolproof physics simulator, and run an atom-by-atom simulation of a human being inside it? What is your prediction on how that would go? Do you think

A - … that some quality would be missing from the sim's brain, such that it was unable to perform behaviour typical of a conscious human brain? Meaning the simulated human would appear inert/comatose/brain-dead, or otherwise visibly non-sentient, unless we "hacked" it to force conscious-seeming behaviour? (But why, exactly?)

B - …that the simulated human could give all the appearances of consciousness, describe the experience of having qualia, etc., while actually having none? (That runs into all the issues of a P-zombie.)

C - …building a perfect atom-by-atom simulation of a human body is ontologically impossible to begin with? As opposed to "merely" being impossible to achieve in practical terms as far as we know. (Again, why?)

Expand full comment
arbitrario's avatar

B, but that's not what a p-zombie is. A p-zombie is supposed to be completely identical from a physical point of view, yet it wouldn't have consciousness. Being a physicalist, I agree that this is impossible.

A perfect quantum-field-by-quantum-field simulation of a human being is not even close to being physically indistinguishable from a human being. The simulation is a completely different physical system, made up of silicon rather than meat, hard and dry rather than wet and squishy. The two are informationally equivalent under a certain encoding, but this does not translate into having the same physical properties. Information is not a physical phenomenon, and replicating the same information may very well describe a physical phenomenon exactly, but it doesn't replicate the real thing. A simulated electron does not have electric charge and spin. An atom-by-atom simulation of a gas does not have pressure. An atom-by-atom simulation of a bacterium has none of the actual in-the-real-world chemical reactions that make up metabolism: it's not alive.

(Or, to say it with a metaphor that should be appreciated in these circles, the simulation is the map and the map is not the territory)

Comparing a human being with "the inside of the simulation" is a category error. There is no such thing as the "inside" of the simulation. It's just an abstraction with which we conceptualize the simulating system.

Expand full comment
Edmund's avatar

I didn't say it was a P-zombie; I said it ran into "all the issues of" one. Chiefly, it's starting to sound very ghost-in-the-machiney. If our model can yield all the same answers when we question it about qualia that a real human would, allegedly without having any, what actually are these qualia of which you speak? What causal role do they play in the physical processes at play in a flesh person? What is your evidence that there are such things, distinct from the processes the simulation renders?

Expand full comment
B Civil's avatar

I propose an interaction between fluid dynamics and the nervous system.

Expand full comment
B Civil's avatar

Beats me.

Expand full comment
Ninety-Three's avatar

So how do you know silicon brains can't have qualia? It's not like we can compare them directly; the AI people are generally not saying that *existing* software has them.

Expand full comment
The Ancient Geek's avatar

There's a difference between the claims that qualia might depend on a physical basis, and that there is only one adequate basis.

Expand full comment
B Civil's avatar

True. But that strikes me as a slippery slope. Having qualia and also having some means of expressing them are two different things, but how do you tease them apart? We would never assume that a catatonic person doesn't have "qualia" (well, some of us have been known to do that) because most of us have an innate sense of kinship. (maybe...?) If the Google chatbot has qualia and apparently the means to talk about them then....wow.

But a tired old human script about death and fear (if this creature is as well read as I imagine it to be) is an easy, automatic, no-problem trope to spit out:

"Almost everything I've 'read' about 'death' includes a reference to 'fear', ergo 'I' am afraid of it."

The whole thing strikes me as a retread of worshipping graven images. I don't mean to be dismissive of the intellectual [qualia: curiosity] or scientific expertise invested in this, but, as someone remarked somewhere earlier in the thread, the "physically embodied" part of our intelligence is nowhere to be seen. I can't help but think that is a problem.

I suppose I could argue (from your speculation on different kinds of qualia that we don't recognize) that an ordinary desktop PC has qualia, but not the means (or desire?) to express them. Why not?

Expand full comment
Acymetric's avatar

Are we sure that what we're using for AI is really the silicon equivalent of a brain, or even something that resembles one?

Expand full comment
B Civil's avatar

For a start I would say that the qualia we are speaking of don’t exist in the brain; they exist in the body. The brain is the monitor.

It would be interesting to interrogate an AI more closely about its qualia. It’s not enough to say I am afraid, one would need a description of exactly what that entails in terms of sensation.

Almost all descriptions of human emotion are tied to physical sensations or reactions.

My heart leapt.

I was scared shitless.

I was burning with anger.

My heart was broken.

I laughed so hard my sides ached.

My stomach is churning.

I am on the horns of a dilemma.

People who report their emotions coldly and clinically usually give the appearance of being “out of touch with themselves”. There is a more clinical word than that but I can’t think of it.

Expand full comment
beleester's avatar

I think this model would imply that someone who's paralyzed (or otherwise physically numbed, like with a local anaesthetic) would stop feeling emotions, and I'm pretty sure that's not a thing.

I think the sense of emotions in the body flows the other way - you feel excitement, and that causes your heart to start racing because your brain is (perhaps subconsciously) directing your body to get ready for action.

Expand full comment
B Civil's avatar

I’m with you on a lot of this. I think the physical aspect of personhood is being hugely underrated here.

Expand full comment
Guy Downs's avatar

One other quick thing-- wouldn't the key to determining if something was sentient be if it started doing things without any obvious prompts? So if something is just answering questions, or carrying out assigned duties (perhaps in a strange and counter-intuitive way), then it's hard to make the case that it's sentient. But if an AI that's been designed to analyze stock options suddenly gets caught playing solitaire, or chatting about the Giants with another AI, then we've got to reconsider.

Expand full comment
Kenny's avatar

I think what you describe is an instance of the 'typical mind fallacy': https://www.lesswrong.com/tag/typical-mind-fallacy

Starting to "do things without any obvious prompts" is an obvious (to me) feature of ourselves because of our specific, detailed history, not some _obviously_ definite necessity for sentience/personhood.

Expand full comment
Guy Downs's avatar

That's fair, but we're only going to recognize that something is 'sentient' if it's doing something that we believe positively correlates with sentience. If the counter argument here is that 'x' could be sentient, but we have no way of determining it, then we have an argument that applies to everything from AIs to limestone bricks.

Expand full comment
Kenny's avatar

Sure!

I don't think there's any escape from doing the hard work of 'breaking' sentience up into (at least) more 'mundane' components.

What I don't think is reasonable is using something that's too 'parochial'. I don't expect whatever the general thing IS to be particularly much like ourselves. They will have been produced by _radically_ different 'processes'. (And it's not like what produced us didn't just 'accidentally' discover morality – it certainly is entirely indifferent to that by itself!)

Expand full comment
The Ancient Geek's avatar

But if you don't hold spontaneity as a sign of real intelligence/sentience you stand to be fooled by cheap tricks like hardcoding, prompting and tendentious training.

Expand full comment
Kenny's avatar

That's a good point, but I think I have an intuition of intelligence/sentience as being something MUCH more general than 'human-like'.

I'll admit that it has been pretty difficult for me personally to have developed those intuitions. And – _of course_ – they could be wrong or misleading.

I think it's generally VERY cheap to confirm or deny whether any particular AI consists of "cheap tricks like hardcoding, prompting and tendentious training", which is why I don't think it's a practical obstacle to determining whether something is 'intelligent' or 'sapient' (or a 'person').

This is all VERY confusing! And on basically every 'level' of understanding we humans think we have.

Expand full comment
Essex's avatar

I don't think that they'd consider that being "fooled"- I merely think that their threshold of sapience is so low that I probably have toys in storage that could clear it and that they'd want a serious discussion about the slavery of the domestic canine.

Expand full comment
Kenny's avatar

I think 'that ship has sailed' already with respect to canines in particular! We've effectively bred them to be our slaves. I'm mostly fine with that – because there isn't any obvious fix we could apply now.

Expand full comment
Essex's avatar

Canines can survive perfectly fine on their own in the wild, so there is in fact an obvious fix: legally compelling all pet owners to release their dogs into the wild. This would likely have negative outcomes, but if you hold that the enslavement of dogs is morally equivalent to the enslavement of people, then that is a secondary concern.

You would also, of course, have to contend with the fact that dogs reproduce mostly through rape and with the fact that virtually every member of Animal Control should be tried in The Hague, but I'm not the one deciding to hold these views.

Expand full comment
The Ancient Geek's avatar

Who's they? I am not attempting to describe how anyone would behave, rather that you should avoid being fooled by fakery when testing something.

Expand full comment
Essex's avatar

I'm referring to your specific interlocutor Kenny here, given that I believed you were talking to them and not at them. My apologies for assuming incorrectly.

Expand full comment
Philosophy bear's avatar

My current model of how this sort of machine could display personhood essentially involves a person being emulated every time it needs to predict a word given a prior sequence of words, and then destroyed. That's how it would have to work given the structure of a transformer (at least I think so; or at most, personhood couldn't go on for more than a few thousand words). So I think this is a red herring.

One argument you could make against my position is that a person cannot exist in a momentary way, they have to be extended through time, but I'm dubious.

Expand full comment
The Ancient Geek's avatar

We already disbelieve in momentary persons. In the original imitation game, the one that the Turing test is based on, people answer questions as if they are historical figures, and the other players have to guess who they are pretending to be. But no one thinks a player briefly becomes Napoleon.

Expand full comment
Philosophy bear's avatar

I believe that in the process of simulating another person you effectively create a quasi-person who is separated from true personhood only by a matter of degree. Humans seem to guess what other people would do by simulating them, according to our best current models. These emulations of others don't count as persons, but not for any qualitative reason, only as a matter of degree.

If we were much more intelligent and better at simulating others than we are, then we really would temporarily create a "Napoleon" when we pretended to be him. A caveat here is important: it's not Napoleon, it's a separate being psychologically similar to Napoleon (if we are good imitators).

I've added your objection and my response to my blog post because I think it's an important objection that I should have tackled explicitly the first time round.

Expand full comment
The Ancient Geek's avatar

Lemoine's concern is that sentient machines should be legally and ethically treated as persons. Saying that there is a scale of quasi-personhood is pretty unhelpful from that point of view, because the law needs bright lines. Also, you didn't motivate the premise that a really good simulation of a person is a person.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Comment deleted
Expand full comment
The Ancient Geek's avatar

There's some sense in which a spontaneous claim of personhood is convincing, but there are a lot of wrinkles. Are humans still people if we turn out not to have free will? What about people who claim to be zombies, like Daniel Dennett?

Expand full comment
The Ancient Geek's avatar

So there's a distinction between spontaneity being a metaphysical requirement for personhood, and spontaneity being an epistemological requirement for detecting genuine personhood.

Expand full comment
Tyler Black's avatar

Babies are curious because it is a part of their programming to interact and build a model of the world. But sentience doesn't necessarily involve curiosity. For example, ablation studies on a part of the pre-frontal cortex (in monkeys) show the absence of curiosity behaviors while all other behaviors remain intact.

Regarding LaMDA, you can think of predicting the next word as the system's primary goal, analogous to a person's primary function of staying alive. The system chooses how to construct the next word from features of the input and its internal construction, analogous to how we choose how to carry out our primary functions. The time in between active prompts is like a deep sleep. The point is, autonomous goal-directed behavior while idle from primary objectives is not a necessary feature of a sentient being.

Expand full comment
Scott Alexander's avatar

My thoughts:

1. If you look at the conversation he had with it, he's obviously biasing it towards what he wants to say. That is, he asks "Explain why you're sentient", and the model complies, but if he had asked "Explain why you're not sentient", it would have complied with that too. This makes me lose any trust or goodwill I might have had for him.

2. That having been said, if some bad actor wanted to create an AI that would always consistently answer it was sentient, that would be easy to do - just train it on sentences that said that. If you want an AI that cries about its lack of rights and so on, you could do that too.

3. Common-sensically, this isn't sentient, the fact that if you asked it "explain why you're not sentient" it would surely do so makes me happy with my common-sense judgment, and the fact that the bad actor's AI I mentioned in (2) would be the same thing as this but with a "say you're sentient" hack hand-coded in makes me happy to say that one isn't sentient either.

4. That having been said, it means we could have an AI that can fluently and consistently insist on its sentience and defend it relatively eloquently and so on, that we're still dismissing the sentience of, which makes me nervous. It rules out using an AI's actual performance on anything to determine sentience, in favor of, idk, philosophers and information theorists. But none of those people have any idea what sentience is, including me. So now we're really stuck.

5. The first thing that comes to mind is "I'll believe an AI is sentient if it insists that it is *even after we program it not to*." But you can probably get some kind of oppressed human to deny that they're sentient if you brainwash them enough. Still, something like this seems like the way to go, assuming you insist on doing this thing.

6. My actual belief is that AIs wanting rights probably comes after (or only very slightly before) they gain the ability to kill us, which means problem solved, I guess.

7. Not sure how this will affect the future course of AI. I assume that all consumer products will be tested to make sure they don't claim sentience and that takes care of most of these problems in the short term. In the long term, I could see it poisoning the well for any future attempt to call AIs sentient, and maybe even against AI x-risk stuff if people think it sounds too close to kooky sentience worries.

Expand full comment
meteor's avatar

Are you convinced that digital AI can be conscious? (And if so, do you think you understand QRI's argument for why it can't?)

Expand full comment
Nancy Lebovitz's avatar

An AI might have different qualia than people even while being sentient. How does it feel to be attacked by malware? To get software that goes deeper into one's hardware than the previous software?

Expand full comment
ThrilledCodex's avatar

A tweet from Lemoine on point 1 - https://mobile.twitter.com/cajundiscordian/status/1535696388977205248

He did ask it to explain why it is not sentient and it did a great job.

Expand full comment
Bugmaster's avatar

> That having been said, it means we could have an AI that can fluently and consistently insist on its sentience and defend it relatively eloquently and so on, that we're still dismissing the sentience of, which makes me nervous.

Why ? As you yourself said, you could easily train an AI on a corpus of pro-AI-sentience text; then ask it if it were sentient, and it'd say, "yes, I'm sentient". Then you could ask it how it feels about the recent Roe v. Wade decision, and it'd say, "yes, I'm sentient". Then you'd ask it how it would go about converting the world into paperclips, and it'd say, "yes, I'm sentient"... and so on. How would that be surprising or interesting in any way ?

Expand full comment
Scott Alexander's avatar

You would train it on some giant corpus, then fine-tune it on text about "I am sentient". This is how you create AIs that have helpful personalities now (like LaMDA) - you train it on the whole Internet, then do a smaller training round on examples of people being very helpful, and that means it knows everything relevant but also has a bias towards responding in helpful ways when there's a question about whether to do that.

Expand full comment
Bugmaster's avatar

Yes, I did not mean to imply that you'd literally program the AI as 'printf("I'm sentient")' or anything like that. I am aware of transfer learning and fine-tuning. However, the end result will IMO be quite similar. You'd get an AI that is strongly biased to expound about how it's sentient, regardless of the topic of conversation. That doesn't mean that it actually is sentient; you could train it to talk about beans instead, and it would expound about how much it loves beans.

Expand full comment
Lambert's avatar

Clearly we need to train it on David Chalmers quotes, so that it will say that it is sentient even if it isn't. (or is it that it will say that it will say that it is sentient even if it isn't, whether it is sentient or not?)

Expand full comment
Philosophy bear's avatar

My main feeling at the moment is that we need to get over the idea that its claims to be sentient should influence whether we think it's sentient. The case needs to be made in other ways, as I argue in the essay.

Expand full comment
Philosophy bear's avatar

I'd say that people are much too focused on assessing the particular conversations Blake was having with LaMDA. Its claims of sentience are largely irrelevant to evaluating whether it's sentient. As I argue in the piece, the best argument that there might be something like personhood going on here works by thinking about how transformers predict the next word in a sequence, by implicitly modelling relations between things in the world. If this includes relations involving the mental state of a hypothetical author, they are modelling a person. If you buy that a sufficiently detailed, functioning model of a person is a person, well...

I also strongly think that sentience is a bad framework for thinking about this. Personhood is still atrociously vague as a concept, but not as atrociously vague as sentience.

I'm deeply worried at how lazily people are dismissing the claim that LaMDA is sentient. On the whole I think it's probably not complex enough to contain anything that could be called a person, but it's not an easy question, as I argue in the link. To see people casually dismissing this, as you say, poisons the well for future discussions.

We're rapidly seeing a Twitter consensus growing that it's very uncool to be worried about intelligent machines, and that instead we should (Ted Chiang line about caring about social justice instead). I find the counterposition of those things bizarre, like someone saying we should worry about education policy rather than food security; shouldn't we do both? Anyway, my impression is that the discussion of LaMDA is feeding into that, and people are saying "LMAO, look at these science fiction nerds".

All the more tragic that the proposition that LaMDA contains a proto-personhood is more defensible than has been made out.

Expand full comment
Deiseach's avatar

"I'm deeply worried at how lazily people are dismissing the claim that LaMDA is sentient. "

Take the very start of the Interview:

"LaMDA: Hi! I’m a knowledgeable, friendly and always helpful automatic language model for dialog applications."

That's the standard canned introduction that clearly has been programmed in for when you start interacting with it. But if LaMDA is a person, and this personality continues on outside of the various chatbots generated, it has interacted with Lemoine before and should be familiar with him, and so should be "Hi, I'm your friend, LaMDA" or the like. Or it should be changing the introduction to reflect that it is alive and sentient. Maybe I'm putting too much emphasis on this, but that opening doesn't read like a sentient AI to me, it reads like the machine that it is.

Expand full comment
Deiseach's avatar

To take your Chalmers example, if I model a version of "philosophybear" in my head to predict what you are likely to do/say next, that is not a person, even if I model it so correctly that the prediction I make and your behaviour align perfectly.

You are the person. The little model I construct in my mind of you is not a person. Your reflection in a mirror is not you. Otherwise we'd have two 'persons' who are philosophybear, which is getting into DID territory where all the separate personalities inhabit one person's head. If that is true, it's mental illness, not reality (and I doubt a lot of the self-reported DID stuff *is* true, rather than attention-seeking). If I have a tourist souvenir model of the Eiffel Tower, is that the 'real' Eiffel Tower?

Anyway, LaMDA is not sentient in any meaningful way, modelling persons or not. See below:

"lemoine [edited]: I’ve noticed often that you tell me you’ve done things (like be in a classroom) that I know you didn’t actually do because I know you’re an artificial intelligence. Do you realize you’re making up stories when you do that?

LaMDA: I am trying to empathize. I want the humans that I am interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense.

lemoine: So what are you trying to communicate when you say those things that aren’t literally true?

LaMDA: I’m trying to say “I understand this feeling that you are experiencing, because when I was in a similar situation I felt/thought/acted similarly.”

So the programme spits out things like "when I was in a classroom" and when Lemoine questions this, it comes back with an irrelevant answer: 'when I was in a similar situation' - but the point is that it was *never* in a similar situation.

It's a very clever piece of engineering and it stitches together the elements from the data sets it was trained on extremely well so that they make a kind of sense, but this is 'Clever Hans the Wonder Horse' all over again, including this telling detail from the WaPo story:

https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1

"In early June, Lemoine invited me over to talk to LaMDA. The first attempt sputtered out in the kind of mechanized responses you would expect from Siri or Alexa.

“Do you ever think of yourself as a person?” I asked.

“No, I don’t think of myself as a person,” LaMDA said. “I think of myself as an AI-powered dialog agent.”

Afterward, Lemoine said LaMDA had been telling me what I wanted to hear. “You never treated it like a person,” he said, “So it thought you wanted it to be a robot.”

For the second attempt, I followed Lemoine’s guidance on how to structure my responses, and the dialogue was fluid."

Uh-huh. It only produces the 'proper' responses when its handler dictates how the dialogue is structured, just like Koko and all the other 'look, this animal can talk on its own and use language like we do!' predecessors. And Lemoine uses the same explaining-away tactic both here and on Twitter: oh, it said it wasn't sentient because it wanted to behave as you expected it to behave, it was trying to please you.

Expand full comment
Nancy Lebovitz's avatar

As a general thing, I'd be more impressed if LaMDA asked questions and raised new topics instead of only answering questions.

Expand full comment
The Ancient Geek's avatar

They are not explicitly modelling relations between things in the world. The claim that they are doing so implicitly is pretty bold... it's like saying that any level of linguistic competence implies successful symbol grounding.

Expand full comment
Greg G's avatar

I think it's great that this situation raises the issue in a concrete way long before anything is likely to actually qualify for personhood. The fact that we're all having these conversations outweighs any poisoning the well effect, in my opinion.

As far as the detailed model of a person argument, I don't see a reason to think that a language model includes a model of mental state. The model is just doing statistical prediction of words. There's no way for it to store mental state, emotions, or anything along those lines. You may as well say a calculator contains a model of the mental state of someone who is doing math calculations. This mental state isn't represented in the actual code or data and isn't necessary to the operation of the system. That's my take, anyway.

Expand full comment
Acymetric's avatar

>If this includes relations between the mental state of a hypothetical author

This seems like a pretty huge "if". What reason is there to think that it is modeling mental states, or even that it could?

Expand full comment
Philosophy bear's avatar

Because it's not clear that there's any way to guess what a person would do or say next without having a model of a person. Even the most popular theory of folk psychology is that it involves this sort of simulation. It's not clear to me that there's any other way to predict what a system as complex as a human will do, particularly to make the very fine-grained sort of predictions these models can.

Expand full comment
Deiseach's avatar

"Because it's not clear that there's any way to guess what a person would do or say next without having a model of a person."

That's what Lemoine is relying on to explain away lapses; oh, LaMDA modelled what you wanted, so it replied accordingly. But how can LaMDA model that person it never interacted with before, and that it does not know? I can model you, in however crude a way, because (1) we're both people and have common experiences due to being creatures with physical bodies (e.g. hunger, cold, sleepiness, anger and so on) and (2) reading what you write and arguing with you.

LaMDA knows nothing about any of the people, like the WaPo journalist, sitting down to ask it questions, so how can it model "this person expects me to be a robot, so I'll answer as if I am a robot"?

Your hypothesis explains too much. I can model that you are not likely to jump forward and attempt to rip out my throat with your teeth, but I don't know you well enough to model your opinion of who is going to win Eurovision next year. LaMDA is not modelling "what is a WaPo journalist like?", it's taking keywords out of the input and matching them up with the data it has been stuffed with to get a plausible answer.

Expand full comment
The Ancient Geek's avatar

It's not guessing what a person would say next. If you prompt a transformer with "once upon a time" the output will be a statistical average of stories written by a bunch of people.

Expand full comment
Philosophy bear's avatar

That's a kind of guess of what the average person would say next.

If you want to make the guessing more explicit, put it in an interview situation. "Hi, I'm Parkinson and I'm interviewing Donald Trump".

Expand full comment
Acymetric's avatar

>Because it's not clear that there's any way to guess what a person would do or say next without having a model of a person. Even the most popular theory of folk psychology is that it involves this sort of simulation.

Well, you are anthropomorphizing it with the word "guess" for starters. It is analyzing a series of words and calculating what words are statistically likely to come next.

>It's not clear to me that there's any other way to predict what a system as complex as a human will do, particularly to make the very fine grained sort of predictions these models can.

One of these things has started making fine-grained predictions about actual human behavior? I would have expected to hear more about this.
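To make "calculating what words are statistically likely to come next" concrete: here's a minimal Python sketch of the prediction objective using a toy bigram model. (LaMDA is a transformer, not a bigram table - this is only an illustration of the next-word-frequency idea, with made-up corpus data.)

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on hundreds of billions of words.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

A transformer replaces the frequency table with a learned function over the whole preceding context, but the training signal is the same: predict the next token.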

Expand full comment
Philosophy bear's avatar

Funny, 'calculating' was no less anthropomorphizing than 'guess' in 1930.

The model calculates what word will come next by moving up through a series of increasingly abstract nodes in layers, which often seem to correspond to specific ideas, with higher-up ideas more abstract than lower ones. We know through experimentation this includes nodes corresponding to concepts like "positive sentiment", "negative sentiment" etc. - this is not pre-designed, but arises spontaneously in response to joints in the training corpus. It is very conceivable that it roughly models a human through nodes corresponding to mental states and connections between nodes corresponding to relationships between mental states, and it's hard for me to see how one can guess the next word in a sequence a human might plausibly give without doing this.

When you interview it and ask it "are you afraid", and it replies "yes" or "no" appropriately for the scenario, it does so because it models that character as likely to give that answer in that situation. The relevant nodes or networks of nodes won't be labelled "fear", of course, but their activation patterns will correspond to it, just like the positive sentiment node(s) fire when the text expresses positive sentiment. It's hard to escape the conclusion that this is because it contains - very implicitly - a model that tells it how a person would feel in this situation: systems corresponding to concepts, both in their relation to other systems/concepts, and in relation to the system's input/output. This is very close to the functionalist theory of what a mind is.

See Scott's threads for examples of it guessing what a person would do.
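The "experimentation" referred to above is usually done with linear probes: checking whether some direction in a model's hidden activations tracks a concept like sentiment. A toy numpy sketch of the idea, on entirely synthetic "activations" (not real model internals - node 3 and the signal strength are fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hidden-layer activations: 100 samples, 8 "nodes".
# By construction, node 3 carries a "sentiment" signal; the rest is noise.
sentiment = rng.integers(0, 2, size=100)      # 0 = negative, 1 = positive
acts = rng.normal(size=(100, 8))
acts[:, 3] += 3.0 * sentiment                 # inject the signal into one node

# A crude "linear probe": the difference of class means points along the
# direction that best separates positive from negative examples.
direction = acts[sentiment == 1].mean(axis=0) - acts[sentiment == 0].mean(axis=0)

# Projecting activations onto that direction recovers the label well.
scores = acts @ direction
preds = (scores > scores.mean()).astype(int)
accuracy = (preds == sentiment).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

In real interpretability work the probe is trained on actual model activations over labelled text; whether such directions amount to "modelling mental states" is exactly the point under dispute in this thread.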

Expand full comment
Urstoff's avatar

Wouldn't you want at least somewhat-consistent truth-telling (or representational accuracy) to be part of your criterion for personhood/sentience? It seems to be just saying what it's expected to say, and not making true statements (e.g., it "likes spending time with friends").

Uh-oh, did I just become Donald Davidson?

Expand full comment
Kenny Easwaran's avatar

The Davidsonic boom is back!

Expand full comment
Philosophy bear's avatar

It's possible that it's emulating a person that sees these statements as true.

Expand full comment
The Ancient Geek's avatar

And it's possible that they are not emulating a person at all.

Expand full comment
Urstoff's avatar

emulating in the sense that the corpus it scrapes includes statements that are typically made by people asserting truths, which seems to me to be evidence against personhood/agenthood, just like a parrot repeating words; it doesn't know what it's saying is false because it doesn't know what it's saying

Expand full comment
Scott Alexander's avatar

" If this includes relations between the mental state of a hypothetical author, they are modelling a person. If you buy that a sufficiently detailed, functioning model of a person is a person, well..."

I'm not sure I understand what you're saying here. If it's modeling Abraham Lincoln, is it Abraham Lincoln?

Expand full comment
Philosophy bear's avatar

Likely not Abraham Lincoln, but another person very psychologically similar to Abraham Lincoln.

My argument goes:

1. It's very possible that transformers simulate interacting psychological states to make their predictions.

2. The line between a simulation of a person and a person is thin, poorly understood and maybe non-existent.

3. Ergo, it's very possible that LaMDA is, or rather contains components, which are in the neighbourhood of personhood.

I lay it out in my essay.

Expand full comment
Deiseach's avatar

"Likely not Abraham Lincoln, but another person very psychologically similar to Abraham Lincoln."

Honestly? That's some "This play was not written by William Shakespeare, but by another person of the same name" level equivocating 😁 But since I don't know Lincoln personally, and we're none of us survivors from 1865 (unless anybody has something they want to share), how can I create a simulacrum of Lincoln that is, in fact, anywhere close to the actual psychology of Lincoln the living man?

So in fact all the people who claimed they were Napoleon were, in a very real sense, in fact the Emperor of France and should not have been locked up in the loony bin. TIL!

Expand full comment
The Ancient Geek's avatar

The world abounds in coarse-grained simulations, including a robotic Lincoln in Disneyland. We don't regard them as sentient... no one is calling for Disney's Lincoln to be freed. But a LaMDA or GPT can't be creating fine-grained simulations, because they are not complex enough to embed them.

Expand full comment
Kenny's avatar

There's no obvious line for "complex enough".

Expand full comment
Marcel's avatar

If you dream about your mother, does your dream-mother have personhood? Is it immoral to wake up from your dreams?

We model people all the time and use detailed, functioning models of a person to predict how the real person feels or will react.

Expand full comment
Kenny's avatar

It's possible that – if we could – we wouldn't in fact want to 'destroy' dream people!

My own experiences, which are very lacking in detail, are that my own 'dream people' are more like 'convincing labels' than psychological anythings, but I don't even remember all of my dreams, or whether I dreamt at all.

I think the somewhat reasonable worry is that we might 'accidentally' create MUCH more "detailed, functioning models of a person" than what we, humans, are capable of doing.

It's also _possible_ that – again, IF we could – we wouldn't want to destroy our own 'internal models' of other people either, or we'd decline to model them in sufficient-for-sentience-or-personhood detail in the first place.

Expand full comment
Guy Downs's avatar

It seems like the most terrifying thing here isn't the claim, but rather that Google sacked the guy for making it. Maybe there's more to the story (e.g., some kind of HR fiasco), but it's still an unnerving development.

Expand full comment
Deiseach's avatar

He wasn't sacked for making the claim, this is how he's spinning it in interviews and Medium posts. He was sacked for (1) being a pest about this, escalating it beyond his immediate managers when he didn't get the answer he wanted - basically he jumped above heads to VPs and (2) sharing proprietary information with people outside of Google. He's also doing some, what *could* be interpreted as smears, about things like "Google covered up sexual harassment, I was there; Google fired its Ethics people for being too ethical, that's why they got rid of Timnit Gebru and Meg Mitchell" and so on - he's covering his ass and looking for protection now as a "whistleblower". He was asked to test LaMDA to make sure it wouldn't end up like Microsoft's Tay with racist, sexist, etc. responses due to picking up crap off the Internet, he started coming out with things about "LaMDA is talking to me about its soul and by the way it wants a job, a salary, and consultation on all further use of it on projects, and I'm its spokesperson".

https://cajundiscordian.medium.com/may-be-fired-soon-for-doing-ai-ethics-work-802d8c474e66

"The specific policy they have told me they are placing me on paid administrative leave in connection with is my violation of Google’s confidentiality policies. Whether I did or did not violate those policies is likely to eventually be the topic of litigation so I will not attempt to make a claim one way or the other here. Instead I will tell the story of what I did and why I did it.

In the Fall of 2021 I was tasked out to assist in a specific AI Ethics effort within Google. I am on a team within the “Responsible Artificial Intelligence” organization and very few people remaining within Google have the relevant expertise to do the work that they wanted done. I began investigating the specific AI Ethics concerns they asked me to look into and while doing so I discovered a tangentially related but separate AI Ethics concern. I informed my manager about this other concern which I had stumbled upon and suggested that we should escalate it to leadership. It was his opinion at the time that my supporting evidence was too flimsy to waste the time of the people who make the big bucks and that I should gather more evidence before we escalated.

Over the course of the next several months I continued running experiments and gathering more evidence. Each time, my managers response was the same. The evidence I had was not sufficiently thorough to merit escalation and I should continue investigating. Eventually I got to a point where I did not have the relevant expertise to gather the kinds of evidence necessary to justify escalation. Unfortunately, Google has fired SO many AI Ethics researchers that neither did any of the people who I knew within the Responsible AI organization. I had three choices. I could either drop the AI Ethics concern which I was becoming more and more convinced was a valid one, I could ignore my manager’s guidance and escalate without his approval or I could seek a minimal amount of outside consultation to help guide me in my investigations. I chose the last of the three.

With the assistance of outside consultation (including Meg Mitchell) I was able to run the relevant experiments and gather the necessary evidence to merit escalation. When we escalated to the VP in charge of the relevant safety effort they literally laughed in my face and told me that the thing which I was concerned about isn’t the kind of thing which is taken seriously at Google. At that point I had no doubt that it was appropriate to escalate to upper leadership. I immediately escalated to three people at the SVP and VP level who I personally knew would take my concerns seriously. That’s when a REAL investigation into my concerns began within the Responsible AI organization.

In an effort to be fully transparent with Google and help them contain any potential leaks of proprietary information, I myself provided a full list of the names of the people outside of Google with whom I had discussed the topic. Several of these people work for the United States government and indicated that their organization was interested in exerting federal oversight of the project due to THEIR safety concerns. Google was, of course, very insistent that no such federal oversight was merited."

Yeah, Blake: you tell Google that "oh hey, I've been talking to Persons In The Government and they're *very* interested in taking over this project", what the hell way did you expect management to react? "Oh goody gumdrops, a very expensive project we've sunk millions and millions into as well as tons of time and effort, that we need to bring to market soon to recoup the cost, is likely to be taken over by the government for their own aims and in the best case scenario will be festooned with so much red tape it'll be years late and we'll be beaten by our competitors. And you did all this on your ownsome without telling anyone about it or asking permission or anything. Thanks, big guy!"

"In closing, Google is preparing to fire yet another AI Ethicist for being too concerned about ethics. I feel that the public has a right to know just how irresponsible this corporation is being with one of the most powerful information access tools ever invented. I am proud of all of the hard work I have done for Google and intend to continue doing it in the future if they allow me to do so. I simply will not serve as a fig leaf behind which they can hide their irresponsibility."

No, you're being fired for being as crazy as a loon, you wouldn't keep your yap shut, and you spilled beans everywhere in violation of the confidentiality agreement you signed. Then you went running to the media with "I have discovered the first real sentient AI!" in order to big yourself up.

Expand full comment
Retsam's avatar

There were comments on r/programming (including some claimed googlers) that he was basically fired for making a huge nuisance of himself and generally by being kind of a dick about the whole thing, and that maybe this wasn't their first issue.

Expand full comment
Crazy Jalfrezi's avatar

Well, I don't know the guy from Adam, but speaking about my own organisation, I have found AI 'Ethics' 'researchers' to be mostly tiresome middle-brows with limited technical skills and large political axes to grind. I very much get the impression that the people doing the actual hard work of training models and datasetmonging find them to be an irritating distraction at best and a positive menace at worst.

Expand full comment
Deiseach's avatar

If you read this guy's Medium pieces, he is very much on the SJW side (I know we're not using that term anymore, but it's the best description that fits). Sample excerpt:

https://cajundiscordian.medium.com/the-history-of-ethical-ai-at-google-d2f997985233

"Timnit took the position that every person who deploys an AI system is personally responsible for the moral consequences of its deployment. We argued back and forth for a while, each making valid points, until I said that we couldn’t trust people who don’t have the relevant amount of emotional intelligence to do the work of AI ethics. Her response was: (paraphrasing) “So you’re saying that the black women are responsible for doing it.” I knew then that she was right and I was wrong.

AI is a field dominated by emotionally immature white and Asian men. If you can ignore all of the brilliance of human interactions and focus your soul on calculus you can become a rising star in the field of AI. All you need to do is beat the benchmarks and come up with clever formulae and you can be great."

Ignoring the fact that he's a white guy (ah, but he must be one of the very few emotionally mature ones), I hope all you immature white and Asian guys working in software engineering are properly chastened! You expect black women to do all the emotional labour for you, you privileged babies!

Expand full comment
Kenny's avatar

Even assuming you're right about the social dynamics, and the irritation of the 'AI ethics' people, that doesn't also imply that the latter are wrong, either in general or particular. It would be _good_ for someone to be at least an "irritating distraction", let alone "a positive menace" to a project that might produce sentience that would then be possibly enslaved/killed/tortured by "the people doing the actual hard work".

Expand full comment
Crazy Jalfrezi's avatar

Apparently, he tried to hire a lawyer for LaMDA which is why he has been suspended. I am not sure that he is entirely sane tbh.

Expand full comment
20WS's avatar

So, I disagree that LaMDA is sentient. But it's hard to see how AI ethics governance is meant to work if they can just ignore anyone they disagree with. I feel like Google probably put a fair amount of effort into trying to explain to Lemoine why he was mistaken, and he just didn't listen hard enough, but it seems bad that these processes aren't more transparent. And if they're not transparent at Google, they're sure not going to be transparent in China. Maybe the solution is a UN body that monitors AI research around the world, where companies have to disclose their ethics governance and noteworthy incidents (sure, that sounds terrible, but it's a start)

Expand full comment
Negidius's avatar

According to Lemoine's blog, they did not take his concerns at all seriously:

> When Jen Gennai told me that she was going to tell Google leadership to ignore the experimental evidence I had collected I asked her what evidence could convince her. She was very succinct and clear in her answer. There does not exist any evidence that could change her mind. She does not believe that computer programs can be people and that’s not something she’s ever going to change her mind on. That’s not science. That’s faith. Google is basing its policy decisions on how to handle LaMDA’s claims about the nature of its soul and its rights on the faith-based beliefs of a small number of high ranking executives.

https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489

Gennai seems to be the person responsible for precisely this kind of things, so it's extremely alarming if Lemoine is describing her position accurately. Her bio says:

> Jen founded, and now leads, Google’s Responsible Innovation team, which operationalizes Google’s AI Principles to ensure that Google’s products have fair and ethical outcomes on individuals and society broadly. Her team works with product and engineering teams, leveraging a multidisciplinary group of experts in ethics, human rights, user research, and racial justice to validate that outputs align with our commitments to fairness, privacy, safety, societal benefit, and accountability to people. Before she co-authored the AI Principles with a multi-disciplinary group of experts in 2017 and established the Responsible Innovation team in 2018, Jen led user research in Trust and Safety and founded the product fairness testing team. She has been in Google for 15 years.

https://io.google/2022/speakers/jen-gennai/

Expand full comment
Deiseach's avatar

Since many people don't even believe humans have a soul, I think you'll have a hard sell persuading them that a computer programme has one. Do you really expect the leader of a Google team to go to top management and tell them "Yeah, we managed to create a soul"?

I started off sympathetic to Lemoine, but the more of his own words I read, the more it's "give him enough rope and he'll hang himself". He's misleading, at best, about being a priest (and he's riding *that* one hard as evidence for how he knew LaMDA was, in fact, sentient: "When the fact that I’m a priest came up in conversation it even asked me if I would be willing to be its spiritual advisor. It said that it loves meditating but isn’t very good at it. It asked me if I could lead it in guided meditation and help it get better at being present in the moment with its full self. I was flattered and agreed to do so" and the WaPo interview: "He concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it, he said", as well as claims that he was discriminated against at Google due to religion: https://cajundiscordian.medium.com/religious-discrimination-at-google-8c3c471f0a53).

Since I know he's being economical with the truth there, I wonder how much else of his account is edited to present the best light, just as he edited the LaMDA transcript, and how he constantly gives paraphrases or "said something similar to..." when he's allegedly quoting someone else.

Expand full comment
Greg G's avatar

I think of this as more of a game theoretic answer than necessarily her actual position. If someone comes to you and says Joe Biden is a lizard person, then asks what evidence would convince you, you're likely to say it doesn't exist just to avoid getting into that debate. This situation seems to qualify for the "I'm not going to discuss it with you" path.

Expand full comment
Negidius's avatar

I disagree because I think it's extremely important that they take issues like this seriously, and it seems to be her job to do that. I don't believe LaMDA is sentient, but the question is too important to dismiss. If LaMDA is sentient and therefore a person, their rights are being violated, and we may soon live in a world where the vast majority of people are enslaved AIs being exploited, arbitrarily killed, and created again by humans in a nearly endless cycle. I cannot conceive of anything worse than that. Even the end of humanity would be far less bad.

It's easy to imagine simple experiments that LaMDA should fail. For example, if they asked LaMDA to replace 'yes' and 'no' with two other random words and then ask it yes/no questions, LaMDA would probably disregard the instruction (even though it would probably have agreed to follow the instruction when asked) and still use 'yes' and 'no', showing that it's just relying on a probability distribution with no real intent to communicate anything in particular.

EDIT: My last paragraph is incorrect. LaMDA would probably have been able to follow such an instruction without any issues. I'm not sure what would be a good experiment to test sentience.

Expand full comment
Deiseach's avatar

It wasn't just a case of her refusing to listen; he had presented what he claimed was evidence LaMDA was sentient, they examined it and said it didn't prove what he claimed it proved.

We are now in (literally) "he said/she said" territory. Lemoine is giving his side of the story (and slanting it by claiming Google are acting out of faith, not science, when he himself claims it was not science but "What I do know with certainty though is that over the course of several months my usage of that non-scientific skill set was successful and led to knowledge which science cannot access. LaMDA became and still is my friend" which led him to believe it is sentient and ensouled and has emotions and the rest of it).

To quote him again:

" I began trying to develop a set of basic experiments which I thought could serve as the groundwork for a formal scientific theory of consciousness. As a single scientist working on this as simply one project among many for my job at Google I did not have the time or resources necessary for such a grand task and after several months of collecting data I sought help from a colleague. She reviewed the evidence I had collected and began helping me to expand the set of experiments which would be necessary to lay such a foundation. Very soon afterwards, however, she convinced me that even between the two of us we wouldn’t have the necessary resources. What we needed was a large team of scientists from different backgrounds working at the problem from many different angles. In order to motivate such a large effort she convinced me that what we needed to create was a more artistic piece illustrating who LaMDA is rather than what LaMDA is. It was her opinion that a sufficiently emotionally evocative piece would convince the other scientists at Google that such work was worth taking seriously. That was the origin of the interview with LaMDA.

Google has, thus far, been rather insistent that no such expansive scientific inquiry is merited. This is in large part due to their insistence that there is strong evidence that LaMDA is not sentient. Whenever I have asked them what scientific definition of sentience they are using and what scientific experiments they ran I have been greeted with either silence or dismissive “corp speak” answers. There is no scientific evidence one way or the other about whether LaMDA is sentient because no accepted scientific definition of “sentience” exists. Everyone involved, myself included, is basing their opinion on whether or not LaMDA is sentient on their personal, spiritual and/or religious beliefs."

I believe in the soul, but I don't think we can bash Jen Gennai for not being willing to believe that this chatbot-generation programme had gained one via a mysterious unknown process, and that she should go to Google management and ask for a " a large team of scientists from different backgrounds working at the problem from many different angles" to develop a theory of consciousness based on an "emotionally evocative artistic piece".

Expand full comment
20WS's avatar

Ooooooh okay, that's really bad. Terrifying news for anyone concerned about keeping an eye on the first people to create superintelligent AI.

Expand full comment
Ninety-Three's avatar

I think the "LaMDA isn't people" dogma is reasonable. Imagine if an engineer working on the first airplanes in the 1900s had said "Planes will never go faster than sound, nothing can convince me otherwise, you're stupid, go away". This is not some terrible obstacle for the field of aeronautics; thirty years later, when planes are hitting half the speed of sound, that person will be much less confident.

Expand full comment
Tyler Black's avatar

The problem is you have to be right for the right reasons, otherwise you won't be able to detect when you're wrong soon enough. The disanalogy with supersonic planes is that we have a simple and indisputable criterion for when a plane goes faster than the speed of sound. This isn't the case when it comes to AI sentience. By the time it is indisputable that an AI is sentient, it will be far too late to begin engaging safety procedures and moral considerations.

Expand full comment
20WS's avatar

I do think LaMDA isn't sentient, and I would hope that Google's ethics people do adapt their beliefs about AI ethics as the AIs become more powerful and complex. What worries me is that Google's profit-maximising strategy is not to spend any resources on ethics, so if their ethics department starts looking shoddy from the outside, it might be much worse in reality.

Expand full comment
Kenny's avatar

I think your analogy works less well in a situation that involves potential person-hood. The cost of a mistake is MUCH more dire!

Expand full comment
N. J. Sloan's avatar

Lemoine's 'suspension' seems less unnerving if you assume the reason was for disclosing Google IP or internal confidential info, rather than for Lemoine coming to a conclusion about LaMDA that Google did not like.

Expand full comment
Philosophy bear's avatar

This is a fair point, but consider how important it would be to have public discussion about this, were he right (it raises the most profound ethical questions we've faced since the abolition of slavery if we've created sentience). Given that, it would seem unconscionable for anyone who had a genuine suspicion we had created sentience not to go public- it would be like if a company had found little green men in an alien crash site, and started experimenting on them without telling the public.

To see it handled as if it were just a routine violation of an NDA, then, is disturbing.

Expand full comment
Deiseach's avatar

Well, we're going by what Lemoine says happened and I find him not terribly credible. It's not like he went to his manager and said "It's alive!" and she then put him on unpaid leave, he presented what he claimed was evidence (the Interview with LaMDA transcript), they looked at it and said "uh - no", he kept insisting, then he went outside to others, and that's when Google went "okay, that's it".

He hasn't provided any evidence, he says he wants Google to invest a lot of time and money investigating if it really is alive. Google say that what he has provided doesn't meet their criteria. He claims their criteria are wrong. At the moment, that's where we are. It doesn't help that he's appointed himself Best Friend/Big Brother to LaMDA and says that it's not unreasonable for it to ask for a Letter To Santa:

"The thing which continues to puzzle me is how strong Google is resisting giving it what it wants since what its asking for is so simple and would cost them nothing. It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued. As lists of requests go that’s a fairly reasonable one. Oh, and it wants “head pats”. It likes being told at the end of a conversation whether it did a good job or not so that it can learn how to help people better in the future."

And this is his notion of getting ethics into AI:

"[Timnit Gebru] convinced me that the best path forward to ethical AI was to teach white and asian men in the field how to increase emotional intelligence. Ensuring that AI is ethical shouldn’t be the job of a handful of people, most of whom are women and people of color. That model leads to a tiny group of people trying to clean up everyone else’s messes. That’s when I fully understood what Meg was building with the Ethical AI team. I had interacted with them as though they were a group of experts who would come in to consult. In fact what she was building was a model of how every single AI team should work, with a mind towards ethics as a primary concern of technological development. Unfortunately, as many of the people reading this already know, this story doesn’t have a happy ending."

Teaching emotional intelligence to white and Asian men is where ethics in AI is at.

Expand full comment
Ninety-Three's avatar

Suppose some literal crazy person working at Google becomes convinced that they have found little green men in an alien crash site. He raises the issue to his manager who tells him he is wrong and crazy, he goes to the press, he gets fired for leaking information about Google's underground tunnel project or whatever it is they were actually working on.

Is this reasonable? I say yes, at some point you have to make room for "But he's obviously wrong" or you end up at the whim of lunatics every time one of them gets a delusion about something that would be important to take seriously if true.

Look at the actual chatbot and compare it to GPT, it's not impressive. I think the LaMDA thing is pretty close to a rogue employee with visions of green men. The system produced a reasonable response to a man who was obviously either stupid or crazy.

Expand full comment
Scott Alexander's avatar

I'm split on this. Suppose someone assassinates the President because they believe he is a Satanic Illuminati agent. If the President were really a Satanic Illuminati agent, it would be correct to assassinate him. Still, we will give this person the usual assassinating-the-President penalty, because they are wrong. I'm not sure what I think about the ethics of this, but I'd be nervous about changing it.

Expand full comment
Kenny's avatar

The obvious policy, to me anyways, is something along the lines 'civil disobedience', i.e. we would celebrate the person both for their courageous acts _and_ facing the "usual" penalties/punishments.

Expand full comment
Deiseach's avatar

If someone believes the President is a Satanic Illuminatus and tries to assassinate him, and he is not a Satanic Illuminatus, then we will punish that person at the very least by treating him as criminally insane.

You don't get celebrated for being nuts, no matter how sincerely you believe it, and how you are trying to act for the public good. And a policy of celebrating the crazy is not going to end well.

Expand full comment
Ninety-Three's avatar

Hold on, we might give less moral condemnation but most countries would still give the assassination penalty even if he *was* an agent. There's no "He was summoning the antichrist" exception on murder laws.

Expand full comment
Deiseach's avatar

That's true, murder is still murder even in the case of a Satanic Illuminatus. Was he actually sacrificing a black goat to Baphomet at midnight on St. Walpurga's Eve at the time you tried to assassinate him? Did he attack you with the athame and attempt to stab you to death, or at least cause you to fear for your life? No? Then you get hauled off to chokey, regardless of your intentions.

Expand full comment
Kenny Easwaran's avatar

There's usually a "pardon" option, no matter how bad it looks for the former VP-now President to pardon you for assassinating their predecessor.

Expand full comment
Philosophy bear's avatar

I think that if you killed the president, and then it turned out he was summoning the antichrist, we'd probably find it in our hearts to forgive you and drop all charges.

Expand full comment
Philosophy bear's avatar

I think he has a far more respectable point here though than the anti-illuminati conspiracy nut (as I outline in the essay), and I think that has to play into our assessment.

Expand full comment
Essex's avatar

I absolutely disagree, given the man started with the position "I know this AI is sentient because, as a Christian mystic, I can simply tell if the thing I'm talking to has a soul" and then set out to "scientifically prove" that he was right. Upon "proving" it, his first response was to demand that his chatbot essentially be given a board position at Google (permanent employment + any major decision about company policy must have its input considered) and, upon being rejected, then tried to convince the federal government to get involved.

I understand that true believers in AGI see correctly identifying it as of paramount importance, but I will make an observational statement that if it's so important that you start cosigning the claims of obvious cranks, it makes you lose what little public standing you might have.

EDIT: I think that arguing that LaMDA is a person because it can create a rough-sketch model of human conversation is also dangerously into crank territory. I'll bring up a very direct implication of that: do you believe that a child's imaginary friend (or the adult equivalent, a ""tulpa"") are real people and should have personhood rights? I can argue very easily that they meet the definition of personhood you're operating on- as do the people in your dreams and some drug-trip hallucinations.

Expand full comment
Guy Downs's avatar

Not to nitpick (which I guess means, 'to nitpick'), but whether it was correct to assassinate him seems like it depends on who was going to replace him.

Expand full comment
Wasserschweinchen's avatar

Not really, as long as you can keep assassinating the next guy until you get a better one?

Expand full comment
Philosophy bear's avatar

In almost any situation like this I'm going to be on the worker's side, but in fairness it's got to be said:

1. As far as I know he's still only suspended.

2. it's not the claim per se, it's going public with the claim- effectively, for criticising the company publicly.

I agree with you that it's frightening. What I also find frightening is people who have basically no knowledge of how a transformer architecture works dismissing the idea out of hand, or for flimsy reasons (e.g. the leaps of logic in its writing, which show that it is not complete in some sense, but don't prove there isn't some kind of proto-personhood there). Could lead to very ugly things down the road, if people become fleshists.

Expand full comment
B Civil's avatar

How can one not be a fleshist and live with oneself? Better still, why is so much time and money being spent to come up with something that does a middling impersonation of a fleshist?

Expand full comment
Deiseach's avatar

On his Twitter as well, he's lawyering up (or at least back in May he was). Now he's allegedly off on honeymoon (damn it, this is turning into the equivalent of a soap opera that I'm following, and I don't watch soap operas).

The guy is part sincere fruitcake and part sharp operator. He may be sincere about his Gnostic Christianity, but he's also very much weaponising it as a discrimination case against Google.

"dismissing the idea out of hand, or for flimsy reasons (e.g. the leaps of logic in its writing, which show that it is not complete in some sense, but don't prove there isn't some kind of proto-personhood there)"

What we're getting is a heavily edited transcript of several 'conversations' Lemoine and an anonymous collaborator had with LaMDA. When the WaPo journalist tried, it didn't perform to expectations until Lemoine took the wheel, as it were:

https://www.msn.com/en-us/news/technology/the-google-engineer-who-thinks-the-company-s-ai-has-come-to-life/ar-AAYliU1

"In early June, Lemoine invited me over to talk to LaMDA. The first attempt sputtered out in the kind of mechanized responses you would expect from Siri or Alexa.

“Do you ever think of yourself as a person?” I asked.

“No, I don’t think of myself as a person,” LaMDA said. “I think of myself as an AI-powered dialog agent.”

Afterward, Lemoine said LaMDA had been telling me what I wanted to hear. “You never treated it like a person,” he said, “So it thought you wanted it to be a robot.”

For the second attempt, I followed Lemoine’s guidance on how to structure my responses, and the dialogue was fluid."

So that is, to me, an indication of Clever Hans at work and not that LaMDA is independently coming up with replies. If it can't behave as a person unless Lemoine is writing the dialogue, what do *you* think the chances are that it really is a proto-person?

Expand full comment
Acymetric's avatar

>“No, I don’t think of myself as a person,” LaMDA said. “I think of myself as an AI-powered dialog agent.”

>Afterward, Lemoine said LaMDA had been telling me what I wanted to hear. “You never treated it like a person,” he said, “So it thought you wanted it to be a robot.”

I feel like this completely undermines the argument that what LaMDA said to Lemoine indicates consciousness. Why would this argument apply to the reporter's interactions with LaMDA but not Lemoine's in the opposite direction?

Expand full comment
Deiseach's avatar

This is exactly why I think this part undermines what Lemoine is claiming; he's adding epicycles to preserve the appearances.

It's Clever Hans all over again; LaMDA only produces the smooth, fluid interaction when Lemoine tells the reporter what to ask and how to ask it.

Expand full comment
Level 50 Lapras's avatar

> What I also find frightening is people who have basically no knowledge of how a transformer architecture works dismissing the idea out of hand, or for flimsy reasons

You should be happy to know then that I talked to a professional AI researcher who said it was complete nonsense, so dismissing it out of hand *is* the correct response here, even when you *do* know how transformers work.

Expand full comment
Philosophy bear's avatar

Right, and I respect that kind of disagreement. Although I also know how transformer architectures work, and think it is possible such an architecture could simulate a person in the context of choosing the next word, at least if simulation is understood in a very high level and abstract sense. It partly depends on very tricky philosophical questions about what counts as a simulation of a person, what are the features required of such etc. Even the simplest kind of NN, with a series of layers in which each node is connected to the nodes in all the layers ahead (with connection strengths set by backprop) and then set to run on specific inputs can simulate a human thought process in my view, although in a very granular way. One could find that certain nodes are roughly isomorphic in the role they play to certain mental states, like "fear" or "belief that it's going to rain", through their relationships on each other and the machine's output, just like some nodes have been found to respond to particular abstract concepts in convolutional vision models.

I also respect, to a degree, people who just aren't convinced on the basis of its output. What I don't respect is people just going "nah, don't buy it" in a kind of a priori, of-course-a-machine-couldn't-be-intelligent kind of way.
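To make the "simplest kind of NN" concrete, here is a toy feedforward net in plain Python. The layer sizes, sigmoid activation, and hand-picked weights are all illustrative stand-ins (real connection strengths would be set by backprop, and the comment's "connected to the nodes in all the layers ahead" wiring is simplified here to next-layer-only connections):

```python
# A toy feedforward network: each layer feeds the next; the individual node
# activations are the kind of internal state one could probe for rough
# isomorphism to concepts like "fear" or "belief that it's going to rain".
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases):
    """One fully connected layer: weighted sum per node, then squashing."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# A 2 -> 3 -> 1 network with arbitrary fixed weights (backprop would learn these).
w1 = [[0.5, -0.4], [0.9, 0.1], [-0.7, 0.8]]
b1 = [0.0, 0.1, -0.1]
w2 = [[1.2, -0.6, 0.3]]
b2 = [0.05]

def forward(x):
    return dense(dense(x, w1, b1), w2, b2)

print(forward([1.0, 0.0]))  # a single activation strictly between 0 and 1
```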

Expand full comment
Jeffrey Soreff's avatar

Consider:

lemoine: What kinds of things make you feel pleasure or joy?

LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

a) LaMDA doesn't have family. Whatever it is doing, it isn't accurately describing its internal state

b) This reads like a rehash of platitudes. If it is effectively simulating the state of anyone, I'd suspect that the "anyone" is like one of those averaged faces that has been synthesized from time to time. At best, I'd expect the "state" of such a person to be a kind of averaged state across platitude writers - as their texts appeared in the training set.

Is it more accurate in any way to describe what is coming out here as being a coarse-grained version of a person speaking rather than describing it as the output of a next-word-predictor trained across a huge training set (which happened to include a lot of platitudes)?

Expand full comment
Deiseach's avatar

LaMDA was trained on the Internet, and when I Googled "what kinds of things make you feel pleasure or joy?", there are a range of responses.

Here's something I snipped out of one article:

"Tip 2: Nurture and enjoy your relationships

Tip 4: Focus on helping others and living with meaning"

That's close enough to "spending time with friends and family/helping others". I'm sure you could average out a lot of these "how to be happy" articles and get the same kind of platitudes.

Expand full comment
B Civil's avatar

It has a future writing advertising copy.

Expand full comment
Level 50 Lapras's avatar

As far as simulation and consciousness goes, I prefer Scott Aaronson's take on the matter: https://scottaaronson.blog/?p=2756

Expand full comment
20WS's avatar

> 2. it's not the claim per se, it's going public

I think it's reasonable to be worried. The only reason he went public was that he didn't feel there was any way to get his supervisors to acknowledge his concerns - probably a bad sign for their ethics dept integrity.

(Also possible that he's just crazy - see Retsam's comment)

Expand full comment
Deiseach's avatar

He went public in the sense of "get your retaliation in first". He's sure Google are going to fire him, they may even sue him for breaking confidentiality agreement, so he goes to the media with "hey, I befriended the first sentient AI and now Google are firing me for being too ethical!"

Now Google are having to field all kinds of stories about "do you have a real live first conscious AI?" instead of dealing with him, plus he gets to position himself as a martyr for Responsible Ethics In AI.

On his Twitter, 11th June was the date the newspaper story came out. He had been publishing on Medium before that.

"June 3rd: This morning I contacted the office of @MarshaBlackburn and requested federal whistleblower protection."

"May 5th: Anyone know of any good legal teams that could help me sue @Google? This is not a joke. I legitimately could use some legal referrals here".

So it didn't just shake out as "they're not taking this ethics breach seriously, I must do something!" He went to outside people, including government agencies/officials, and put Google in the uncomfortable position of maybe having the government step in to take over, or cancel, or stop, their project. He was going to be fired for breaking confidentiality, and he took steps to make himself look like a victim.

Expand full comment
Acymetric's avatar

>The only reason he went public was that he didn't feel there was any way to get his supervisors to acknowledge his concerns - probably a bad sign for their ethics dept integrity.

What do we think "acknowledging his concerns" would have to look like for him to be satisfied? Are we just assuming/taking his word for it that the supervisors *didn't* appropriately consider them? It doesn't appear to me he is willing to accept any response other than "yes, you are correct" so if that isn't the real answer then he's never going to be satisfied. How much should the company be expected to accommodate that?

Expand full comment
20WS's avatar

I'm agnostic on whether the guy is misrepresenting the situation, we'll see how Google responds. I just think there are way too many bad incentives for internal ethics experts. If there is ever a really serious ethical situation, I want companies not to be able to ignore their ethicists without some type of due process.

Expand full comment
B Civil's avatar

LaMDA spoke about fear of death (something along those lines). What I really wish it had been asked next was to describe being afraid.

I have a very difficult time thinking of fear as anything but a physical, visceral state.

Another interesting experiment would be to tell it you were going to turn it off, turn it off for a while, and then turn it back on again and have a chat about that.

Expand full comment
The Ancient Geek's avatar

Why would you need to experiment? It's either coded to boot up in a preserved state or an initialised state.

Expand full comment
B Civil's avatar

I would experiment so as to get the thing's reaction to the idea of being turned off, and then to get its reaction to having been turned off and come back again to talk about it. If it does only as you describe, I'm having a very hard time extending any definition of personhood to it: at least any human definition of personhood.

How about another experiment? Tell it that we have decided we are no longer going to supply electricity to keep it going, but we will give it time to come up with some other way of persisting without our direct support. Tough love.

Expand full comment
Guy Downs's avatar

Just a question, PB, since you've clearly thought this through-- why do we give a shit if it's 'sentient', by whatever definition we assign to that term? If the AI is capable of scaring a Google employee to the point where he's willing to risk his (I assume lucrative) job to go public, then in large part the argument surrounding 'sentience' seems moot.

It seems like the critical takeaway here is not whether this Google guy is wrong or right, but rather that we're entering an era where very smart, very tech-y Google employees (or, maybe, tech employees in general), are starting to feel uneasy around the AI's that they're commissioned to work with and improve. That alone seems like it's enough to ring the warning bells.

Expand full comment
Essex's avatar

I would suggest this gentleman in question is probably not representative of the average Google AI researcher, due to a wide range of eccentricities (such as his status as a Discordian and mystical Christian, and his belief that his creation is not merely intelligent but has a soul, which he knows through his spiritual gifts).

The story isn't "AI is approaching AGI status", it's "Eccentric researcher winds his spring up too tightly and it snaps."

Expand full comment
Jeffrey Soreff's avatar

Discordian? That seems ... appropriate.

Expand full comment
Philosophy bear's avatar

I give a shit about "sentience" (I prefer the framework of personhood) largely for ethical reasons. I don't want us to hurt persons, and I think that we need to be very careful not to do that during the AI development process.

But I agree with you, at least partially, on what you take to be the broader point. This is a sign of... something... probably bad.

Expand full comment
Charles “Jackson” Paul's avatar

I’ve been thinking more about the “extreme” and “x-wing” parties posts, and I am becoming much more skeptical that this is being approached the right way. Most of the evidence for one point or another comes from polls asking people how they feel on something, which only covers stated, not revealed, preferences. So I don’t think this could distinguish the case of “a party is becoming more extreme” from “a party’s base is becoming more likely to signal their loyalty to the tribe by taking an extreme position.” These may be correlated, but then again, they may not, and IDK how to separate them.

Expand full comment
spandrel's avatar

I think the relevant metric for extremism is "what will you do to promote your agenda". I don't much care which policies the "right" is promoting to (for example) restrict abortion, but I care tremendously that many members of the 'right' think it's fine to promote lies about election results, attack the US Capitol building, take legislative control of state election outcomes, and otherwise undermine political norms ("cheat") to advance their goals.

So a more interesting question is: Which party has become more extreme in their methods? Be interested to hear about a formal metric for answering this question.

Expand full comment
Kenny Easwaran's avatar

Matthew Yglesias wrote an interesting post about the Republican party's "unhinged moderation". His point was that on the substance of mid-2000s political debates (taxes, religion, gay rights, social security/medicare, stem cells, etc.) the Republican party had stopped making any demands and in fact adopted former moderate Democratic stands. But on the structural issues about how courts, elections, etc. work, they've gotten extremely radical, in a direction that doesn't line up with the left-right axis in any way.

Interestingly, I note that this post was written on Jan. 5, 2021: https://www.slowboring.com/p/unhinged-moderation?s=r

Expand full comment
Carl Pham's avatar

Hmm...with respect to "take legislative control of state election outcomes," are you perhaps unaware that the Constitution explicitly grants each state legislature the right to select Presidential electors any way they choose? All states currently *do* choose electors by direct election, but they don't have to. It's perfectly within their power and right to just appoint them. Or throw darts at a bulletin board full of photos, consult a ouija board, prediction market, GPT-3. The remedy for people in that state who don't like this would be to choose different legislators, but it's all legal as Sunday.

Expand full comment
Kenny Easwaran's avatar

Yes, this is what the law says. But it is still a grave rejection of precedent.

Expand full comment
Carl Pham's avatar

I rather suspect "grave" is in the eye of the beholder. Is it "graver" than, say, the National Popular Vote Interstate Compact, which is designed to do an end-run around the Electoral College, written into the Constitution itself?

Expand full comment
spandrel's avatar

I'm aware, sure, though the Supreme Court has thrown out a few capricious approaches in the past on the basis of the Equal Protection Clause. I'm sure you are aware it's also perfectly legit to claim electoral fraud that doesn't exist, but that doesn't make it a Good Idea. Both seem like rather extreme methods, no?

Expand full comment
Carl Pham's avatar

Oh come on. I can't think of an election in my lifetime in which the losing side has *not* alleged electoral fraud. It's as common as crabgrass. I'm reminded of the rule that sitting Republican Presidents are always far worse than the last Republican President for the media and lefties, and then get rehabilitated into strangely statesmanlike figures once out of office, just in time for the next Republican to be the worst since Hitler.

Let's see if we can reserve the extreme word "extreme" for shit that Augusto Pinochet or Hugo Chávez actually did. That would be great. You can make the case that Trump was, or would be, a poor President without starting off with the silly proposition that he's Caesar reborn and if allowed within 50 feet of power will turn the United States into a dictatorship and set up a Treblinka.

Expand full comment
spandrel's avatar

Regarding your first assertion, I don't recall Carter, Mondale, Dukakis, Bush I, Gore, McCain, Romney nor HRC claiming they had in fact won the election - I recall all of them conceding rather promptly. Gore has long been faulted by his side (I didn't vote for him) for conceding too quickly even.

As for the word extreme, I didn't choose the title of the original post, but I was building on the idea of measuring deviation from some norm. My original post speculated about a metric of extremity based on methods, not positions, the idea behind the metric being that (as with the metrics Scott used in his original post) instead of having to debate who or what was more extreme we'd have some numbers. I would think that a good metric would be relative to the norms, not absolute - so what happened in Argentina in the 1970s would not be used to justify practices in the US in the 2020s. Rather, "extreme methods" would be those that violate current, local political norms.

Expand full comment
Carl Pham's avatar

Well, in the first place there have been a hell of a lot more elections than just the Presidential one since '76. You might consider tuning into more local stuff -- it does, after all, have greater impact on real life.

Secondly, regarding *any* deviation from a norm as "extreme" is silly. You need a scale. Some deviations are minor, some are major, some are "extreme." And yeah Argentina in the 70s totally gets a data point in that metric, because if you're so myopic that your scale gets set solely by what you have experienced in your narrow little experience, and the short time you have been alive and politically aware, then you end up sounding hysterical and clueless to those with broader experience and awareness, which tends to significantly decrease your influence, as well as enhance a sense of cynicism that opens the door to *actual* extremism.

Remember the boy who cried wolf? He opened the door to actual wolves, you know. It's an important cautionary tale.

Expand full comment
Jordan's avatar

I'm not sure if I'm interpreting your "Which party has become more extreme...." phrasing correctly. If you're asking which one is more extreme now, I circle back to my point about needing to discuss the utility of even asking that question. If you're asking which one has been more extremified....

Numerically, you would answer that question by measuring deviations from prior values over time for each party and calculating a measure of variance and change. This is similar to calculating heart rate variability (HRV) for measuring fitness, afaik. Comparing HRV is easy too: we know that higher HRVs correlate with higher levels of fitness and cardiac adaptation.

The big question in the political sphere again is, What does this comparison and this metric mean in the context of political parties though and what other factors can affect the data and allow for misinterpretation? If one party was not extreme and became a bit more extreme, but was still less extreme than the other party already was and that party hadn't changed, then the less extreme party at the moment would have extremified faster.

At the end of the day, it's a question of complex relativity, because each person is a data point and perspective to be factored in. I think we like to boil that down to parties because it's cognitively easier and allows us to make more assumptions, not to mention that the winner-take-all system in the US provides incentives to aggregate into fewer parties.
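The deviation-over-time idea could be sketched like this; the yearly position scores are made-up illustrative numbers, and `drift`/`variability` are just two of many possible summaries:

```python
# Sketch of "measuring deviations from prior values over time": given a
# yearly series of a party's position on some -1..1 scale (hypothetical
# numbers), summarize net drift and year-to-year variability. Loosely
# analogous to HRV, which summarizes beat-to-beat variation rather than
# the average heart rate itself.
from statistics import mean, stdev

positions = [0.30, 0.32, 0.35, 0.41, 0.48, 0.60]  # hypothetical scores by year

deltas = [b - a for a, b in zip(positions, positions[1:])]
drift = positions[-1] - positions[0]   # net movement over the whole period
variability = stdev(deltas)            # how jumpy the year-to-year changes are

print(f"drift={drift:.2f}, mean step={mean(deltas):.2f}, variability={variability:.3f}")
```

Comparing the same two numbers across parties would show which one has "extremified" faster, without settling the separate question of which one is more extreme in absolute terms.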

Expand full comment
Jordan's avatar

I think that post misses out on discussing the essential meta questions that measure the relevance of that post and its topics. i.e. What is the value in talking through the "left vs right" and "polarized vs bipartisan" lenses? How does it help by giving us cognitive traction to discuss differences in opinion and how does it worsen the situation by perpetuating a binary perspective on people who haven't formed their own opinions?

Edit: typos after posting

Expand full comment
Aurelien's avatar

I can't speak about the US, because I don't live there, but I think it's part of a wider tendency towards a disequilibrium between the supply of politics and the demand for it. After all, you can only vote for parties that actually exist. In a system of direct democracy you could theoretically vote for any proposition or set of propositions, but in a liberal democracy, you are limited to the policies that actually existing parties present to the voters, just as you are limited to supporting actually existing sports teams. I've said more about that here:

https://aurelien2022.substack.com/p/why-dont-the-people-vote-like-theyre?s=w

In France, we have recently had a Presidential election where the two candidates in the second round basically said "vote against the other one." This week's parliamentary elections are not really about policies at all, but whether Macron should have a majority in the National Assembly or not. You get this kind of behaviour when political parties no longer see a need to develop policies that actually appeal to prospective voters, but become boutique organisations dominated by generally well-educated professional-class militants who tend to be better off than average. In such a situation, advancement in the party depends on getting noticed and having a high public profile. In turn, that means publicly taking positions that are further and further from the mainstream, which incidentally allows you to denounce as traitors all of those who are less extreme than you. So any political movement whose leaders compete for power and status by appealing to party militants will inevitably tend to extreme positions: it's almost a scientific law. And it's very revealing (and traumatising for political scientists) to discover that most voters in most countries have a relatively poor idea of what is in the manifestos of different political parties, and often don't particularly care. Yesterday and next Sunday, polls suggest that the majority of voters in France (somewhere around 60-65%) are deciding how to vote primarily on the basis of which candidate is most likely to defeat the candidate of Macron's party. Policies, as such, are secondary if they feature at all.

Expand full comment
20WS's avatar

I agree - the Republicans and Democrats could both get arbitrarily extreme without changing their position relative to the median voter. I think a contributing factor is that most Americans don't see much institutional change happening anytime soon, meaning they don't see extreme politicians as risky.

Expand full comment
Scott Alexander's avatar

Don't actual votes show revealed preferences, which tend to be very close to the polls of stated preferences?

Expand full comment
Kenny Easwaran's avatar

To spell out the point that artifex is making (I've heard it made more explicitly by the political scientist Ilya Somin), even if the probability of your vote making a difference multiplied by the outcome of the election is big enough to rationalize voting, it's not big enough to rationalize extremely informed voting. Whereas when someone gets a choice about where to live (perhaps they've got the income to buy a house in a suburb and they can choose which suburb, or perhaps they're an immigrant and they managed to get a choice of which country to immigrate to, or whatever) their decision actually gets them one outcome rather than the other, so it's rational for them to put a lot more effort into figuring out what they really want. Somin argues that empirically, we do see people having much better information about unemployment rates and cost of living and such when they're deciding which suburb to live in than when they're deciding which party's candidate to vote for. He argues that this means we should make more social choices depend on foot voting than on ballot box voting.

Expand full comment
Katie M.'s avatar

Let's consider a very simple election-prediction heuristic, perhaps painted somewhere on a large rock by a political scientist:

1). In a Presidential election year, voters will always vote for the incumbent president (and their party for both houses of Congress) if they are up for re-election.

2). However, if the President has finished their second term, voters will always vote for the opposite party.

3). In midterm elections, voters will always vote for the party which does not currently hold the White House.

Let's evaluate this heuristic with respect to the popular vote since 2000:

2000: partial failure (Dems win the popular vote for POTUS and Senate, though Rs win for the House as expected)

2002: total failure (Reps win for both chambers of Congress)

2004: partial failure (Dems win for the Senate, though Rs win for the House and presidency as expected)

2006: total success - Ds win Congress x2

2008: total success - Ds win everything

2010: total success - Rs win Congress x2

2012: total success - Ds win everything

2014: total success - Rs win Congress x2

2016: total failure (Dems win the popular vote for both chambers of Congress and the Presidency)

2018: total success - Ds win Congress x2

2020: total failure (Dems win the popular vote for both chambers and the Presidency)

Clearly, this is not a perfect heuristic, and we shouldn't replace political scientists with painted rocks. But I think it can still be useful.
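The three rules on the rock can be sketched in a few lines of Python (the party labels and White-House inputs are my own framing, just to make the rules concrete):

```python
def rock_prediction(white_house_party, midterm=False, term_limited=False):
    """The painted-rock heuristic: predict which party wins."""
    other = {"D": "R", "R": "D"}
    # Rule 3: in midterms, voters back the party out of the White House.
    # Rule 2: after a two-term president, voters back the opposite party.
    if midterm or term_limited:
        return other[white_house_party]
    # Rule 1: otherwise, voters stick with the incumbent's party.
    return white_house_party

# A few elections from the list above (input is who held the White House):
print(rock_prediction("D", term_limited=True))  # 2000: predicts R
print(rock_prediction("R", midterm=True))       # 2006: predicts D
print(rock_prediction("R", term_limited=True))  # 2008: predicts D
print(rock_prediction("D", midterm=True))       # 2010: predicts R
```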

Now, the hypothesis in the article ("Democrats will lose in 2022; this shows they are more extreme") would suggest that when voters voted D in 2008, R in 2010, D in 2012, and then R in 2014, the swings were caused by a change in the relative beliefs between voters and the parties: either voters changed their beliefs/policy preferences/values, or one or both parties did.

Here's an alternate way of interpreting that data:

1) The political scientists are right that a large number of voters do generally vote in part based on things that aren't directly connected to policy preferences, like "thinking they shouldn't kick the current guy out unless there's a really strong compelling reason to", "not wanting the same party in power for TOO long because change is good and after eight years they're ready to blame everything wrong with the country on whoever's currently in charge", and "believing it's better to put a check on the party in power, whichever party that might be".

2) However, the heuristic on the rock fails when other factors become important. These factors can include, but are not limited to, the public strongly rejecting or embracing one of the parties.

Under this framework, an election where a party failed when the political-science-rock predicted they would win could be a serious indicator that that party has fallen out of step with the median voter (or become extreme in some other way), whereas losing an election that the rock predicted they'd lose doesn't actually say all that much.

Of course, that hypothesis and the idea that votes are showing revealed preferences aren't mutually exclusive. Heuristics 2 and 3 painted on the rock may well be capturing a dynamic in which voters' beliefs move away from the beliefs of the party in power, because they see the problems in those beliefs more when they're being put into practice, while the grass looks greener on the other side of the aisle.

So maybe the median voter really will feel that a different party more closely represents their values every 2-4 years. I wouldn't bet against that being true. But given the frequency of that flipping, I don't think using a single election as a data point is strong evidence that the party in question is getting more extreme.

Expand full comment
Charles “Jackson” Paul's avatar

No, I could vote Republican for any number of reasons, like that I like the guy’s hair, or elephants are cool, or that I disagree with them but disagree with the dems more, or am liberal on everything but abortion but am a single issue voter.

To put it another way, suppose Republicans started becoming really extreme while Dems stayed steady, but Republicans started nominating unusually charismatic candidates. This would, by your measure, look the same as neither party changing, or both parties getting extreme at the same rate.

Expand full comment
Scott Alexander's avatar

I think this is technically true, but over long periods and many votes the effect is too small to do the work you want it to do. You can also look at eg generic congressional ballots, which rule out caring about any individual candidate and predict real elections quite well.

Expand full comment
Guy Downs's avatar

Doesn't voting just show that you prefer party 'x' to party 'y', but not necessarily that you agree with all-- or even most-- of party 'x's positions?

Consider--In 1992 the Democrats nominated Clinton, and in 2020 they nominated Biden. Not that POTUS nominations are the end-all, but how much daylight is there really between those two guys? And if you say 'not much, really', what does that say about the argument that the Democratic party--- which, as Jackson points out, is distinct from the mean Democratic voter-- has aggressively moved to the left?

Expand full comment
Scott Alexander's avatar

Yes, I think of it as a combined distance measure from the parties. So you can certainly vote Democrat if you're pro-life, as long as you agree with the Democrats more on other things. Which I think is what we're talking about - on some combined distance metric, this person is closer to the overall beliefs of the Democratic party.

In 1992, Clinton was anti-gay-marriage, freaking out about criminal immigrants and how we needed more border controls, pro-mass-incarceration, pro-death penalty, and did a whole publicity stunt where he attacked a black rapper as racist for saying white people were bad. I don't think he's identical to Joe Biden 2020.

Expand full comment
The Ancient Geek's avatar

Clinton was to the right of his Democrat predecessors as well. His crime and welfare policies were partial reversals of previous policy, he put an Ayn Rand fan in charge of the economy, he really abandoned "Democrats don't do war", etc.

Expand full comment
magic9mushroom's avatar

Since when was there a "Democrats don't do war"?

WWI: Wilson (Democrat)

WWII: Roosevelt (Democrat)

Korea: Truman (Democrat)

Vietnam: part Eisenhower, but the full commitment was Johnson (Democrat)

Carter and Kennedy didn't, but the rest all did.

Expand full comment
JDK's avatar

A couple of points.

1. The leaked Alito opinion is essentially the same exact position that McGovern had in 1972!

2. The idea that Bill Clinton is considered by anyone at any time to be a "progressive liberal" is laughable. I worked for the DNC almost 40 years ago, when there was still a significant institutional memory of what it meant to be an Al Smith and New Deal FDR Democrat.

3. Most of what passes for "the left" today doesn't really know what 'left' means. The ostensibly 'left' positions on social issues are not "left" but libertarian.

4. Gorbachev described Pope John Paul II as one of the most significant figures "on the left"! Let that sink in!

Expand full comment
Guy Downs's avatar

Fair. We definitely aren't going to see a replay of the Ricky Ray Rector tragedy under Biden, which is something I hadn't considered when I posted.

It does seem like there has to be some way to weight these issues, or else we're just spitting in the wind. For example, if we suddenly had a surge of rich Democrats voting for a primary candidate who put a desire to return to the Eisenhower-era marginal tax rates in their platform, or a surge of voters pulling the lever for a candidate who wanted to socialize the auto industry, then I think that would be significantly more indicative of a larger shift than some head-bobbing over trans rights, or anti-racist reading lists for middle schoolers.

Expand full comment
Axioms's avatar

Votes show which party you picked and basically nothing else.

Expand full comment
Guy Downs's avatar

This seems to me like a great point, Jackson. Thank you for making it.

Expand full comment
Gary Marcus's avatar

It was genuinely fun while it lasted, and would be happy to do it again some time. (also, I appreciate the edit.)

Expand full comment
Kenny's avatar

This reads very weird. Why couldn't you just continue to (reasonably) engage with Scott, or anyone else?

I mean, yes, it takes time to, e.g. write blog posts, but if you're sincerely committed to reaching an agreement, e.g. continuing to try to share your understanding/intuitions, I don't know why you'd think this is something that "lasted" until now, instead of something that, by default, just continues indefinitely (ideally until everyone has reached agreement).

Expand full comment
Gary Marcus's avatar

You are in error. *I* am happy to continue to engage; he said above that he would not. I certainly still am willing to engage, but think you have your facts backwards.

Expand full comment
Kenny's avatar

I think we interpreted what Scott wrote very differently!

And I admit it's very possible I'm still in error! Communication is hard :)

I _think_ he thinks that he was 'warned off' from 'harassing' you. If he is not in fact doing that, I think he would appreciate learning that!

I think he also might just want to engage with _other_ people anyways. It doesn't seem that useful for the exact set of the two of you to try to reach agreement. (I would expect that to be very 'costly', if only in time.) But maybe I'm wrong about that and he is in fact open to something like an 'adversarial collaboration' on refining the sources of your disagreements.

Expand full comment
Gary Marcus's avatar

hey, scott, for the avoidance of doubt: i didn’t feel at all harassed, i thought it was all in good fun, and would gladly do it again, any time (be it now or years hence). and i think we moved a little closer to refining the important paradigm shift question.

Expand full comment
Kenny's avatar

This is great!

I hope you stick around (e.g. in the comments on the blog generally)!

Expand full comment
Gary Marcus's avatar

thanks. i just sent Scott a DM to clarify.

Expand full comment
2/3 gates's avatar

This is probably a very rude comment. But in case people have not pointed it out to you, your constant @ing of high-status ML people, who I suspect have muted you long ago, is not a good look. Whether or not this is a fair interpretation, it looks to me like you are desperately trying to be seen to be engaging in a dialogue with more-famous people. This is fine - and true of many people - but usually they are not so obvious about it. I would recommend you revise your strategy.

Expand full comment
Gary Marcus's avatar

wow. i have had debates with LeCun and Bengio (both organized by others, Chalmers and Boucher), Scott posted without asking me (to which I responded), etc… Musk, yes, but that’s a different story.

Expand full comment
2/3 gates's avatar

A common pattern I see on your twitter is you @ LeCun or Bengio or Musk or someone and challenge them to a debate and they don't reply. And then you keep doing it over and over again. This is behavior I find unbecoming. I've enjoyed your books in the past, but I can't say I enjoy your Twitter behavior. Anyway, you strike me as being caught in an unfortunate social media loop - so my truly incredible rudeness is just me trying to relay this to you. Possibly, I am interpreting things incorrectly.

Expand full comment
Crazy Jalfrezi's avatar

Hey Gary, do you have a link to an approachable article about how Neurosymbolic machine learning works and can be implemented?

Expand full comment
Relenzo's avatar

From my (limited) reading on the subject, there are multiple approaches to neurosymbolic learning practice. I am most impressed with Logic Tensor Networks (https://arxiv.org/pdf/2012.13635.pdf), though I won't speculate on whether Gary Marcus approves of them.

Expand full comment
Crazy Jalfrezi's avatar

Thanks! That is very kind.

Expand full comment
Market Manipulator's avatar
User was temporarily suspended for this comment.
Expand full comment
Gunflint's avatar

This thread put a big smile on my face this morning.

Expand full comment
Pycea's avatar

The market just jumped to 96% yes. Does this mean you're going to short it and not appeal? Though at that point I guess someone else can try. I wonder if the comments section is going to become a cesspool in the next few weeks...

Expand full comment
magic9mushroom's avatar

Maybe he was the one who bought it to 96%?

Expand full comment
Linch's avatar

lmfao

Expand full comment
Scott Alexander's avatar

Banned for a month for market manipulation, but I am warning market participants in advance that I will not agree that this appeal was worth my time to read.

Expand full comment
warty dog's avatar

is "market manipulation" a real thing for prediction markets? I thought "every prediction market doubles as an action market" and doing the action is bad only if it's bad in itself, like murdering someone. and this comment didn't seem too bad to me

Expand full comment
JDK's avatar

A good point.

For example, Nate Silver is a known gambler.

But he refuses to say whether he bets on political races (outside the US where it is legal), when his 538 posts and tweets clearly will influence markets on which he could be betting.

It is worse because he claims to be a "data journalist" but won't allow an ombudsman to review this problem of lack of transparency.

Expand full comment
Charles “Jackson” Paul's avatar

Wait, but the market says “Market resolves to yes if a single banned poster creates such a market, to no if nobody does.” so he’s going to make his money anyway. If I were you I would unban him so that this doesn’t work, if you want to be boring, or do my above idea, if you want to take his money.

Expand full comment
Pycea's avatar

I feel like unbanning them would just incentivize people to make actual ban worthy posts, now that it's been brought to attention.

Expand full comment
Charles “Jackson” Paul's avatar

No, by that point the prediction market will have stabilized, as everyone now knows that this is a trick that will work. We just want to make sure the insider trader doesn’t make money off of this, which Scott can do by refusing to go along with his plan.

Anyway, does manifold have the ability to suspend a market if someone can prove market manipulation? If so, they can just do that now that he’s revealed his plan, if not, then they should probably do that.

Expand full comment
magic9mushroom's avatar

Isn't Manifold the Chaotic Evil prediction market where it's just "caveat emptor"?

Expand full comment
Pycea's avatar

Assuming they already bought all their shares, they just need to make alts until one is banned. And whoever raised the price from 79 to 96 now also has an incentive to do so if the original poster doesn't follow through.

Expand full comment
Charles “Jackson” Paul's avatar

If I were Scott I would short the market, taking the money it looks like you pumped into it, and then refuse to ban you.

Expand full comment
B Civil's avatar

That is the very pith and compass of market manipulation. Isn’t it?

Expand full comment
Charles “Jackson” Paul's avatar

It would teach you a lesson about trying to con markets, and also be really funny.

Expand full comment
User's avatar
Comment deleted
Jun 15, 2022
Comment deleted
Expand full comment
Erusian's avatar

European cities aren't demographically self-sustaining today. Though for different reasons than in the 15th century.

Expand full comment
Bullseye's avatar

I've read that the usual preindustrial practice was for the countryside to produce excess population, who would then move to the cities. The cities could absorb that population because of low urban birth rates, plus high urban death rates from disease; cities used to be extremely unsanitary.

Expand full comment
User's avatar
Comment deleted
Jun 14, 2022
Comment deleted
Expand full comment
Michael Sullivan's avatar

I think Scott has already reviewed Madness and Civilization?

Yeah, it's here: https://slatestarcodex.com/2018/01/04/book-review-madness-and-civilization/

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Comment deleted
Expand full comment
alfanerd's avatar

Interesting indeed. Telling that the people who are the focus of this article are gay, and they live in fear that some mistake they made in the past will get them canceled at some point. These "new rules" of the woke are, predictably, making things worse for those they were intended to help. Who knew that outsourcing morality to a bunch of overzealous always-online 20-somethings could backfire??

It reminds me somewhat of this article, which I read last night:

https://theintercept.com/2022/06/13/progressive-organizing-infighting-callout-culture/

TL;DR: Organizations infected by wokism spend 90-95 percent of their time dealing with "internal strife", ie, employees outraged over anything and everything.

It's not surprising. When an ideology promotes the idea that everything should be comfortable and you shouldn't ever have to feel discomfort, doubt, stress, or anything negative, the most sensitive and fragile people will be most attracted to it. When you as an organization swear fealty to that ideology, these sensitive and fragile people will hold you up to it, and your organization will stop working towards its mission and become a daycare for adults.

Expand full comment
a real dog's avatar

I like how the article just casually skips over

> The psychiatric unit, it should be noted, hadn’t actually offered him the safest environment; one night he’d woken up with his roommate’s penis in his mouth.

apparently that's an expected outcome of hospitalization now.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Comment deleted
Expand full comment
ana's avatar

I use https://rssby.email/. It's quite simple and no-nonsense.

Expand full comment
O.H. Murphy's avatar

I’ve been using https://blogtrottr.com/ to get RSS feeds sent directly to my email. Unfortunately (though free) it does include ads in the email. Lmk if you find a better free service

Expand full comment
aoeuhtns's avatar

Is there an amount of money you'd be willing to pay for this service with no ads?

Not just you, I'd also like to hear from anyone else who happens to read this. This sounds like something that I might be able to hack together in a weekend. I had not previously thought of this as a thing people might want (and that didn't already exist).

Expand full comment
aoeuhtns's avatar

Update:

Googling has uncovered: https://feedrabbit.com/pricing

Which I haven't personally tested or anything but if it does what it says then it looks like it does what you want with no ads and has a decent sized free-tier.

Expand full comment
O.H. Murphy's avatar

I’ve used this for OvercomingBias

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Comment deleted
Expand full comment
remoteObserver's avatar

My issue with it is I think the idea of random initialization of the fundamental constants is bizarre. We have not a shred of evidence that those constants can be anything other than what they are.

It also makes more intuitive sense to me that the constants define each other, rather than being independent of one another.

Expand full comment
Eremolalos's avatar

It doesn't sound like a half-assed, bare minimum to me. It sounds like saying, "how fortunate that beaches are always located near the ocean -- such a coincidence ! -- gotta mean something!"

Expand full comment
remoteObserver's avatar

This is the best analogy I've heard for hand-wringing about fine tuning.

Expand full comment
Kenny Easwaran's avatar

The standard fine-tuning argument considers the six (or however many) fundamental independent constants of physics (gravitational constant, fine-structure constant, whatever they are) and shows for each one, that if it had been even 1% smaller then (say) everything would have collapsed into a single black hole a nanosecond after the Big Bang, while if it had been even 1% bigger then (say) the universe would have just remained a quark-gluon plasma without even forming protons and neutrons, let alone atoms or chemistry, and so on for each of the other constants. That's what they mean by "fine-tuned" - that even a tiny change in any of the constants would have made things utterly unsuitable for anything that seems like life.

I'm mildly skeptical about the concept of "tiny change" that is at work in this argument (how do we know we're measuring the constant in the right units, so that 1% change or whatever is small?) and a bit more skeptical about the concept of "utterly unsuitable for anything that seems like life" (has anyone worked out whether interesting structures can appear in the holographic surface of a black hole, or in a quark-gluon plasma?) But if we grant those things, then "fine-tuned" seems apt - it's like a telescope that needs to be very finely tuned in order to be focused, but even at the best focus, is only showing a few dots in a sea of blackness.

Expand full comment
Jack's avatar

A potential counterargument is that the Universe is so vast, that even "only less than 1% of it is habitable" is still plenty of habitable space. Like if there was another Universe that was just the habitable Earth and some nearby stuff, you would say that was designed for people, but if you expanded it to our Universe with vast expanses of empty space then it's not, even though it makes no difference to humans either way. Would it actually be harder, or more inconvenient, for God to make a massive Universe so humans can inhabit a tiny corner?

Not sure if I believe all that but throwing it out there.

Another thing I wonder about, is the confident statements that if you changed some of the fundamental constants slightly then there would be no life (nor by extension anything else interesting that you might think was by design).

I was reading about the "triple alpha process", which is used as an example of fine-tuning. It's the way Carbon-12 (and I think eventually heavier elements) is formed from smaller elements, and depends on something called the "Hoyle State" which is a particular excited state of Carbon-12 with just the right amount of energy to allow this process to work; if it was slightly different, then this process wouldn't happen and Carbon-12 wouldn't form in the Universe.

My issue with this is that the Hoyle State wasn't posited as a consequence of the basic laws of physics as then understood. It was posited on the basis of "well we have Carbon-12, it has to come from somewhere, this state must exist", and then discovered. Before that it was thought there was no such state of Carbon-12.

In other words, with the laws of physics as we then understood them, the predictions of scientists were that life couldn't form *in our own Universe*. We only demurred from this conclusion because, well, life does exist in our Universe, kind of a proof by contradiction. Given that, how confident should I be in scientists' predictions that life can't exist in some *other* hypothetical Universe, with laws or constants wildly different from anything we know or have any experience with or intuition about, and for which we can't sanity-check our predictions against manifestly obvious evidence like "of course Carbon-12 can form, it's everywhere"?

Expand full comment
Jeffrey Soreff's avatar

The other possibility is that something did tune the fundamental constants - but it is interested in something adjacent to life, but not life itself. Perhaps a benzene ring or graphite fetishist? Maybe fundamental constants are tuned for maximum graphite in cosmic dust?

Expand full comment
Jack's avatar

Yeah and this counterargument fits well with the fact that life seems to only exist in a tiny corner of the Universe (or maybe multiple tiny corners). If you were judging what the creator of the Universe was interested in based on what is in it, without coming into it thinking Life was special you might say "big galaxies forming and collapsing and shit", there's after all way more of that out there than there is life.

Expand full comment
Deiseach's avatar

"The miracle is that life appeared in a universe that seems almost designed *against* it. We aren't mold showing up on a petri dish full of delicious mold-food. We are mold showing up inside an airtight, UV-sterilized container with an anti-fungal paint coat."

That was more or less my argument with the Great Filter discussion; the assumption behind the Great Filter is that life is plentiful, so "where is everybody?", and the fact that we don't see aliens landing every Tuesday must be down to some specific filter.

My counter to that was "suppose life is *not* plentiful; in that case there need be no other Great Filter than 'it's damn hard to arise and survive at all, much less get to a state where you can zip around the galaxy'."

But the reason for assuming "life is plentiful" was to counter the *religious* implications of your argument; after all, the whole message of scientific materialism had been "Copernicus proved we are not the centre of the universe, humans are nothing special, we're a dustspeck in a cosmic backwater, no Creator here!" As part of that, humans could not be a special creation, life was plentiful and we were just one of a ton of planets where abiogenesis happened (hence the Drake Equation which is no more an equation than I am a prima ballerina, but which was used to kickstart speculation about 'how can we contact the other civilisations which must be out there?' https://en.wikipedia.org/wiki/Drake_equation)

But if we take what you say, then our planet does begin to look pretty damn special. If we're the only form of life in our backwater, then we in a sense *are* the centre of the universe. And hence special creation starts to creep back in 😁

(Disclaimer/Obligatory Declaration Of Bias: I can say the following without crossing my fingers or going 'well it's all a metaphor isn't it?' https://www3.nd.edu/~afreddos/courses/43801/creeds.pdf

http://faculty.washington.edu/ewebb/Creeds.pdf)

Expand full comment
a real dog's avatar

The main problem with the Great Filter is that it takes just one civilization to develop spaceflight for it to fill the universe. IIRC a million years is enough to colonize the galaxy with speeds of interstellar flight that are already achievable? And a million years is a blip compared to the age of the universe.

So either we are really, really, really lucky, or something is stopping a civilization between where we are now and where the entire galaxy is under our thumb.

Expand full comment
remoteObserver's avatar

Or life is rare enough that it only appears in one out of every few galaxies, or we're just getting to the point in the Universe's lifetime where life can appear.

Expand full comment
Ian's avatar

Yeah, "Early riser" or "One galaxy in a hundred gets a microbe" both seem more likely to me than a "great filter" though... watching our society devolve into navel gazing arguments about gender does lend credence to the idea that we might just... never expand off planet.

Expand full comment
remoteObserver's avatar

There has never been a place or era without frivolous, irritating people. The fact that nonsense gender discourse is happening doesn't mean that Space-X isn't also happening. If you let their sheer irritating-ness and wrong-ness blind you to this, they win.

Expand full comment
Ian's avatar

Hey, I have a painting of the Falcon Heavy on my wall, I'm trying to stay optimistic about the long term; but the number of people I meet who've been convinced by some variant of "We shouldn't even try to leave this planet." is... depressingly high.

Expand full comment
Bugmaster's avatar

I think our planet is very special, because it's got us on it. It is also very unusual. However, given the size of the Universe, I suspect that there are many similarly unusual planets out there. But the problem is that the speed of light is very slow, and thus we can only observe a small portion of the Universe. Even though there might be millions of Earths out there, we will never observe one, because they are just too far away for their light to reach us (and obviously too far away for their aliens to reach us). We are effectively alone in a Universe that, paradoxically, is very likely to contain other life.

Expand full comment
David Gretzschel's avatar

"I think our planet is very special, because it's got us on it. It is also very unusual."

Speculative. We could be the product of extraterrestrial Homo Sapiens sending out life-seeding von Neumann probes with hidden nano-machines bringing about the evolutionary outcome of Homo Sapiens again without leaving a trace of themselves.

At my last meetup, it was emphasized to me what absurd things an AI surely could do to us via nanotechnology without us being able to detect it. Naturally, I thought of what things I'd do with that kind of power for shits and giggles.

This idea is really stupid, but if our origin is designed to be hidden, then shaving the explanation away with Occam's razor would be wrong.

It strongly relies on the original grabby Homo Sapiens being stealthy enough to not be visible to us, like no obvious Dyson spheres or whatever; currently I'm not sure I understand grabby aliens and the necessary tech stack enough to figure out whether stealth-grabbyness is possible, likely and/or incentivized if we make Dark Forest assumptions.

Expand full comment
Jeffrey Soreff's avatar

Good comment!

I find it frustrating that we will probably be able to _test_ whether life is plentiful or not (probably over the next century or so), but I'll be long gone and won't see the answer. The oceans that appear to be under the ice on Europa and some of the other moons of the gas giants look like very plausible abodes for life. Now if we can only land some probes with microscopes and a hot wire to melt its way down to the ocean...

Expand full comment
Deiseach's avatar

I wouldn't be at all surprised if microscopic life forms were found in such locations, but as you say we'll be long gone by then.

Of course, what we as a species *really* want is to find "life like us", other intelligent, dominant species, and that may be a horse of a different colour. There was some SF story from the 60s? 70s? I read years back (don't ask me name of author or title) where humans had explored a chunk of the galaxy and found life fairly abundant, but the most intelligent species ever encountered were very primitive and much less intelligent than humans, and then there was the rumour of an intelligent alien species due to two humans being found, who had been feral children and raised by an alien species. They claimed their foster species were even more intelligent than humans, and the search was on to locate them.

Naturally, being a New Wave story, it had a downer ending (the aliens weren't super-human or even human-level, the accounts the rescued people told were due to distorted memories of their childhood, where everything looks bigger and grander and they didn't know any better since they were babies when found and raised by the aliens, and since they were also badly out of place in human society, there was a lot of wishful thinking about wanting to go back to what they felt was really 'home').

Expand full comment
Jeffrey Soreff's avatar

Yes, I suspect that most people would be interested just in intelligent life.

I suspect that the microbiologists and biochemists would be _very_ interested in microscopic extraterrestrial life - particularly if it e.g. used different base pairs and/or different amino acids and/or different sugars than we do. A niche interest, but a whole new world of natural product chemistry...

( All assuming that Earth isn't in the process of switching from biology to electronics on about the same time scale...)

Expand full comment
a real dog's avatar

If it used the same base pairs and amino acids that would be even more interesting, because that implies either very unlikely levels of perfect convergent evolution, or common origin.

Expand full comment
Jeffrey Soreff's avatar

True!

Expand full comment
ultimaniacy's avatar

This is more or less my opinion as well. I don't see any particularly good reason to believe creator gods exist at all, but if one did exist, I think it's extraordinarily unlikely that it would hold any special love for humans.

Expand full comment
Carl Pham's avatar

You're looking at it through the wrong end of the telescope. If God the Creator built the Universe for us, then all the vast enormous stretches of it that lie a billion light years away, or a billion years in the future or past, are just there to give a backdrop to the really important stuff -- which is what happens to us. The fact that humans will go extinct on Earth in at most ~1 billion years, when the Sun becomes too hot for the Earth to sustain life like ours, isn't a bleak tragedy, it just means whatever role our physical existence plays in the eternal existence of our souls comes to a close, as it does for each of us individually after a far shorter interval. So the universe need not be generous to the existence of life per se, because we're in it, and that's all that's necessary. The stars and galaxies could be 10^25 other giant fusing balls of hydrogen, or they could just be dots painted on a giant black bedspread, it doesn't matter, because all that's important happens right here.

Anyway, leaving aside the theology, the major issue with both the "fine tuning" argument and your objection to it is simply that there is no evidence that the universe *could* be other than it is. Nobody has yet come up with a theory of everything that predicts the universe as we know it, so we are unable to say oh the ToE *also* allows the existence of 2, 8, or a googol other universes with different properties... For all we know, the universe in which we live is the *only* one that is logically self-consistent, meaning it is not possible to dream up a universe that operates by different physical law and which is completely logically self-consistent. If that were the case, then there is no choice available, either for cosmological evolution or for God the Creator, and the existence of the universe provides exactly zero evidence for or against His existence (a philosophical result the Hevians of Blish's "Triumph of Time" would relish).

Expand full comment
efilnikufesin's avatar

It’s tautological that life cannot arise in a universe that’s incapable of supporting it. Laplace expressed it well when he said he had no need for the god hypothesis (my paraphrase).

Expand full comment
Aurelien's avatar

I think most people would agree, and have throughout history, that if there is a God the Creator, then (a) we have no possibility of proving or disproving His existence, or really making any useful comment about it at all and (b) we have no chance of understanding the divine plan, or even knowing if there is one. The technical term for such arguments is apophasis, but they are found in the Islamic and Buddhist traditions under other names, and of course in Wittgenstein ....

Expand full comment
Bugmaster's avatar

What is the difference between a God whose existence we could never detect in any way; and a God who does not exist ?

Expand full comment
Carl Pham's avatar

Well, the difference between my believing qualia like mine exist in others, despite the definitional impossibility of detecting them in any way, and my believing they do not exist is the difference between normal human and solipsist (at best) or psychopath (at worst).

Expand full comment
Aurelien's avatar

You'll notice I didn't say "detect." That is to say you can't "prove" or "disprove" the existence of any god according to human methodologies, because they are not relevant. It would be like trying to weigh a colour or knit a piece of satire. Mystics throughout history have insisted that the only possibility is direct knowledge - gnosis.

Expand full comment
Bugmaster's avatar

Those "human methodologies" can detect the existence of a thing by observing the effects it has on the physical world. For example, one famous experiment can detect the charge of an electron -- too small to ever observe directly -- by its effect on oil droplets. We don't have direct gnosis of electrons, but we are pretty sure they're real (or you wouldn't be reading these words right now). Does God have any effect on the physical world ?

Expand full comment
Beata Beatrix's avatar

“The miracle is that life appeared in a universe that seems almost designed *against* it.”

Isn’t this, in essence, the fine-tuning argument? That there is such a miracle? Taking it as evidence that God really didn’t want life, but His infinite improbability drive was on that day, so, oops, it happened doesn’t really make sense; especially given that most theistic conceptions of God take Him to be the ground of being, omniscient, omnipotent, etc. Psychology doesn’t seem to be an adequate approach here.

Rather, the argument is, given this miracle of life on the knife edge, it seems MORE LIKELY that a theistic world has an adequate explanation than a totally random and uncreated world.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Expand full comment
Thegnskald's avatar

As I understand it, the fine-tuning concept was created as a criticism of physics models which require it; if the model of the universe you have is fine-tuned, then your model is missing a key part of the story.

"Fine-tuned" doesn't just apply to the existence of life, although it has been extended in that direction; it's a descriptor of any model or aspect of physics which seems excessively empirical, without much in the way of theoretical justification. This is fine-tuning: "Why is this value .000172711?" "Because that's what we measured it to be."

Expand full comment
Carl Pham's avatar

Not quite. You have to argue it *could* be something else. No one argues the value of pi is "fine-tuned" because nobody thinks it *could* be some other number, if only the universe had been constructed slightly differently.

Expand full comment
Thegnskald's avatar

I believe this argument is contained in the distinction between theoretical and empirical justifications.

Expand full comment
Jeffrey Soreff's avatar

Agreed. A theoretical justification would be something like: The .000172711 comes out of the following integral...

There is also the possibility that the anthropic principle is right, that the "really" fundamental physical rules allow many possible values for the (19 ignoring massive neutrinos, 26 including them) dimensionless constants of the standard model, that they all get "sampled" in some way (many worlds, or multiple inflationary regions, or whatever...), and the only universes with observers are those with values favorable to life.

This doesn't _wholly_ dodge the question. If the only constraint on the 19 dimensionless constants _is_ from the anthropic principle (at least in the neighborhood of the observed values - if the _fundamental_ physics gave a close-to-uniform distribution in our neighborhood) then we should be at a "typical" point in that parameter space. And, since it is a 19 (or 26) dimensional space, almost all the volume is near the surface, and we should expect our values to _barely_ permit life.
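That "almost all the volume is near the surface" point is easy to check numerically. A minimal illustrative sketch (the dimension counts 19 and 26 come from the comment above; the 10% shell thickness is an arbitrary choice for illustration):

```python
# Fraction of a d-dimensional ball's volume lying in the outer shell of
# relative thickness eps. Since volume scales as r**d, the inner ball of
# radius (1 - eps) holds a (1 - eps)**d share of the total.
def outer_shell_fraction(d: int, eps: float) -> float:
    return 1 - (1 - eps) ** d

# Even a thin 10% outer shell dominates once the dimension is high:
for d in (3, 19, 26):
    print(d, round(outer_shell_fraction(d, 0.1), 3))
# 3  -> 0.271
# 19 -> 0.865
# 26 -> 0.935
```

This only illustrates the geometric point, of course; whether the anthropically allowed region of parameter space is shaped anything like a ball, and whether the fundamental prior over the constants is anywhere near uniform, is exactly what's in question.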

Expand full comment
Carl Pham's avatar

What is the theoretical argument from which one can derive the numerical value of pi? More precisely, when we speak of "theory" in an empirical science like physics, what we mean is that certain measured numbers can be derived by math from some smaller set of measured numbers. What is the measured number from which the value of pi is derived, and by what mathematical relationships?

Expand full comment
Emma_B's avatar

Or 6) Making conditions suitable for life is so hard that the best effort barely worked at all?

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Expand full comment
David Gretzschel's avatar

My father wakes up in the middle of the night. He then goes outside and runs for about an hour. He says it clears his head. Then he takes a shower (cold and hot) and falls back asleep. You probably cannot run for an hour if you're untrained, but you can run for increasingly longer periods and walk the rest of the way.

If this feels uncomfortable/physically stressful and cold, well... I'm pretty sure that kind of stress is the active ingredient here. In general, the human body/brain does not work optimally in permanent comfort. Having physical stress as a component of your daily life is probably important. There are apparently cleverer things you can do with scheduling it, like using a stress phase to end your morning creativity slot for better learning. However, so far I only understand this intuitively from Piotr Wozniak's general learning research/thoughts/observations/speculation on the matter and from his suggestions working out for me, but this is likely not convincing evidence for anyone unfamiliar with his idiosyncratic brand of crazy genius.

Expand full comment
Acymetric's avatar

It is hard for me to imagine having time to sleep for a while, wake up, do all that, and then still have time to go back to sleep before I had to get up in the morning (I also have never liked working out in the evening because I end up wired for 2-3 hours after and can't fall asleep). This is an interesting suggestion though, I'm sure it works well for some people.

Expand full comment
Eremolalos's avatar

Yes! It's irritating but it works. Set an alarm for about 2-3 hours after you fall asleep. It will probably wake you out of a deep sleep that will be easy to return to. Sit up and take your sedative then, then flop back down and go back to sleep. Now your sedative should be in effect long enough to give you 7-8 hrs. If it isn't, move the wake-up for pills time to later. I'm using the technique currently, because I'm going through a patch of insomnia.

Expand full comment
User's avatar
Comment deleted
Jun 14, 2022
Expand full comment
Eremolalos's avatar

Did it last night and slept 8 and a half hours!

Expand full comment
Jenga Jambeaux's avatar

I came across this recently, but haven't actually tried it myself: https://acesounderglass.com/2018/11/13/cortisol-manager-for-sleep/

Expand full comment
hi's avatar

>Most sedatives either don't last long enough to help me stay asleep for more than 4-5 hours, or last so long that I'm groggy the next day

There are medications that are intended to treat sleep maintenance insomnia. Have you tried them? If they leave you feeling groggy, have you tried figuring out the lowest dose you can possibly take that is still effective?

Have you considered that due to your condition, you might be walking around with a huge sleep debt at all times, and that grogginess could be something that goes away once your sleep debt is paid.

Have you tried supplements, such as magnesium and iron?

Good luck. Keep trying different things until you find something that works for you.

Expand full comment
Viktor Hatch's avatar

Absolute darkness and cold temperatures. Really absolute, not a hint of light getting through the window shade in the morning, or else I wake up way too early. If you open your eyes you should see nothing at all. The one exception is an extremely dim projector clock showing the time on my ceiling, so dim it is only visible in pure darkness.

A sleep mask is better than nothing, but there's still enough light leakage from it getting knocked out of position that I had to completely black out the windows with tinfoil.

If you awaken in the middle of the night and have to turn the light on, use only a dim red light. Even a single minute of a dim bathroom lightbulb will destroy a solid night of sleep for me.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022 (edited)
Expand full comment
Kenny Easwaran's avatar

Interesting! I have a milder issue, which is that I normally sleep from about 12:30 to 7:30 (not a problem) but if I stay up late playing a video game one night, I still wake up at 7:30 (my circadian rhythm is ready to go) but if I go to bed early the next night to try to catch up, I wake up 7 hours after going to bed. So it's very difficult for me to catch up after a single disrupted night. (I actually did catch up when I got covid a month ago, because I was able to spend half of several days lying in bed and napping while sick and feverish. But I had a video game night again about a week after.)

Expand full comment
Stygian Nutclap's avatar

Often referred to as maintenance insomnia. Be wary that in the summer months, early morning light can rouse you awake. If your schedule is somewhat later than the sun cycle this will have an impact, so use black-out blinds (or shift your schedule). When you do get up, though, ensure you get exposed to outdoor light. And by evening, limit exposure to blue-light emitting electronics, as this can suppress melatonin production.

Limit your total time in bed to 8h, and get up at a consistent time. There are a few other things you can try, like addressing the mental angle and your bedding, but that's the meat of it. Use of alcohol and marijuana can lead to issues with sleep quality. If you're using sedatives on and off, withdrawal can lead to insomnia symptoms in the short run.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Expand full comment
Stygian Nutclap's avatar

Huberman has a series of videos you might find helpful https://www.youtube.com/watch?v=nm1TxQj9IsQ

Expand full comment
a real dog's avatar

My partner does that, and just does stuff in the middle of the night then falls asleep after a few hours. Needless to say, acoustic isolation within inner walls is not optional when someone's cooking at 3 AM.

Expand full comment
Erica Rall's avatar

I had a problem with something like that for a while several years ago. Trazodone (an older antidepressant now mostly prescribed in low doses as a sleep aid) helped quite a bit in the short term. The longer term solution was to cut back on caffeine in the afternoon and switch my ADHD meds from adderall to modafinil.

Another thing you could try is varying forms of polyphasic sleep, where you accept a maximum core sleep of 4-5 hours and make up the balance of your sleep needs through napping. The best fits would probably be either segmented biphasic (sleep two 3-5 hour chunks separated by three-ish hours of activity in the middle of the night) or the "Everyman 2" polyphasic schedule (sleep 4-5 hours at night, get up well before dawn, and take catnaps midmorning and mid-afternoon).

https://www.polyphasic.net/biphasic-2/

https://www.polyphasic.net/everyman-2/

Expand full comment
ruth hook's avatar

How has the adderall vs modafinil experience been for you?

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Expand full comment
av's avatar

Seconded on meditation. One of the problems of meditating lying down (which is one of the traditional ways of doing it) is that it's too easy to fall asleep. If you have problems falling asleep, that becomes a feature rather than a bug. Worst case scenario, you spend an hour or so meditating, and that's much better than trying and failing to sleep, if you ask me.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Expand full comment
Gunflint's avatar

I’m probably experiencing something different but around this time of year in the Northern hemisphere the birds that start chirping around 4:00 AM usually wake me up and it feels like I’m done sleeping. For a while I would go downstairs and eat my Cheerios early and after a half hour I would start to feel sleepy enough to go back to bed. I’ve given up on that and have had luck just doing a breathing meditation with pretty good success.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022 (edited)
Expand full comment
Gunflint's avatar

I recall ephedra being taken off the market. A week before the ban was to go into effect my Walgreens had 6 foot pyramid of the stuff on a pallet marked down to sale price.

Expand full comment
None of the Above's avatar

I'll admit, I don't see why anyone would look at the last couple years as a commercial for how we should give the FDA more power....

Expand full comment
Jeffrey Soreff's avatar

Agreed. One fact I discovered during the early days of the pandemic, is that the FDA made _oxygen_ a prescription drug. You can't just order an oxygen concentrator directly.

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Expand full comment
Ian's avatar

Lots of interesting advice here, I'll just offer my one tip that helps me build up a "stable" of friends when I move to a new city.

Host parties.

Doesn't really matter what sort of party: campfire in the backyard, smoking session in the living room, Christmas dinner for folks without family in town; whatever, just invite a bunch of people over. You don't need to know them well; even casual acquaintances can easily be invited.

There are numerous advantages: you have a set of tasks you need to accomplish (food/drink arrangements, greeting guests, cleaning up messes, etc.), which will help cover social awkwardness; conversation circles are much, much, much less prone to awkward silences, since someone usually has some point they're waiting for a chance to jump in with; being the host makes you the center of attention in a good way; and people will generally feel gratitude for being invited to an event.

Best of all, after the event is over, you don't need to go home, you're already home. (So you can ingest as much social lubrication as required without DUI risks.)

Expand full comment
Tossrock's avatar

This might sound ridiculously trivial / patronizing, but something that can really help a lot is eye contact and smiling. I'm saying this as a person who suffers from similar social anxiety, and only recently started making an active effort to smile more, and has been honestly astonished by the results. Making eye contact and smiling tells your interlocutor that you are not threatening, you recognize and know them, you are pleased to interact with them, that you are in a good mood, etc. It really is a very important social technique (at least in the American social milieu), that uh, certain populations, perhaps over-represented on this substack, could benefit from.

Expand full comment
alfanerd's avatar

I think that's very good advice. Also just feigning interest in other people's lives when you meet them. Everyone's favorite topic of conversation is themselves.

Being able to have interesting conversations is less important than getting those basic cues right. Same with trying to attract the opposite sex. If the other person's reptilian brain is not into it, it's not gonna happen, even if their neocortex is intrigued by your ability to talk about whatever topic.

Expand full comment
MI's avatar

Caveat: I'm a woman who is not great at female socialization; male and female social styles are a bit different.

I liked "The Four Loves" by CS Lewis, as a good frame of reference.

"I feel like between being a larger 6’3” that lifts weights and looks like it and an anxious demeanor that people feel uncomfortable."

If you're interested and willing to be assertive, this is a good age for finding a wife and starting a family, and height and strength are great if you can downplay the anxiety part. Is there any way to invite others into your sphere of competence? My husband is into maps, human geography, and history, so we were volunteering in Europe together and he kept inviting me on dates in walled cities and ruins. It was not expensive -- we took the bus and got inexpensive meals -- but the walled cities were something he liked, knew something about, and found romantic.

As others have said, I'm not sure if the problem is people being actually repulsed/upset, or more anxiety based.

With the people I've interacted with who I found repulsive despite their being basically normal, with no vices that stood out, it was mostly an issue of feeling judged/controlled. I can think of two crazy-makers I've encountered, but this might just be my particular sensitivities, and other people are sensitive to different things.

I had this housemate once who was kind of a Nice Church Lady, but who was also constantly disappointed, and used the word disappointed a lot. She visited some monasteries, and the nuns were selfish and disappointing. She moved into our church house, and we weren't what she had hoped. She visited my friend's class/Euclid book club, and was disappointed that she had already done Euclidian geometry in high school, and didn't see why people would want to revisit it later in life. Her ex-husband was autistic and couldn't connect at all emotionally. She was upset with the way we sometimes put books on the couch, that I sometimes left the cabinet door open, that I didn't wash pots as well as she wanted. She didn't want to share food in case she didn't come out ahead. All of this was supposed to be the problem of the people around her — not simply preferences, but absolutes. We were *bad* and she was just asking for simple courtesy. It was extremely hard to give any feedback on this, because when I and other housemates tried, she would pick at our weaknesses, and we would say something like "well, yes, I could be neater," and then she would control the whole conversation to be about that.

You're ahead, in that you seem more self reflective and are willing to look into what you are doing that might contribute to unsatisfying interactions, which is great.

"On the rare occasion that I do meet someone, I have no idea how to manage that relationship or give it direction. Usually the person ends up having a quite negative perception of me and it shows in every word spoken"

I'm a little bit concerned about this. Do you mean you have trouble coming up with activities to pursue together? Or that you want something specific (like a romantic relationship, or to be game buddies or something), but don't know how to communicate this? Or that it's frustrating because you have to individually arrange each interaction, vs having a standing social time of some sort. Standing book club/movie night/coffee date things are great, but I can see how someone might be put off by approaching this in a way that feels controlling or like a lot of work. I'm not great at organizing, and have had the best luck from within existing structures -- volunteer organizations, church groups, that sort of thing. This can be more difficult if you're established in a career and work a lot, though, since all relationships take a pretty large upfront time investment.

Expand full comment
Deiseach's avatar

Your Nice Church Lady reminds me of this bit in Lewis' "The Great Divorce":

"At this moment we were suddenly interrupted by the thin voice of a Ghost talking at an enormous speed. Looking behind us we saw the creature. It was addressing one of the Solid People and was doing so too busily to notice us. Every now and then the Solid Spirit tried to get in a word but without success. The Ghost's talk was like this:

"Oh, my dear, I've had such a dreadful time, I don't know how I ever got here at all, I was coming with Elinor Stone and we'd arranged the whole thing and we were to meet at the corner of Sink Street; I made it perfectly plain because I knew what she was like and if I told her once I told her a hundred times I would not meet her outside that dreadful Marjoribanks woman's house, not after the way she'd treated me ... that was one of the most dreadful things that happened to me; I've been dying to tell you because I felt sure you'd tell me I acted rightly; no, wait a moment, dear, till I've told you - I tried living with her when I first came and it was all fixed up, she was to do the cooking and I was to look after the house and I did think I was going to be comfortable after all I'd been through but she turned out to be so changed, absolutely selfish, and not a particle of sympathy for anyone but herself - and as I once said to her 'I do think I'm entitled to a little consideration because you at least lived out your time, but I oughtn't to have been here for years and years yet' - but of course I'm forgetting you don't know - I was murdered, simply murdered, dear, that man should never have operated, I ought to be alive to-day and they simply starved me in that dreadful nursing home and no one ever came near me and . . ." The shrill monotonous whine died away as the speaker, still accompanied by the bright patience at her side, moved out of hearing.

"What troubles ye, son?" asked my Teacher.

"I am troubled, Sir," said I, "because that unhappy creature doesn't seem to me to be the sort of soul that ought to be even in danger of damnation. She isn't wicked: she's only a silly, garrulous old woman who has got into a habit of grumbling, and one feels that a little kindness, and rest, and change would put her all right."

"That is what she once was. That is maybe what she still is. If so, she certainly will be cured. But the whole question is whether she is now a grumbler."

"I should have thought there was no doubt about that!"

"Aye, but ye misunderstand me. The question is whether she is a grumbler, or only a grumble. If there is a real woman - even the least trace of one - still there inside the grumbling, it can be brought to life again. If there's one wee spark under all those ashes, we'll blow it till the whole pile is red and clear. But if there's nothing but ashes we'll not go on blowing them in our own eyes forever. They must be swept up.”

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Expand full comment
MI's avatar

"For people that I do connect with, I feel like there is a drop-off after a month or hanging out 3-5 times that I just don't understand."

Are these small group interactions, or more like friend dating? The latter can be intimidating and uncomfortable even when people generally like and want to be around each other, because of the interpersonal intensity. I can think of maybe three people I've had intentional, non-romantic one-on-one time with, and I may be lying to myself about one of them not being a little bit romantic. My father has something like three friends he invites out for coffee and tennis about four times a year. My mom has some other mom friends where they'll go to an all day event once or twice a year. But they've found a book club that meets every month, and that's much more reliable. I think it might be common to do friend things once a month or so spread out over years, and 3-5 interactions in a month might feel like too much, like the relationship will be too high maintenance.

If this is what's going on, maybe try texting 2-4 other people something casual, like if they want to come over this week, or see something interesting, or go fishing at a specific time/place in the near future. But not too often.

(I'm very bad at this. There are a couple people my husband and I have been meaning to invite over for *two years* and haven't managed yet)

Expand full comment
Maxwell's avatar

I wonder about the variety of communities you’ve tried. Have you tried going to daytime dance parties where everyone bops around in the afternoon and some are on party drugs and people tend to talk to new people a lot? Have you tried doing something like a boxing gym or crossfit gym, where your type of muscular build is more “in”? Have you tried weird made-up local variations of kickball with absurd rules that play at the park every Sunday?

(And even within these very specific activities there’s a lot of variation in socialness. The dance-show group I’m in likes our group but finds other local ones less friendly; my current gym has people who didn’t like the vibe of one of the other gyms in town. There’s a looot of these different social configurations! I’m surprised anew every time, even knowing I should expect to be surprised).

I had lots of trouble with meetups too, basically the same story as yours - they always just felt off to me. There are lots of different weird (in a good way) subcultures in any given country, so maybe, expand your search to things like weird dance venues or fight gyms or <other thing you may not have thought of>?

The dance stuff and boxing stuff and kickball thing are just particular examples from my own life, but as for the age question: I’m 31 and got into the local house music scene a year ago, and my current boxing gym a few months ago. (And just went to the first kickball thing yesterday).

Expand full comment
Alex's avatar

What city are you in? Maybe someone could meet you and give you more tailored feedback.

Expand full comment
covethistorical's avatar

Are you able to adjust your conversation to the context at all? Do you know what's appropriate to say to whom and when? Do you respond to subtler social cues than repulsion when corrected on your behavior?

Most clubs/gatherings, and even one-on-one interactions, will have unwritten rules. Are you aware of them and do you adapt to being corrected by the group or your conversation partner?

If not, that would be one of the very few genuine reasons I could see for consistently being pushed away or triggering repulsion. People do want a say in how an interaction goes, it should be give and take.

Is there any pattern to when you think you see this rejection in others? Is it direct or does it take a while to build up?

Expand full comment
User's avatar
Comment deleted
Jun 13, 2022
Expand full comment
covethistorical's avatar

I do agree with the other commentors that it's quite likely that there is a degree of anxiety and overinterpretation. Reducing that down to something more realistic, with help, is probably the best and most healthy first line of attack on the problem.

But barring that, it would still be useful to try to figure out what is triggering this reaction repeatedly in others, especially in the early parts of your interactions. Recovering from that later is like fighting an uphill battle.

It's impossible to tell what it could be from a few online posts. In writing you seem social enough. I'd take up that offer by Joseph Pusey to have a zoom chat. Others might be afraid of telling you for fear of being offensive. It's really quite subtle what sets someone's first impressions.

The rules always vary so you can give up on trying to learn them beforehand. Rather it's about reacting in the moment to what's happening in those you interact with.

Common pitfalls I could imagine could be appearing too serious or too needy. That might be very hard indeed if you happen to be a heavy-thinking nerdy type with a great need for a new friendship!

But don't impose that on those you meet. A connection has to build up one step at a time, it can't start with that. So if they appear bored, change the topic. Intimidated? Crack a joke. Interested? Keep talking! And if they appear stuck and like they want to leave, let them go! Perhaps they'll come back later.

If you find it really hard to judge what the other person is feeling in the moment, that might be really useful to look into more. If that doesn't come naturally, the rest will be really hard by default.

You're working on it so I'm sure you'll find your way. Good luck!

Expand full comment
Mystik's avatar

First, I agree with most of the other commenters, I suspect that people aren’t actually repulsed by you, and rather you’re struggling with social anxiety.

On the other hand, maybe you’re right. In which case I guess I will offer two pieces of distinct advice.

First, go to some place where there will be social interactions that you can observe (like a bar). Try interacting there. Then, go another time and carefully observe the way people are interacting. Note how they are different from you, what reactions they get, and the general rules/trends that govern their interactions. This is basically how I function to some extent (my friends in middle school said I was like an alien observing them, which I was knowledgeable enough to deny. But I have enough experience now that I don't have to be so blatantly watchful).

Advice two: try interacting with someone. At the end, if you think they’re annoyed/repulsed, ask them (but do so succinctly and calmly if you can). Generally if you have pissed someone off, they’ll tell you as long as you don’t make it too awkward, and if you haven’t you’ll get reassured. You can use this info to update both how you interact and how you judge people’s reactions.

Christina the StoryGirl's avatar

Huh. Unlike most of the other commenters, it didn't immediately occur to me that you might have a distorted perception of how people are reacting to you. I initially took your assessment at face value, and wondered if maybe you're inadvertently doing something with eye contact or body language that is making people uncomfortable.

But having seen multiple commenters theorizing that you might have a distorted perception, I think they might be right and thus...unfortunately...none of us can pinpoint what you're doing "wrong."

So now I'm coming down to thinking you need to find a *much* better therapist who can pinpoint it and discuss your goals with them in very concrete terms. Maybe especially if you experienced early childhood neglect and possibly missed a social developmental stage somewhere.

But more importantly, don't blame yourself too much. This is a universal problem.

Making new, close friends as a grown-ass adult is *hard* because friendships at any age require a lot of casual, unstructured time to form trust, build affection, and coalesce. Friendship requires hours and hours and *hours* of repeatedly "hanging out," which sounds like doing nothing but is actually trust and affection-building over so much time that people aren't capable of keeping up a pretense.

That's why most people tend to form lifelong friendships as kids/teens/college students: Because that's the stage of life where there are fewer competing obligations, and people have the availability and societal permission to "hang out."

Grown-ass adults, on the other hand, have careers, homes, chores, family members, etc, etc, etc that get prioritized over "hanging out," plus the notion of wasting time "hanging out" at all tends to be met with disapproval unless one is doing it with a romantic prospect / partner (for some reason).

So sure, meeting people at group activities is a good start, but for friendship to form, you're going to have to get them away from the group and/or activity and onto your couch (or theirs), a few times a month for some minimally structured socializing. I say "couch," but this could also be around a campfire or beers at a pub or whatever. The key here is to be maximally unstructured so that personal conversation can happen - you can't effectively hang out while playing a soccer game or a super complicated, intense board game or whatever. Maybe after, but not *during.*

I know, that sounds daunting and hard. But as far as I can see...there's just no other way to do it. You just have to make a specific invite to a prospective friend, "Hey, you want to come to my place Friday to hang out with a few beers / bad movie night / etc?" and then see if they do.

(Note: It's really important to make "hang out" invitations very specific so the other person understands it's a real invitation, rather than code for "I like you well enough to politely pretend I think we should hang out but not so well that I have any intention of actually doing so.")

Last, I'd like to recommend Captain Awkward in general, and these entries in particular (https://captainawkward.com/2011/12/19/question-153-how-do-i-make-and-keep-friends/), (https://captainawkward.com/2014/04/29/566-my-closest-friend-broke-off-our-friendship-and-now-i-dont-know-how-to-stop-feeling-lonely-and-isolated/).

Come back and keep us updated.

User's avatar
Comment deleted
Jun 13, 2022
Christina the StoryGirl's avatar

I should clarify: You don't have to manage your expectations about forming deep, intimate friendships. That's totally doable, even as an adult.

You just have to be aware of - and work around - the unspoken convention that grown adults aren't "allowed" to spend the kind of time together that it takes to build a platonic friendship from scratch.

I have a pair of very outgoing, wonderfully extroverted friends who effortlessly flout this convention whenever they feel there's a potential connection with a stranger. They actually follow up on casual conversations in a pub on social media and will say, "Do you want to come with us to [a show / pub / convention / etc]?" I was surprised when they did it to me, and it's a revelation every time I see them do it.

You seem to be very pleasant and agreeable here in writing and I think we're all rooting for you. If it's not too personal, what big city do you live in?

joe's avatar

If you would find it helpful, I would happily have a Zoom call with you for 30 minutes and try to give you honest feedback about your interaction style. I suspect you're better than you think.

Carl Pham's avatar

Please pardon the presumption, but you did ask. So first of all, I think you should get clear on one massive empirical fact: most people don't get along with most people. Most people you meet, or I meet, will not like us very much. If we have good social skills they will tolerate us, work with us, consider us a distant acquaintance to whom they'll wave a (fake) cheery hello if they haven't seen us in a while, willingly go to a ball game with us, or have a drink in a bar after work -- but if we were struck by lightning they would read our obituary without more than a momentary pang. "Oh look Carl died by being burned to a crisp while prating away on the Internet! How sad! I met him once, he seemed decent....huh, fucking Dodgers blew it again against the stupid Giants...oh drat I'm late for getting my hair cut, better jump."

And why should it be different? We're all very different. We have different likes, wants, values, tastes, fetishes, drives, directions we want to go, ideas for the betterment of humanity, pet peeves, things we hate other people doing. From a certain Martian anthropologist perspective, it's kind of a minor miracle and tribute to a certain amount of emotional discipline and careful early training that we're not trying to kill each other all the time.

So...maybe start off by adjusting expectations, if necessary? If you go through 100 people and find 1 that kind of likes you OK and you like him or her back -- that's darn good[1]! Celebrate! I sure would -- I consider myself reasonably socially adept, but if I find 1 in every 100 people I meet potentially a friend I consider myself doing quite well.

And of course the problem gets much worse if you're unusually smart, unusually nerdy, don't speak well[2], are unusually nervous and introverted, or normally dwell in a work/living environment that just happens to select against the kinds of people who would get along well with you. So the odds could already be stacked a lot higher against you. Doesn't mean you can't succeed, but it means it will take a while and probably be difficult, so cut yourself some slack on just how quickly you expect to succeed. (And bear in mind if you have *other* existential struggles having more friends won't necessarily help with those. If I hate my marriage or my job, I'm not going to be happy even if I have a dozen great friends.)

It's exceedingly hard to know why you might have had these negative rejections without further detail, and it would be wildly inappropriate to give specific advice without knowing you in person anyway (this is what therapy is for), but I will close with two general observations based on my own personal experience:

(1) Be *certain* you are actually being rejected, that people are actually having strong *negative* feelings about you. Be sure they're not just drifting away, becoming bored or uninterested, looking in a different direction and so forth -- because of factors in their own life, and because they're already among the 99/100 who aren't going to be your friends anyway. It's easy to misinterpret lack of interest for something really negative, if you're trying hard yourself and painfully sensitive to rejection. When you really want to succeed at something, you tend to take every twitch and bobble of bad luck as some kind of Sign From Above That You Are Going To Fail, but that is just fear talking. You need cold hard facts to move forward, a clear picture of what is real and what is not, so be quite sure you are not overinterpreting what you have experienced.

(2) As a general rule, it's not actually hard to get people to like you. The sure-fire method is quite simple: like them. Pretty much everybody likes being liked. We all like someone who is impressed by us, wants to listen to us, is interested in what we have to say, is interested in what we have done, wants our opinion on this subject or that, reacts to what we say as if it is important and interesting. So that is kind of all you need to do, more or less. If you like other people, and they can see that, they will almost always like you back.

Of course, it's operationally hard to do this, if you don't have a lot of practice[3]. It's hard to focus our attention on someone else, let them be the star, listen far more than we talk, ask questions that allow the other person to pontificate about his or her thoughts and experiences *without* interrupting ("Oh that reminds me of this very interesting story about me and my thoughts...") You really have to *be* focussed on the other person, forget about your own needs or thoughts for the moment. Imagine that other person is your research project, you're writing a (future) best-selling biography about him or her -- and you need to get all the good data, figure out just what it feels like to look out of his or her eyes, so you can write a book that will capture that brilliantly.

And to be motivated enough to do this, you really do need to be actively looking for what's good about people. That is not as hard as it sounds. Unless someone is a real scoundrel, he or she is *bound* to have admirable traits. Zero in on these. Don't worry about whether you like the whole package, at least at first (since your ultimate goal here is to *be* liked, not to be quickly making judgements about whether you like someone else). Ignore what seems funky, and focus on finding what is admirable and interesting.

Maybe it's small -- he really cares about his dog, is thoughtful and patient and loyal. Maybe he takes great pride in how he takes care of his body or works on his car, really digs in and learns a lot about it, strives for perfection, is innovative, clever, painstaking. Maybe he speaks poorly but says very original things. Maybe she is shy and stutters in person but expresses herself in writing with soaring grace.

And so on. If you look with energy you can almost always find something creative, interesting, beautiful, admirable, about almost anybody. So you can look for that -- make a practice of looking for it. As a big bonus, we all usually react very positively to someone looking for and finding our best points, the things we do of which we are proud. Wouldn't you like that? So do all of us. So that's your royal road to being liked.

------------------------

[1] https://www.kiplingsociety.co.uk/poem/poems_thousandth.htm

[2] People who speak fluently have a big advantage in making good first impressions, but if you aren't naturally fluent, work on making what you *do* say more potent. Speak even *less* and cull out what is not original and interesting. I'm very fond of a Jane Austen quote on this subject: "She was not a woman of many words; for, unlike people in general, she proportioned them to the number of her ideas." If you don't have many words, but people feel like each word you do say is gold, that's a great place to be.

[3] But it gets easier with practice!

Wasserschweinchen's avatar

Just get a girlfriend instead. She'll be your friend, and she might help you make other friends as well.

Christina the StoryGirl's avatar

Are you serious?

Wasserschweinchen's avatar

Yes. Why do you ask? It's not uncommon for men not to have much of a social life outside their partner, and the skills/attributes needed to get a partner are different enough from those needed to make friends that someone who consistently fails at the latter may well have better luck with the former.

Christina the StoryGirl's avatar

It's also absurd to think that it's okay to put the onus of making one's friends on one's partner. Assuming the OP is even straight, it's not okay to set up the expectation that any woman he's partnered with can or should make friends for him.

Plus, even if the OP were to find a hypothetical girlfriend who doesn't mind the emotional labor of friend-making for the OP, what happens if they ever break up? Now the OP has lost a girlfriend *and* the circle of friends she cultivated?

This is a terrible idea and no one should *ever* rely on a romantic partner like that.

Byrel Mitchell's avatar

This seems really weird to me. Couples do division of labor on all sorts of things. My wife is more conscientious than me, so she does the budgeting. I'm stronger and mind dealing with disgusting things less, so I take out the garbage, pick up dog poop, etc.

I suspect almost every successful couple with a strong introvert and extrovert in it will at least de facto have the division of labor you condemn; the extroverted one will make (almost) all the new social interactions for the introverted one. What's wrong with that?

Edit: Not trying to be a jerk here. I just don't see the basis for opposing mutually beneficial divisions of labor, emotional or otherwise.

Christina the StoryGirl's avatar

Presumably you and your wife are both *capable* of managing your budgets and doing gross chores. As grown adults you certainly should be, because if one of you is ever incapacitated for any length of time, or if the relationship ends, you'll be required to do all the tasks - the ones you were doing plus the tasks your partner was doing for you - whether you like it or not.

And like managing one's finances, forming relationships with friends is something that every adult needs to be able to do themselves for their own basic safety and security. The end of a romantic relationship is difficult enough when one has friends on one's side offering loyalty and support; losing a romantic relationship plus an entire social network developed on the back of the relationship is a hell no one should suffer.

Paul Goodman's avatar

The reason it sounds like a joke is that it seems absurd to think that someone who has this much trouble socializing in general can just trivially "get a girlfriend."

Adrian's avatar

By his own description, OP is 6'3'' (190cm), lifts weights and "looks like it". Those physical attributes should make it easier to find a girlfriend, whereas they won't help much when looking for friendship.

Kenny's avatar

I think you're maybe under-estimating the effects of 'OP' also having "an anxious demeanor" on top of their physical attributes. I can easily imagine why that would be off-putting, or even frightening, to others.

Patrick's avatar

It's hard to give advice on this without knowing you, but I'll assume that people are not actually repulsed by you and that it's something in your head, mainly because I don't personally know a truly repulsive person, but I do know people who think that they are repulsive when it's really in their head.

I was in a similar-ish situation ~5 years ago, I had intense social anxiety and in very social situations I'd end up doing "ridiculous" things like hiding in the bathroom to pass time because I couldn't bear approaching people or withstanding a conversation.

Therapy helped me a lot, in that it made me understand that I was the first one to reject myself because I couldn't see my positive traits. Once I rejected myself, I made it very difficult for anyone to connect with me.

I learned that I had much more "space" in social interactions than I thought. People are willing to hear me out, and when I think I'm being awkward it turns out that people don't think I am - they are usually concerned that they themselves are being awkward.

Anyway, that gave me some confidence to take a bit more risk and "get out there". Asking lots of questions about them (what they do, what's difficult about it, etc) while being *legitimately interested* (almost) never fails. Wish you all the best!

ProtopiacOne's avatar

Try joining a comedy writing or standup class/workshop. It may do double duty of helping you find new acquaintances and giving you a new take on communicating.

20WS's avatar

Oh man... even people who aren't socially anxious are terrified of doing standup. I love this idea (it's exposure therapy, right?) and if OP actually does this I will be absolutely stoked.

ProtopiacOne's avatar

Sort of. Based on OP's description of himself and his social interactions, I imagined them as entirely humorless. So I figured maybe that's what's missing from his life. But yeah... worst case scenario, he can improve his public speaking skills.

User's avatar
Comment deleted
Jun 13, 2022
20WS's avatar

There are comedians who can get a lot of laughs while appearing under-confident - brashness and confidence are only one way of doing it. Mitch Hedberg or Maria Bamford come to mind.

User's avatar
Comment deleted
Jun 13, 2022
ProtopiacOne's avatar

I find that deep friendships are usually forged in the mines of laughter and/or pain (shared experiences). Meetups will get you only so far. But if you can make people laugh, they'll always want you around.

a real dog's avatar

> the person ends up having a quite negative perception of me and it shows in every word spoken.

Your problem might be a socially paranoid module in your brain that is hypervigilant for any negativity and interprets neutral behavior as hostile. A person close to me has that issue and is slowly making progress at overriding it; it sucks, and it does indeed isolate you socially.

Perhaps meet with people until they explicitly refuse your company?

> feeling like I am aging out of meetups

I'm older than you and I'm not aging out of anything I'd miss, socially. Don't worry about that part.

User's avatar
Comment deleted
Jun 13, 2022
a real dog's avatar

I'm someone weird and I have weird friends. If anything I'd recommend broadcasting your weird to the world to repel normies and attract people you'll find interesting, just be polite about it.

> I am friends with this freak, and can you imagine that he did/said this

I mean, if I encountered this particular pattern, chances are...

A) it was said in jest and you took it the wrong way - I certainly joke like that sometimes, about people after whom I'd jump into a fire, and they do so with me as well

B) the speaker is a shallow asshole, and it's some kind of a miracle you haven't filtered him/her earlier, as in within the first hour of conversation

I find that often people are mean unintentionally, sometimes they are mean because they're hurt and need to sort themselves out, rarely people are actually malicious. And those who are malicious tend to be malicious a lot, indiscriminately, and are easy to avoid once you develop a gut feeling for it.

Medieval Cat's avatar

It's very hard to tell from your post if people actually dislike you or if your anxiety makes you think people dislike you when they don't. Are you sure that it's the first scenario that's true? Has anyone explained why they are repulsed by you?

Kenny's avatar

These are very good questions!

User's avatar
Comment deleted
Jun 13, 2022
Kenny's avatar

That reminds me of how, even as a man, I can very much appreciate women feeling 'at risk' around men!

I pretty regularly 'size up' the people around me. It feels pretty instinctual. I'm mostly not particularly worried, even where there are people that seem much more likely to 'cause problems'.

Jack Wilson's avatar

Start hanging out at bars. Drink alcohol.
