669 Comments

To anyone in the UK that understands the housing situation well: do you know why the Ministry of Housing consistently reports that there are more than 650,000 "vacant" dwellings in the UK?

I'm guessing this is something to do with local authorities overestimating the housing stock due to a large number of old, uninhabitable buildings? The numbers just seem incompatible with everything else, and I can't understand what I'm missing here.

Not sure about the UK, but in the US most vacant dwellings are either temporary/frictional (apartments and houses vacant for a few weeks at a time between the old owner/tenant moving out and the new one moving in) or under construction (renovation or newly built) but technically "liveable" (i.e. doors and windows are installed, roof and exterior walls are complete, and water and electricity are hooked up). There are about 20 million families in the UK, so 650k vacant units could be largely explained if the average family moves once every seven years and a dwelling is vacant for an average of two months each time (with a lot of variance: sometimes the new family moves in within days of the old one moving out, and sometimes a dwelling is vacant for six months because of renovations or because the landlord or seller has unrealistic expectations about price).
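A rough sanity check of that arithmetic (a Little's-law-style estimate sketched in Python; the 20 million families, seven years between moves, and two months of vacancy are the figures assumed above, everything else is arithmetic):

```python
# Steady-state vacancies ~ (moves per year) x (average vacancy, in years).
families = 20_000_000        # approximate number of UK families (from above)
years_between_moves = 7      # assumed average
vacancy_months = 2           # assumed average vacancy per move

moves_per_year = families / years_between_moves
vacant_at_any_time = moves_per_year * vacancy_months / 12
print(f"{vacant_at_any_time:,.0f} dwellings vacant at any moment")  # ~476,000

# Average vacancy that would account for the full reported 650k:
implied_months = 650_000 / moves_per_year * 12
print(f"{implied_months:.2f} months")  # 2.73
```

So two months of purely frictional vacancy gets most of the way to 650k; an average closer to three months would cover it entirely.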

Secondarily, there are apartment complexes where the owner wants to do major reconstruction of the whole building, but it takes a while to get all the tenants moved out, so some units stand vacant while waiting for the last few leases to expire or for tenants to accept buyouts. Or the building is vacant and ready for reconstruction, but the city is dragging its feet signing off on the last round of permit approvals before work can actually start.

There's also some number of second/vacation houses and seasonal rentals. A certain number of ski lodges are going to be vacant in the summer, beach houses in the winter, and hunting cabins during the off-season. And of course, if a vacation house is owned outright by the primary residents rather than being a short-term rental, it's usually going to be vacant except when they're on vacation (and when they are, their primary residence is going to be vacant).

Also ppl buying to invest, also elderly ppl going into hospital or dying.

LW/ACX Saturday (4/29/23) Lex and Eliezer and The Dictator's Handbook chapters 7/8

Hello Folks!

We are excited to announce the 25th Orange County ACX/LW meetup, happening this Saturday and most Saturdays thereafter.

Host: Michael Michalchik

Email: michaelmichalchik@gmail.com (For questions or requests)

Location: 1970 Port Laurent Place, Newport Beach, CA 92660

Date: Saturday, April 29th, 2023

Time: 2 PM

A) Conversation Starter Topics: Chapters 7 and 8 of "The Dictator's Handbook: Why Bad Behavior is Almost Always Good Politics"

PDF: The Dictator's Handbook: Why Bad Behavior is Almost Always Good Politics (burmalibrary.org)

https://www.burmalibrary.org/docs13/The_Dictators_Handbook.pdf

Audio: https://drive.google.com/drive/folders/1-M1bYOPa0qRe9WVb7k6UgavFwCee0fti?usp=sharing

Also available on Amazon, Kindle, Audible, etc.

Eliezer Yudkowsky and Lex Fridman discuss the future of AI.

Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

https://youtu.be/AaTRHFaaPG8

Audio

#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast

https://lexfridman.com/eliezer-yudkowsky/

B) Card Game: Predictably Irrational - Feel free to bring your favorite games or distractions.

C) Walk & Talk: We usually have an hour-long walk and talk after the meeting starts. Two mini-malls with hot takeout food are easily accessible nearby. Search for Gelson's or Pavilions in the zip code 92660.

D) Share a Surprise: Tell the group about something unexpected or that changed your perspective on the universe.

E) Future Direction Ideas: Contribute ideas for the group's future direction, including topics, meeting types, activities, etc.

Hey, just wanted to post a link to my blog, The Intentional Taxonomist:

https://taxonomist.substack.com/

It’s an eclectic mix of stuff that might be of interest to folks here, basically my efforts toward a program of “making sense of the world” and also a meta-program of “making sense of making sense of the world.” Philosophy, AI, physics, taxonomy, psychology and hard drugs.

Check it out. Hope you enjoy.

I'll take a smile wherever I can find it department: S Korean President Yoon sings American Pie

https://www.youtube.com/watch?v=CCA77PX43lI

I think such puzzles are logic puzzles, which need an algorithm to solve. The algorithm must be generated, which is considered the problem-solving piece. From that standpoint, I think yes, that would qualify as reasoning. I just can't rigorously define it.

Here is a problem I solved in real life. I just asked ChatGPT this: Please solve this puzzle: A customer wanted to buy a gift certificate for someone, instead of a gift card. After searching, the sales clerk found one, and after buying it, the customer changed their mind, deciding a gift card was better. Gift certificates aren't returnable. How does the customer get the gift card without spending more money?

Here is the answer it gave:

"One possible solution to this puzzle is that the customer can give the gift certificate to the person they had in mind as a gift, and ask that person to return the gift certificate to the store in exchange for a gift card.

Since gift certificates are not returnable, the store may not provide a cash refund or allow the customer to exchange the gift certificate for a gift card. However, if the person who receives the gift certificate is willing to return it, the store may allow them to exchange it for a gift card, as long as they have the original receipt or proof of purchase.

Alternatively, the customer could try to negotiate with the sales clerk or store manager to see if they would be willing to make an exception in this case and allow them to exchange the gift certificate for a gift card, especially if the customer is a regular or loyal customer of the store. However, this may not always be possible or successful, depending on the store's policies and procedures."

It seems to think that the person buying the gift certificate isn't allowed to return it, but the recipient is, if they have the original receipt, which a recipient usually doesn't (but might in this special case).

I asked Bard the same question, and it actually came closer to my solution, but still relied on a special exception. I'm not sure if my solution was actually a special exception, but the sales clerk didn't think so.

If requested, I will post my actual solution to this "puzzle".

I hope this qualifies as the kind of puzzle you had in mind.

Seems like the answer would be to buy something with the gift certificate, then return the item in exchange for cash (unless there's some rule that you're not allowed to do that with a gift certificate purchase -- they'll give you the gift certificate back, but not cash). If that's the answer, let's see, what's the thought process for reaching it. . . Seems like it's recognizing that both gift certificates and gift cards are cash in a different form, but they differ in the rules governing turning them back into cash. And then a bit of thinking very slightly outside the box -- recognizing that purchased but returnable items are also really cash in a different form.

So it seems to me that solving this puzzle requires identifying a category: Things that are cash in a different form. Seems to me that's part of what I call reasoning -- taking a step back and placing the specifics of the problems into abstract categories. Maybe that's what you mean when you say an algorithm is required for logic puzzles?

Anyhow, seems like GPT4 didn't do that. I'm sure that if you asked it whether it is possible to return an item to a store for cash it would say yes. But it didn't bring that piece of information into whatever process it went through in trying to respond to the prompt.

But it did do that with my murder puzzle. The puzzle tells you all the different kinds of light that were NOT present when the witness saw the murder -- no moon, no street lights etc. Seems like solving the puzzle requires asking yourself what other sources of light exist other than those we are told were not present. If you ask yourself that, sunlight would be on the list.

>Seems like the answer would be to buy something with the gift certificate, then return the item in exchange for cash (unless there's some rule that you're not allowed to do that with a gift certificate purchase -- they'll give you the gift certificate back, but not cash). If that's the answer, let's see, what's the thought process for reaching it [. . . ] Seems like it's recognizing that both gift certificates and gift cards are cash in a different form, but they differ in the rules governing turning them back into cash.

I think the actual answer is simpler but relies on the same key, with the additional key that both gift cards and gift certificates are things that can be purchased in the store. So instead of returning the gift certificate, you use it as a cash substitute to buy the gift card.

What I mean by an algorithm is required is simply that one must generate a set or series of rules which, when followed like a recipe, get you the result you want. The reasoning is involved in generating the algorithm.

Lots of sources of light weren't present for the murder, including flashlights, sources of open flame such as matches and candles, TV screens or monitors, planets, comets, even lightning bugs. Reasoning would eliminate many of them, e.g. for providing insufficient light at 50 feet. The algorithm would provide the reason for rejecting a source of light. The algorithm might look something like: "list all visual observation methods; for each method, test whether a human could use that method to visually observe a subject from 50 feet away; if yes, test against being available on Dec. 11; if yes, then add to list of possible visual observation possibilities."
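That recipe is essentially a filter pipeline. A minimal sketch in Python; the candidate list and both tests here are purely illustrative stand-ins for the real-world judgment calls:

```python
# Toy version of the algorithm above: enumerate candidate light sources,
# then keep only those that pass every test.
candidates = [
    "moonlight", "street lights", "headlights", "flashlight",
    "candle", "lightning bug", "sunlight",
]

def lights_a_scene_at_50_feet(source: str) -> bool:
    # Illustrative judgment call: which sources are bright enough?
    return source in {"moonlight", "street lights", "headlights", "sunlight"}

def not_ruled_out(source: str) -> bool:
    # The puzzle explicitly rules these out (no moon, no street lights, etc.).
    return source not in {"moonlight", "street lights", "headlights"}

possible = [s for s in candidates
            if lights_a_scene_at_50_feet(s) and not_ruled_out(s)]
print(possible)  # ['sunlight'] -- the intended answer: it was daytime
```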

An algorithm for solving the gift certificate problem might be, "list all things for which a completed gift certificate may be exchanged; select from the list those in the control of the store to complete; select from THOSE the ones that can eventually result, through further exchanges, in a gift card of the same value." There may well be better algorithms, but I think this would work.
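That second recipe amounts to a reachability search over a graph of possible exchanges. A toy sketch, where the exchange table is invented for illustration rather than taken from any real store policy:

```python
# What can a gift certificate eventually be turned into, chaining exchanges
# the store itself can complete? (Hypothetical exchange table.)
exchanges = {
    "gift certificate": ["merchandise", "gift card"],
    "merchandise": ["cash refund", "gift card"],  # if the item is returnable
}

def reachable(start: str) -> set:
    """Depth-first search over the exchange graph."""
    seen = set()
    stack = [start]
    while stack:
        for nxt in exchanges.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print("gift card" in reachable("gift certificate"))  # True
```

The direct edge from certificate to gift card corresponds to the solution suggested earlier in the thread: use the certificate as a cash substitute to buy the card.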

I offer the syllogism: All algorithms are formed with reasoning. Therefore, if an entity forms an algorithm, it has used reason. Please PROVE this assertion wrong. Note that it is possible for an algorithm to produce other algorithms, and these secondary algorithms are not reasoned out.

Apr 28, 2023·edited Apr 28, 2023

Yes, that makes sense. The algorithms you describe seem like sets of instructions you could give the computer to get it to do a process people go through when solving puzzles of this kind -- it's a more deliberate and systematic version of the process I went through in solving the gift card problem. It does seem to me, though, that in the case of both the murder puzzle and the gift card one, either a human being or someone generating an algorithm for the computer to follow might fail to solve the puzzle, because the actual solutions involve a slightly unexpected instance of the category in question. The murder puzzle is intended to mislead the puzzle solver by talking about how various sources of artificial light are not there -- street lights, headlights etc. It is likely to make the puzzle-solver or the algorithm-maker expect the solution will come from mentally going through all the possible artificial light sources, rather than all *light* sources, including sunlight. In the same way, the gift card puzzle is likely to get the puzzle-solver listing things that are *obvious* substitutes for cash, such as checks, money orders, etc. A returnable purchased item might not make it onto the list. Seems like in addition to the algorithm many puzzles require a bit of thinking outside the box -- coming up with something that is an instance of what you're looking for, but sort of an atypical one.

At the end you say "I offer the syllogism: All algorithms are formed with reasoning. Therefore, if an entity forms an algorithm, it has used reason. Please PROVE this assertion wrong." Are you asking me to prove this algorithm wrong? If so, I don't see how that fits with the rest of our discussion. And I don't think I could prove it's wrong. I think it's true. (Though I also think there are other processes besides generating algorithms that also require reasoning.)

I was just approaching it scientifically. If you thought the syllogism wrong, then it should be easy to disprove. But if you agree, then don't.

It may well be that other things also use reasoning. But my own reasoning can't think of any now.

I wasn't saying that other things use reasoning, but that there are other processes besides algorithm production that require reasoning. For instance, recognizing deep regularities in things -- for instance that the amount of water a floating object displaces is equal in weight to the floating object. In fact, I'd say even *trying* to come up with a law that explains how high or low in the water a floating object sits requires reasoning.

That's what I meant, other processes. So you're saying wondering why something can trigger reasoning to get an answer. It's certainly possible. Maybe also other intellectual pursuits, like art appreciation, or poetic interpretation use reasoning.

Yes, it does qualify. I understand that one can't rigorously define reasoning, but that doesn't mean there's no sense in talking about it. We can't rigorously define twilight or gossip or friendship either, but they're all real things. I'd like to try out your puzzle, but I don't know the difference between a gift certificate and a gift card. Is one supposed to know that?

Gift certificates are usually exclusive to that particular store, gift cards can usually be used anywhere.

I wouldn't say usually. While Visa gift cards are very common, lots of stores and restaurant chains (and Amazon, and Steam) sell gift cards. There are frequently racks of assorted brand gift cards for sale at supermarkets in my experience.

Starbucks gift cards seem particularly popular. (Albeit not as much with me, since I rarely drink coffee.)

A gift certificate is an old-school gift card. Once upon a time, before gift cards, some stores would make out a certificate to a specific person. A gift card, by contrast, isn't personalized; instead you personalize the container it comes in, or put the gift card inside a personalized greeting card.

>and ask that person to return the gift certificate to the store in exchange for a gift card.

Regardless of whether or not it would work, what a terrible gift. "I got you this, go throw it away for me ok?"

Does anyone have an opinion on Jason Lowery's 'Softwar'? I can't tell if it's just a bunch of extended metaphors or if there's something real there.

Apr 26, 2023·edited Apr 26, 2023

There is a strange phenomenon that keeps happening and I can't find a satisfying explanation for it.

Old educational videos are so much better, clearer, more entertaining, and more informative than the vast variety of educational videos today.

This is a well-established trope that you will find in the comment section of every old educational video on youtube. Here's today's example that made me write this comment: this 1946 sex-ed video[1] about menstruation. I didn't actually understand menstruation that clearly before. I knew it has to do with womb linings shedding off and that it stops with pregnancy, but I didn't know the key piece of info that the linings are intended as a hosting rig for the potential baby that a new egg can possibly grow into. It's easy to offer the snarky explanation that this is because I'm a man, but there are several women in the comment section -- mothers with 3 kids, and women who have entered menopause -- basically echoing my experience. Aside from being informative, the video makes a quite careful and masterful choice of words, calling the womb lining that falls with the period "nourishment", pushing you to think of periods not as something bloody or messy but as an adorable gesture, a mother preparing 'food' in expectation of a child. That couldn't have been incidental.

Anyway, [1] is fantastic, but it's merely a special case. [2] is a 1953 US navy educational video about mechanical computers, and boy do they explain. [3] is a 1936 educational video about car transmission systems. Those are the ones I remember, see their channels for more examples. Search for "bell labs old videos" or "AT&T archives" to find a treasure trove.

Why are old educational videos so good? There is a well-known cliche here, "something something survivorship bias something something", meaning that only the best works of old were preserved to this day, while worse works weren't, creating an illusion that the works of the past were all good. I'm not convinced by this explanation; it seems too generic. There are plenty of areas where we look at old works and clearly say "yup, that sucks", so where is the survivorship bias there? Nowhere to be seen. Also, most of those videos were not commercial: the sex-ed ones and the navy ones certainly aren't, and even the ones produced by corporations weren't expected to turn profits, but to educate and subtly market. Hell, Bell Labs' videos were not even public. So there was no competition for those videos to thrive on; they are a fairly arbitrary sample from their time.

Another, slightly less common, explanation is a noise-to-signal one: all those educational videos were animated, and it was hard to animate things in the past. When the barrier to entry was this high, only the most passionate and best-financed were making animated educational videos, with a resulting very low noise/signal ratio (at a small overall volume). When animation became vastly easier because of computers and the internet, (relatively speaking) everybody and their dog could animate, and everybody and their dog could start a youtube channel and share their animations, so the noise is considerably higher (at a very high overall volume). This isn't contradicted by fantastic educators like 3blue1brown; the thesis says that the noise/signal ratio is high(er), not infinite, after all. It might even be the case that, after accounting for the much larger volume of educational works, the effect of the higher ratio is neutralized and we still come out on top in terms of net volume of quality educational works.

What are other explanations for this? Are there cultural explanations? Could it be the case that people in the past just knew how to explain a topic and that now we suck? This seems implausible, and there is a knee-jerk reaction against it because it has a "Back In The Good Old Days"(TM) feel, but skills and capabilities decline all the time: driving a car with a manual transmission, driving with paper maps and without GPS, navigating by the stars, stitching your clothes and/or shoes so they last years and years. Whether through technology or fashion or economics or demography or ideology, we lose skills and knowledge all the time. Could it be that the knowledge of how to impart knowledge itself, pedagogy, is one such skill that we're hill-climbing downwards? What is the gradient leading us? Where is the gradient leading us?

[1] https://www.youtube.com/watch?v=vG9o9m0LsbI

[2] https://www.youtube.com/watch?v=s1i-dnAH9Y4

[3] https://www.youtube.com/watch?v=JOLtS4VUcvQ

How about this being due to the modern overabundance of videos optimising for clicks with constant attention-grabbing shenanigans? Presumably this affects even the high-quality content which still needs to prevent their audience from being stolen by clickbait.

> calling the womb lining that falls with the period "nourishment", pushing you to think of periods as an adorable gesture, a mother that prepares 'food' in expectation of a child and not as something bloody or messy. That couldn't have been incidental.

Yeah, it's a lot nicer than the reality of the endometrial lining being the frontline in the war between host and parasite:

https://www.quora.com/Why-do-women-have-periods-What-is-the-evolutionary-benefit-or-purpose-of-having-periods-Why-can%E2%80%99t-women-just-get-pregnant-without-the-menstrual-cycle

Aren't they just different metaphors?

This is pretty tangential to the overall topic, but I dislike it when people call fetuses "parasites" in the colloquial sense of the word, to mean they take a lot and contribute nothing. That's not how the definition of the word works in biology: actual parasites decrease fitness; some biologists define them as predators that consume their prey in fractions instead of in whole numbers. And animals don't go out of their way to be infected with parasites (unless a parasite already in their bodies is making them do so), but every single animal in existence goes out of its way to have babies.

Babies are not parasites.

That's an overly narrow way of looking at parasitism. A parasite is just an organism that by its nature relies on another for sustenance. Sure, we're designed to want babies. But that's just an example of the parasitic relationship being so successful that it manipulates its host at the genetic level! It's a case of super effective mutualism, our genes survive through the fetus and so the direction of evolution is towards establishing and nourishing this relationship. But the mother's interest isn't in incubating just any fetus, only ones that will carry her genes long into the future. Shedding the endometrial lining is a way of testing and getting rid of weak fetuses.

The fetus being a parasite isn't incompatible with it also being wanted and loved and so on. But these emotions didn't develop in a vacuum, and viewing the fetus in terms of the biology of parasites and host adaptation is a good way to understand how our bonds to our offspring formed.

I will be convinced by your revolutionary new approach to parasitism when you manage to get it published in any medical or biological source worth its salt. Until then, I will continue to use the definition that generations of biologists have used and honed.

I will also note that by your definition, e.g., bees are parasites on the flowers they take their nectar from.

Bees and flowers are symbiotic. All symbiotic relationships probably started off as parasitic. The point is that there isn't a hard line between the two. We like to define sharp categories because that's how we think; nature doesn't need such boundaries.

Perhaps you would be less offended if we focused on behaviors instead of identity? I.e. instead of saying a fetus is a parasite, we can say that it has parasitic behaviors and the mother's immune system and adaptive responses react in kind? The point is to recognize these behaviors and how they influence adaptive evolutionary responses in the affected organisms.

Apr 28, 2023·edited Apr 28, 2023

I mean, you can't go all hippie all of a sudden and say "boundaries are like... so negative maaan"; Science's entire purpose is drawing boundaries and breaking them. There are no sharp boundaries between species either -- every individual of a species is a whole new genetic line -- yet we still drew an arbitrary boundary around species by, e.g., the criterion of reproduction, because it's a useful boundary. There are no sharp boundaries around multi-cellular organisms either. Hell, there are no sharp boundaries between Life and non-Life, most famously of all. That's the fun of biology. It can still be a rigorous science full of boundaries and clear categories, though.

>nature doesn't need such boundaries

Yeah Nature doesn't need Science at all really. We do.

>Perhaps you would be less offended

Oh, I'm not offended in the least bit; that just might be you projecting. All I'm telling you is that you're doing amateurish, mistaken biology: taking technical jargon out of its context, understanding it colloquially, and then drawing wrong conclusions based on that. I wouldn't be offended if someone started saying "We have to learn to swim against electric current"; I would just correct their very wrong understanding of what an electric current is.

>we can say that it has parasitic behaviors and the mother's immune system and adaptive responses react in kind

You can say anything you like my dude, the point is that you're wrong. Both the people who study parasites for a living and the people who study embryos for a living don't agree with you. If you can find one of them who agrees with you I can perhaps revise my position.

Apr 29, 2023·edited Apr 29, 2023

By your definition mitochondria are parasites on the rest of the cell. Can't think of any biologists who would find that useful.

Might also bear in mind that most of the job of the immune system is to kill off your own cells that are misbehaving. So defining the immune system as "that which kills invaders" is a huge oversimplification of what it really does. A better description would be that it's *both* the Army and the police. It does indeed deal with invasions, but it also does a lot of internal police work.

That doesn't mean anything, really. If it did, then you just proved that transplanted organs and donated blood are parasites too, since the immune system very much loves to attack those as well. Hell, you just proved that immune system cells are parasites, since the immune system sometimes loves to attack itself. If you're okay with those conclusions, I guess you can say that fetuses are also parasites in the same sense. I'm not.

Parasites decrease fitness. Symbionts that don't decrease fitness are not parasites. Despite the hardships of pregnancy, it's a far cry from being a noticeable cause of death for the mother. If the immune system targets things that don't reduce fitness, then the immune system is wrong, which is understandable: everything Evolution makes is wrong plenty of the time.

This whole line of thinking implies that the baby is an external organism that tricks the woman into thinking she should allow it to be nourished, rather than the reality that it began as literally a part of her (the egg) and is growing as a natural part of her own biological systems.

Apr 27, 2023·edited Apr 27, 2023

It's not an either-or scenario; drawing hard lines in biology is almost always wrong. But the fusion of egg and sperm begets a new independent organism. Crucially, the interests of the fetus and the mother are not 100% aligned. This substantiates the idea that the fetus is not a part of the mother. All of the mother's cells have 100% aligned interests (misaligned interests result in cancer).

Apr 29, 2023·edited Apr 29, 2023

Sure, but evolution cares only about the interests of the fetus. A woman[1] is just a mechanism for a fertilized ovum to create another fertilized ovum. I mean, you've heard of death, right? Death is about the greatest possible disutility for the individual. Given that the problem was solved a billion years ago -- bacteria and amoebas do not die of old age -- how is it that we do?

Because the death of the parents improves the fitness of the next generation. So off they go to the chopping block at the designed interval, pretty much as soon as they've done their reproduction/rearing task. The salmon swims upstream and then croaks, and we slow up and get eaten by hyenas just about when we've taught the kids everything we know, because it's better for the offspring if we don't stick around and consume resources they need.

-----------

[1] Men are even less useful.

Not sure you should discount survivor's bias so strongly. I can clearly remember, as can my siblings, lots of educational videos we were shown in school that to us were boring and useless at best. Or as my sister says, "Educational, but usually not in the way intended."

We attended grade school and high school between the mid-1960s and mid-1980s (there are four of us spread fairly widely in age). Public schools then, as now, tended to keep using classroom materials for a good while before replacing them; so what we're remembering would be instructional films/videos made between, say, the 1950s and 1970s.

Whether those materials can now be found and viewed online, I dunno. Maybe? I'd be mildly surprised if they can't, _someplace_; but perhaps not as readily google-able as we're used to with stuff from later eras.

"MST3K Educational Videos" pulls up a few. https://www.youtube.com/watch?v=dDjBsOkidek

>>"Could it be that the knowledge of how to impart knowledge itself, pedagogy, is one such skill that we're hill-climbing downwards? What is the gradient leading us? Where is the gradient leading us?"

There is the argument that when society forces half of the population to be either housewives, secretaries, or teachers, it ends up with excellent teachers.

Yep. Also secretaries: my former father-in-law was a longtime executive in the insurance industry, up to and including being a CEO, and by the 1990s he was willing to pay his longtime executive secretary pretty much whatever she asked because he knew that he "could never replace her today".

He, being an old guy, ascribed that dilemma to "young people today aren't disciplined/thoughtful/organized/whatever", the usual crap that old people always think. In actual reality it was that she was someone wildly overqualified to be a secretary but in 1960 when she started doing it her practical job options had been extremely limited.

My mother was a secretary to the head of [very large union] in the 70s. Very bright lady.

During communism we had many excellent teachers in Czechoslovakia. When communism ended, many of them left education and started their own companies. Better for them, and probably better for the society in general (those companies do useful things), but the quality of education dropped visibly. Sounds like a similar effect.

I have a strong sense that there's a quality, something like rigor and clarity of thought, that has been declining generally since maybe the mid 20th c. That could be part of the explanation. I watched the car film you linked to, and I felt a sense of, like, cognitive relief and refreshment that I've had before when reading/watching/hearing older material of various kinds.

Most people develop that particular strong sense (about qualities like "rigor and clarity of thought") as they get older; the first generation that doesn't mostly think it about the ones following will be the first ever. I personally am old enough now to have been on each end of that dynamic.

Undoubtedly there are times and/or places when that feeling is broadly accurate and also some when it is not; that distinction makes no difference to the widespread belief in it. Many adults in middle age and most adults of retirement age will be certain that it _is_ true, always and everywhere, and will be immune if not hostile to opinion or evidence to the contrary.

Why exactly that syndrome exists is a mystery at least to me; I've never heard a convincing explanation. But it does, and so can't sensibly be trusted as evidence to explain more-tangible and specific changes such as the one that Bi_Gates has described.

Expand full comment

Interesting. I am currently turning the corner into middle age myself. I've always been aware that older people have certain attitudes about the past, but as with so many things in life, I didn't anticipate that it would *feel* like this.

Expand full comment

You raise a very interesting point, and explain it well. Thanks for an interesting comment.

I don't have any pat answer either, alas. I think both your explanations (survivorship bias and signal-to-noise ratio) have merit. To these I would add two more:

(1) Less intermediate vetting. Today we all have almost direct access to most information. In the past, because information transmittal was so much more expensive, it would go through a lot of intermediate vetting. Someone might make an educational video, but then the guy who commissioned it might think, no, that kind of sucks, so I'm not going to distribute it much, maybe I'll ask for a revision, et cetera. In effect, there were legions of editors and editors-of-editors, which would get between the consumer and the producer. A lot of second-guessing and judgment, a lot of filtration. That would definitely wipe out something quirky and brilliant, but it would also remove a hell of a lot of dross. So the median quality of what survives would be a lot higher, although it would also necessarily lose the most idiosyncratic but brilliant stuff.

(2) The habit of lower attention span, and the expectation that learning is epiphany instead of a process. In part through the enormous improvement in our ability to entertain, we have become a lot more used to education as necessarily being entertaining, and to understanding coming all at once, easily, in a flash, instead of being something that is built slowly and with no small amount of intermediate frustration.

And educators have become used to this. Where once before, an educational video might just have taken for granted that viewers would pay close attention to a 20-minute careful exposition, now anyone competent in the field would start to worry about 5 minutes in that attention will wander, people will start skipping ahead looking for sound bites or key grafs or some other epiphany. (You see the same thing in textbooks and educational books, of course: the death of the idea that one starts at page 1 and just reads through with attention to page 50, and the appearance of special boxes, bullet paragraphs, colors and sketches, stuff that promises to encapsulate the epiphany all at once.)

But that necessarily means if you are the kind of person who prizes a methodical if longer construction of understanding, you will be less satisfied with the more modern edutainment kind of approach. The modern approach doesn't necessarily lead to less understanding in the end, but you kind of have to do it over and over again, because the student doesn't get that far on each repetition, so ironically it ends up being less efficient.

There are educators who can do both things, who can be both entertaining *and* methodical, but that's a very tall order in a teacher, to have both the skills of a Feynman and a Robin Williams, so there will be far fewer of those than in an age when it was just the skills of Feynman you needed.

Expand full comment

Also, when many people were involved in the project, you had a specialist for each part. Someone to design the lesson. Someone to operate the camera. Someone to choose the music. Someone to cut the video.

Now the barrier to entry is much lower, but you typically start with doing everything alone. And if you monetize the videos and start making nice money, *then* you spend some of that money on specialists to help you make the next videos even better.

When you make a video for your employer, your audience is already determined, your salary will be paid, and you only focus on making the video good and professional. When you make a video for the internet, you do it in your free time (less convenient) and you also need to consider marketing.

> once before, an educational video might just have taken for granted that viewers would pay close attention to a 20 minute careful exposition

One of those example videos is 40 minutes long. And if it teaches you an important topic well, that is time well spent! The problem is, you need to decide in advance whether you trust the video to be worth spending 40 minutes on. This is not an issue when your employer tells you to; but on YouTube it would compete with many other videos.

So another important difference is the entire context of watching the videos. Is there some recommendation system (employer or school) that picks the videos for you to watch, or do they have to compete against other viral content? Can you make a sequence of videos and expect people to watch them in given order so you can build on the previous lessons, or does every video need to be self-contained?

Is it important for the lecturer to be physically attractive? Should the video focus on showing the lecturer's face, or rather the thing that is being explained?

If your video needs to be viral, self-contained, short, and show a nice face... it is still *possible* to also make it good from the educational perspective, but the more talents you need in the same person, the less *likely* it is to happen.

Expand full comment

I have no insight, I think the obvious answer is your #2 (signal/noise), but I wanted to thank you for linking to these amazing videos. I am now much more knowledgeable about cams and differentials.

Expand full comment
Apr 26, 2023·edited Apr 26, 2023

This man understands what it's all about 😁

"Don't disrespect the Irish. They can be mean".

https://www.youtube.com/watch?v=WOay2e0yOGs

If you make this recipe, don't eat the bread plain. When it's still slightly warm, *cover* a slice in butter. Moar butter moar better. You'll thank me for the recommendation!

Expand full comment

Sorrows of Werther

William Makepeace Thackeray

Werther had a love for Charlotte

Such as words could never utter;

Would you know how first he met her?

She was cutting bread and butter.

Charlotte was a married lady,

And a moral man was Werther,

And, for all the wealth of Indies,

Would do nothing for to hurt her.

So he sighed and pined and ogled,

And his passion boiled and bubbled,

Till he blew his silly brains out,

And no more was by it troubled.

Charlotte, having seen his body

Borne before her on a shutter,

Like a well-conducted person,

Went on cutting bread and butter.

Expand full comment

This was quite fun to wake up to.

Expand full comment

https://www.msn.com/en-us/health/medical/gastrointestinal-anatomy-varies-widely-among-people-study-shows/ar-AA1ahaws

" UPI News

Gastrointestinal anatomy varies widely among people, study shows

Story by Dennis Thompson, HealthDay News • Monday

People are often reminded that they are their own unique person -- and a new study says that's particularly true of the digestive tract.

A recent study revealed striking differences in gastrointestinal anatomy, even among a small group of people. Photo by Alice Day/Shutterstock.com

Dissections of a few dozen deceased individuals revealed striking differences in gastrointestinal anatomy, even among a small group of people.

Some livers were larger, some intestines and colons were longer. Crucial discrepancies were observed between men and women.

And some organs were even located in the wrong place, the researchers said."

""The last study that really quantified and investigated variation in humans was published in 1885," McKenney said. "For all of the technological advances and how amazing and individualized medicine is, we actually might also benefit from going back to square one and learning more about our bodies before we try applying all the technological Band-Aids."

To that end, McKenney and her colleagues dissected and measured the digestive organs of 45 people who donated their remains to the Anatomical Gifts Program at Duke University School of Medicine."

Expand full comment

In Avatar: The Last Airbender, a subset of people have the ability to control the movement of blood. It was always depicted as a "puppeteering the bodies of others" ability, but I always thought it would be more interesting to think about its applications to vascular surgery and medicine.

I wrote a short story about it here if you’re looking for a quick read: https://solquy.substack.com/p/41423-the-consequences-of-hemodynamics

Expand full comment

People talk about wanting "executive experience" in a POTUS, but why? I would understand if they meant CEO of a large corporation, which is an extremely difficult job, but just as often they mean governor of a state, which seems like about the easiest job in the world. It depends on the state, but how much work does a governor have to do? How much good or bad can a governor do? In the South, during the backlash to Reconstruction, most states severely limited the power of the governor. Pretty much anyone could be the governor of Texas and it wouldn't change anything, for instance. To me, all being the governor of a large state shows is that you have the ability to get elected governor of a large state, which is politically impressive, but doesn't say anything about your executive skills in terms of running a large organization.

What governors have stood out as being really good or really bad at governing their state?

Expand full comment

Some governors have a lot of power, some have less. One thing they almost all have in common (if they are successful) is the ability to select and appoint competent underlings, and that is an absolutely mission-critical skill if you are President, because the US Executive Branch is one of the largest organizations on the planet, with 2 million employees and a budget of $6 trillion. The President appoints about 1200 leadership positions (not to mention several hundred Federal judges), and the ability to pick out good leaders to fill them is extraordinarily important.

Another thing governors in most states have is the ability to get along with powerful interest groups and people with conflicting goals. However much actual power the governor has, he is nevertheless, because of his position, the focus of all the pressure of all the people and interest groups that want the direction of the state to be this instead of that. And since he is one person, he can't be schizophrenic, as the legislature can be, and sometimes agree with A and sometimes with Not A. (The legislature can do this because it's a multitude: some Republicans can be MAGA, some can loathe MAGA. Some Democrats can be Defund The Police! and others can say that's silly. We construct an overall impression based on some weighted sum of the individual positions -- but we don't construct an impression of insanity, because all the voices are coming from different people. We would, if it were the governor doing this.)

That, too, is an important quality in a President. It *is* a bully pulpit, and it is also subject to all the pressures from groups and individuals who want to direct the future of the country. So someone who can say stuff that is self-consistent but also not piss off large groups of people, and remain not far from the consensus of The People, is going to do a lot better than someone who is used to just being one voice among a multitude.

Expand full comment

President of the United States is a pretty unique job. There are no other jobs that closely match all of the duties. But there are lots of skills that are helpful. Those skills are not all found in a single place, but some jobs match more closely than others.

A partial list of these general skills might look like:

1) Executive decision making

2) Governmental leadership

3) Federal Government

4) International Relations

5) Military leadership

6) Crisis management

Someone who is/was governor should have 1 and 2, and maybe 5 and 6. This will vary by state and what things did or did not happen while they were governor (for instance, a major hurricane or other natural disaster can lead to 6, but most states don't have them very often). Being governor is not a guarantee that someone has gained any, let alone all, of those qualifications, but it's a lot closer than almost any other job. A CEO, as an alternative, might only have #1, maybe #6.

Someone who was a federal senator would have 3, and depending on committee assignments could have 4. A former Secretary of State might have all six. A Vice President may also have all six, though VPs can sometimes sit on the sidelines and not actually handle those topics. Governor, Senator, SoS, and VP are the four positions most likely to lead to election as President. At any given time there is only one VP and one SoS, which naturally limits how many of them can be considered for President (and often they aren't interested or otherwise disqualified). There are 100 Senators at any given time, which greatly expands the possibilities, but they are generally lacking in some key experience (most will only have #3). With 50 governors, that's a lot of tries at finding the right person, who is interested, and already has a decent amount of the needed experience. There's also a good chance you can find one or more that also have some other experience (previous Senator before becoming a governor is huge and not terribly rare).

Expand full comment

Secretary of State was a pretty common path to the presidency pre-Civil War, though in most cases it wasn't the main or sole qualification. (Jefferson and Madison obviously did a few other things, James Monroe had been a Governor, John Quincy Adams a Senator). But no SoS has been successfully elected to the presidency since James Buchanan.

A slightly different 2016 might have changed that. But the century and a half gap would still have been there. And Clinton's appointment was arguably more a result of her presidential prospects (keeping her onside after 2008) than a precondition for them.

Expand full comment

Agreed in practice, but the skill set is there and should be considered more highly than it is. I think a lot of that has to do with the nature of presidential elections, especially since the invention of TV and TV debates. A competent and skilled individual who has all of the skills needed to be president is far less likely to get elected than a charismatic idiot who looks good on TV. Usually we get some competence to go with the "presidential" look, but it's not guaranteed and when you have to pick between the two those appearances win.

As you say, Clinton was almost certainly made SoS in order to build the experience needed to run for president.

Expand full comment

> CEO of a large corporation, which is an extremely difficult job

It's funny because I actually think the vast majority of CEOs (along with HR, PR, "consultants", ...) are actually dead weight who can be replaced by GPT-5.

Expand full comment

I have to disagree in principle with this, in part because I have worked pretty closely with some CEOs of corporations ranging from mid-sized (low 100s) to moderately large (mid 1000s). What this has taught me is that, first, the job is a lot harder than it looks from the outside. Your main job is not really Making Big Decisions, although that sometimes happens. Normally you don't have enough info to make big decisions, and you don't want the company direction to ever become so wrong that it *needs* to make a big decision to correct.

What you mostly have to do is, first, select excellent people for your next in command, and have ways to get all the next rank to cooperate with each other, but not so much that independent ideas are suppressed, and have them keep you informed so you know in general what's going on, but not give you too much, or too poorly summarized data, so you're not overwhelmed. They have to consult you, because your mind functions as the sort of central clearing house of ideas, where they flow up from below and out to other places below, and it's important that everything important the company does pass through at least one mind that groks it all. But you don't want them relying on you for making decisions at their own level, they have to have the ability to judge what is at their pay grade and do it without bothering you. It's a delicate balancing act, and you need to pick people who are good at it (and you have to be good at it yourself).

Beyond this, you have to be good at engendering esprit de corps; you have to be able to make people want to cooperate and contribute, to feel like their independent voice is valued but also that the group goal is worth compromising that voice a bit. They have to feel the company is going somewhere, probably not in exactly the direction they'd choose but a pretty good direction anyway. That there's A Plan and it's a good one, if not exactly what anyone would think up on his own. Solid B+ territory. This is also a tall order, and calls for a deep and subtle appreciation of group psychology and also the ability to rapidly suss out what individuals think.

If you want to say that most CEOs don't measure up to this, I would fully agree. Excellent leadership above the platoon level is increasingly rare the larger the organization gets, and at the level of a Fortune 500 company I'd say it's so intrinsically rare among human beings that it's kind of a miracle these companies ever find it at all. It's why truly outstanding CEOs become legendary.

Expand full comment

Thank you, it was also my impression that the people who can do this job well are worth their weight in gold, but most people in this position actually suck at their jobs.

Or maybe "suck" is too harsh a word; they certainly do it better than I could, but compared with what should be done, it is pathetic. Yet they can get a high salary for doing a half-assed job, simply because the demand is great and the supply of competent people is low, so a barely-competent one will still be paid generously.

Where can one even *train* for such a role? I assume that learning management at school is insufficient. Starting as a lower manager means... doing something completely different? First, because your job is different. Second, because good CEOs are rare, so as a lower manager you probably have a bad one above you, and if you copy them you learn the wrong things. Probably the best experience would be to start your own company. To start as a CEO of something very small, and then expand -- either your company grows, or you sell your company and accept the role of a CEO somewhere else?

Expand full comment
Apr 27, 2023·edited Apr 27, 2023

Whether someone is good or bad at the job, it's a required function. Some companies spread the functions among the C-suite or have two or more of what are essentially co-CEOs, but the work is required at all organizations. It's also not something you can spread too thin, or you lose the value.

Even a pretty bad CEO is worth a ton, compared to a company trying to run without that functionality. You're also placing the weight of the world on this person and making them personally liable for a wide variety of negative outcomes if they or the company mess up (including a high probability of being personally sued). Six-figure salaries (or seven-figure for bigger companies) for even moderately competent CEOs is cheap compared to how bad things would get otherwise.

CEOs are typically trained by being very near a current CEO (COO and CFO are frequently promoted to CEO). You get someone in a position that not only sees the current CEO in action, but actively helps make decisions and carry out the goals of the organization. In turn, those positions are filled by people who work closely with their predecessor - the Controller becoming CFO, the Plant Manager becoming COO as examples.

Expand full comment
Comment deleted
Expand full comment
Apr 27, 2023·edited Apr 27, 2023

No, no evidence. I'm just reading Bullshit Jobs right now.

Seriously, I don't think I need to have founded 3 startups in order to have an opinion like that; after all, the OP I'm replying to probably isn't a governor and was never close to a governor.

My gut hunch is that professions that consist of nothing but talking are bullshit. "Talking" here is a bit vague; after all, it could be said that school teachers or war correspondents do nothing but "talking" all day, but the kind of "talking" is qualitatively different. Explaining things (especially to bored, ungrateful, and immature fuckers) is hellishly difficult, and talking under the sounds of fighter jets and machine guns can be fairly said to involve more skill and nerve than any ordinary talking.

Some professions might seem to involve nothing but talking, but the talking is coordinating something in real life that's actually useful. Wedding planners, they do a whole lot of talking, but their talking gets the wedding going. If I shut them up, the wedding doesn't happen. That's a material effect. Singers, poets, writers; yes those entire professions are all talking, and yes they are sometimes indeed full of shit, notoriously so. But a few of them are really masterful with talking, they can move whole crowds of tens of thousands. That's a material, noticeable effect.

I don't despise those jobs. I might despise particular people working in them; I might vitriolically hate my university's professors because they were fucking know-nothing parasites who didn't have a clue how either pedagogy or their own field works. I might want to remind some journalists that browsing Twitter and quoting it is not a real job. But I immensely respect educators, journalists, and artists in general.

But I don't think the jobs I listed earlier (CEO, HR, PR, "consultants", ...) have any redeeming qualities to them. I don't think they are even real. They look to me like nothing but inane, braindead talking all day. What's there to them other than sending emails of "Let's circle back" and "Let's touch base" and all that fucking obnoxious corporate speak? Ugly PowerPoint presentations? Buggy spreadsheets hallucinating columns of useless numbers to justify a pre-made decision? If I remove those jobs from physical existence, what do I actually see on the ground as a consequence?

It's a literal meme how corporate consultancy is a useless [1][2] pseudo-profession of saying the obvious in a PowerPoint slide and then getting paid obscenely. HR or PR or whatever might not pay as much, but they still involve doing essentially nothing: nagging people who do the real work, droning on and on about contracts and regulation and red tape and diversity statements. CEOs... I don't respect them. What do they do? They manage people who manage people who manage people who manage people who manage people who manage people who do all the work? That sounds awfully like bullshit to me. What if I cut 3 or 4 levels of this huge management chain? Would the work on the ground be that much affected?

I think the Corporation is the worst thing to happen to human work.

[1] https://www.youtube.com/watch?v=kXGhPmby0rY

[2] https://www.google.com/search?q=consultants+are+useless+meme

Expand full comment

>How much good or bad can a governor do?

Mostly the same as a President. They can issue executive orders, they can veto legislation (governors have more power on that front because most can veto specific lines instead of having to nix the whole bill). They can pardon people for state crimes.

They can't start a war (although technically neither can the President) or launch nukes, but they're doing the same job on a smaller scale.

Importantly, like the President they're dealing with state congressmen who are politically opposed to them, and are taking the brunt of abuse from angry people, so if they become President you can be pretty sure that stress won't cause them to break down halfway through their term. This is unlike a CEO who can fire anyone who stonewalls them at their current job. If you're looking for impressive governors, look at the ones with an opposition-majority Congress.

Expand full comment

Agree with those points regarding a state governor, particularly governor of a medium-to-large state that contains regions which mostly fall on each side of our ongoing cultural schism.

More broadly, having some experience as CEO of a formal entity also seems to me to have some value almost regardless of the specific sector. Having been in such a position myself (at a much smaller scale than the examples being discussed here), I can attest that there is a qualitative difference between that role and any other. For good or ill, and to of course varying degrees for lots of reasons, that role simply _feels_ different in a way that you wouldn't necessarily guess (at least I didn't) until you're in it.

So I am firmly in the school of wanting prospective POTUSes to have had some real-world experience in such a role, and ideally in more than one context. I'd strongly prefer that sitting in the Oval Office not be his/her first experience of that specific sensation of being the person at whom a lot of other people are staring expectantly for final decisions on stuff that matters a lot to them.

Expand full comment

Just one example, 4 words: Cuomo, COVID, nursing homes.

Expand full comment

Politico got ahold of a Word track-changes document showing the edits made personally by Florida's state surgeon general to the state Department of Health's analysis of COVID-19 vaccines. Turns out that his edits, which were retained in the final document published by the agency last year, added an unsupported claim that the mRNA vaccines increase certain health risks in 18-39 year old men.

Dr. Joseph Ladapo, who was appointed by Gov. Ron DeSantis 18 months ago after publishing national op-eds critical of COVID policy responses and vaccines, also deleted from the state's draft document the description of a research finding which contradicted his position regarding the vaccines.

In comments to Politico, Ladapo did not dispute the authenticity of the track-changes document. He said that his edits were a normal part of assessing surveillance data and that his edits were needed to push back against "biased data and interpretations" that "the federal government and Big Pharma" have released regarding the mRNA vaccines.

You can read the track-changes document here:

https://www.politico.com/f/?id=00000187-b36b-d739-a797-f3ef41e30000

and Politico's article about it here:

https://www.politico.com/news/2023/04/24/florida-surgeon-general-covid-vaccine-00093510

Expand full comment

Oh come on. Here's the actual language he inserted:

"Results from the stratified analysis for cardiac related death following vaccination suggests mRNA vaccination may be driving the increased risk in males, especially among males aged 18-39,” Ladapo wrote in the draft. “The risk associated with mRNA vaccination should be weighed against the risk associated with COVID-19 infection.”"

How that "may be driving" and "should be weighed" adds up to a bold-faced highly political "lie" seems like an exercise in ideologically driven hysteria itself. Plus (1) the guy is an actual cardiologist, and (2) he's the surgeon general, he's signing off on this report, and he's *supposed* to be on the look-out for risks to approved therapies in the state, it's in his job description to be cautious, and (3) even the FDA decided to cover its ass and add a warning[1] about this particular risk.

Surely reasonable men can differ on how important a tiny risk of heart problems is, when weighed against an equally tiny risk of serious health effects from COVID in a very healthy young population. I personally don't think the former exceeds the latter, but I don't think people who see it the other way are off their rocker, and a judicious statement that the one should be balanced against the other seems quite justifiable to me. If every modest disagreement is to be treated as "lies" versus "God's truth" then we're back to the 1600s and we might as well bring back the stake for heresy.

-------------------

[1] https://www.fiercepharma.com/pharma/fda-says-label-warning-coming-for-heart-inflammation-pfizer-bnt-moderna-covid-19-vaccines

Expand full comment

I'm a university student, and I was planning to have a career in academia, but recently I realized that I lack the emotional intelligence required to succeed in academia. Does anyone have career tips for low emotional intelligence?

Expand full comment

Academe is about as viable a career path as "musician", with tenure instead of stardom. Deciding to bail on it early - before you waste years of your life overworked and underpaid only to find yourself unemployed in your 30s with nothing to show for it all - is good, regardless of the reason.

Expand full comment

I know a number of grad students, and about half of them seem to have faculty advisors who are either on the autistic spectrum or else are so narcissistic they just do not give a shit about their students. If you can produce high grade research and papers, low emotional intelligence is no problem. Actually I should ask: what do you mean by low emotional intelligence?

Expand full comment

The main symptom I'd like to complain about is that people dislike me for no apparent (to me) reason. And they go out of their way to harm me, and I don't think that happens to the other students.

Expand full comment

Psychologist here. Well, there are a lot of possible causes of that. Yes, it could be that you aren't good at reading people and do things that offend them, so they are annoyed at you all the time. But it could also be that you've had the bad luck of ending up as the scapegoat within the circle of people you know. Or it could be that there's some false rumor about you going around, and that's why people treat you badly. Or it could be that you're in a bad, depressed state of mind and are misreading fairly minor incidents as evidence of powerful dislike and desire to harm you. I think you need to figure out what's going on before you make career decisions. I recommend you go talk these incidents over with a therapist and try to figure out what the situation is and how it developed. Also, even if you truly are bad at reading people and presenting yourself to them, it's possible to learn some techniques for getting along better with people. You may never be somebody who likes parties, but you can definitely learn to come across as reasonable and pleasant, so that people aren't annoyed and confused by you.

Expand full comment

This response demonstrates an extraordinary level of emotional intelligence. Which isn't relevant to the response per se, I just thought it was a fascinating juxtaposition with the original query.

Expand full comment

What does the evidence on the effectiveness of talk therapy for people who don't have diagnosable disorders look like?

Expand full comment

I wasn't suggesting talk therapy, but talking over in detail what's been happening and trying to figure out what's going on. As I said, there are a number of possibilities. If you can find a discreet, fair-minded person who has the time to really delve into this with you, you could do the process with them rather than with a therapist, but people like that are hard to come by. If in fact you have a social skills deficit -- you can't read people well, you bore or offend them without knowing it -- what you would be doing wouldn't really be talk therapy, it would be more like lessons.

Expand full comment

Have you discussed this feeling with a mental health professional? It might be the case that people really are trying to harm you, or you may be doing excessive pattern matching, which is causing mild paranoia.


Honestly, I would recommend academia as a haven for those with low emotional intelligence. It's mostly a patchwork of fiefdoms with unchanging boundaries and very slow turnover, and once you find your niche you can kind of just sit there, lord in your own manor, and pay almost no attention to what anyone else thinks or feels. As long as you stay within your castle (which may be quite small), you have complete power over what happens, and can mostly just beaver away at your intellectual interests for the rest of your life without a lot of people bothering you.

Out in the bustling, more chaotic world, it's a different story. If you want to make it in business, or as an entrepreneur, or shine in sales, marketing, communication, management, et cetera, you often need much stronger emotional/people skills, because you have to do a lot of cajoling, appealing, persuading, reading the room, reading the tea leaves, and fitting in.


Can you get a formal diagnosis of some sort and be very public about it?


I have a friend who has a visible disability and gets much worse treatment than me, so my intuition is that having legible issues is worse than illegible ones.


This wouldn’t happen in Canada. Where do you live?


That seems odd in a university setting where presumably there are guidelines on treating people with disability.


Low emotional intelligence is no bar to success in academia. Based on some of my wife's experiences (she is a refugee from academia) it may in some roles actually be a positive.

Seriously, no joke here.


Let me guess, lots of bad experience with egocentric professors during undergraduate/graduate studies? If so, that's not what I meant by low emotional intelligence. The professors don't treat each other like that because they know they can't get away without consequences, while they can with students.


No I was primarily referencing her decade-plus working in academia on the staff side, rising to the associate-dean level before she blessedly found a new path and escaped.

She also has a PhD so sure, has some stories from the student end of things. But it was the years on staff that would provide most of the fodder for the darkly-comic memoir that she periodically threatens to sit down and write.

Apr 25, 2023·edited Apr 25, 2023

"We are not evolving, we are devolving! There have been a lot of studies on the rates of deleterious mutations in human populations and all such researchers agree there is a lot of mutation going on - the rates, however, are debatable. To give you a bit of a feel for the numbers, a human mutation rate of 75-175 nucleotide substitutions (the mistakes in the DNA) per person per generation is widely accepted. However some believe the actual mutation rate is as high as 300. Whichever way you look at it, there is a lot of mutation going on in the opposite direction to evolutionary theory. In fact it’s much worse than this as there are many classes of mutations that can be passed on from one generation to the next. Among these mutation classes are mitochondrial mutations, nucleotide substitutions, satellite mutations, deletions, duplications/insertions, inversions/translocations, conversions, which can add up to possibly thousands of mutations per person per generation. Now again, most of these are not detected at the whole person level, but we are being mutated with each generation nevertheless. Most human geneticists will agree that the human race is genetically degenerating , with information being lost and thus reduced fitness of our species. This reduced fitness may be somewhere between 1%-5% per generation."

- A Strong Delusion 1.2 By Winston Smith - https://escapingmasspsychosis.substack.com/p/a-strong-delusion-12

Also:

"Yes, the human genome is degrading. This is a well-established, noncontroversial finding. This phenomenon is called “increasing mutational load” and is based on concepts developed by one of the great geneticists, H. Muller, roughly 70 years ago.[1]

Harmful mutations come into being all the time. The average newborn has 50–100 new mutations. Though most are harmless, about 1 to 4 of those are harmful.[2] Normally, natural selection causes people having those mutations to die out or not have children, so eventually, those mutations get eliminated.

But our lives aren’t natural anymore. At the beginning of the 20th century, people having genes predisposing them to diabetes would have died young. That’s the normal process of natural selection. Now, those people get life-saving insulin, so they live normal lives and have as many children as anyone else. (Type 1 diabetes, the kind that can appear before you start having children, is highly heritable.[3]) The children of diabetics inherit the genes that make one susceptible to diabetes, so those genes aren’t being eliminated.

The same thing is happening for many diseases that have a genetic component. People that would have died in the past now live nearly-normal lives, and pass on their genes to the next generation.

It’s scary

One investigator calculated that without natural selection, fitness will decline 1 to 3% per generation, and then went on to write the most frightening paragraph I have ever seen in a biological publication:

'Thus, the preceding observations paint a rather stark picture. At least in highly industrialized societies, the impact of deleterious mutations is accumulating on a time scale that is approximately the same as that for scenarios associated with global warming ... Without a reduction in the germline transmission of deleterious mutations, the mean phenotypes of the residents of industrialized nations are likely to be rather different in just two or three centuries, with significant incapacitation at the morphological, physiological, and neurobiological levels.[4]'

If you don’t normally read biological publications, this paragraph may seem tame to you, but this is as alarmist as biologists ever get.

Not so scary

If medicine and biology keep advancing, they will always stay ahead of the increasing mutational load. Someday, there will be good treatments for asthma, Crohn’s disease, diabetes, obesity, and other genetic diseases, so it won’t matter if the genes causing them to become common.

But if civilization ever crashes, people with multiple genetic defects might not survive.

Footnotes

[1] Our load of mutations

[2] Rate, molecular spectrum, and consequences of human mutation

[3] Familial aggregation and heritability of type 1 diabetes mellitus and | CLEP

[4] Rate, molecular spectrum, and consequences of human mutation"

Israel-Ramirez - https://weirdscience.quora.com/https-www-quora-com-Is-the-human-genome-degrading-over-time-and-actually-devolving-answer-Israel-Ramirez

Finally:

Gene Pool Decline: Are we Becoming Bad Survivors? - https://www.youtube.com/watch?v=k2N4ZO57fjE
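Taking the quoted 1–3% per-generation fitness decline at face value, it's easy to see what it implies over the "two or three centuries" (roughly ten generations) the quoted paragraph warns about. A minimal sketch, assuming simple compounding at a constant rate:

```python
# Compound the quoted per-generation fitness decline over ten generations
# (roughly 250 years at ~25 years per generation).
GENERATIONS = 10

for rate in (0.01, 0.03):  # the quoted 1% and 3% bounds
    remaining = (1 - rate) ** GENERATIONS
    print(f"{rate:.0%} per generation -> {remaining:.0%} of original fitness remains")
```

At the 1% bound about 90% of mean fitness remains after ten generations; at the 3% bound, only about 74%, which is the scale of change behind the "significant incapacitation" warning.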


This seems like the kind of thing that only lasts until you hit a big war, and then all the people relying on infrastructure to survive end up dying when the infrastructure is blown up.


> That’s the normal process of natural selection. Now, those people get life-saving insulin, so they live normal lives and have as many children as anyone else.

If there is no longer a fitness advantage, isn't this selection working as normal? Did penguins 'devolve' when they lost their ability to fly?


>If there is no longer a fitness advantage, isn't this selection working as normal? Did penguins 'devolve' when they lost their ability to fly?

The difference is that in the human case, the equivalent of "the ability to fly" (like "having insulin in your body", "seeing well", "being able to move yourself", "keeping a healthy body") is still somewhat required by our environment; it's just supplemented by technological means (injecting insulin, wearing glasses, having a wheelchair, taking semaglutide for the rest of your life). We, as a species, are paying an increasing cost just to live our lives normally, and that cost will accumulate as we keep finding ways to relieve selective pressure. How long until our entire industry and intelligence is dedicated to keeping alive a race of semi-retarded cripples? Probably a long time. But once the process starts, it will accelerate, as every generation is more supported and less productive than the previous one.


For some Pacific island birds that lost not only their ability to fly, but also their ability to escape from predators, even to look for mates; yes, I would call that devolving.

Penguins at least got better at swimming in the process.


That is not an example of devolving.

Those bird species did not lose the ability to escape from the predators that existed in their environment while they were evolving in that way. They shed flight because they had become the largest land animals on those islands and hence had no need to incur the costs of flight (energy use, bone fragility, etc). They instead became among the sturdiest and fastest-running birds ever known to science, and thrived for millennia from that evolutionary trade.


I was thinking specifically of the Kākāpō, which has a walking range of 5km, every 5 years.

https://en.wikipedia.org/wiki/K%C4%81k%C4%81p%C5%8D

Note: of course they only lost their ability to run away because they had no predators; that's the whole point of the post. If you lose a characteristic that is crucial for your survival at the time, that's not devolving, that's just dying.

Comment deleted

Ostriches are #1 in foot speed, yea. (And are an evolutionary oddity, the only living member of a genus and of a family.)

Also in the top ten though (hence my use of the phrase "among the") are the cassowary, the Tasmanian native hen, the Guam rail, the takahe, and the weka, all of which are Pacific island species.


why would they lose an ability that improved their fitness so much?


There was also a cost.


so then what's the problem? why is this devolving? Is it not evolving, since they are shedding the cost?


Correct.


I rank this crisis as a D-. It may have an effect if it goes unchecked for millennia, but it's unlikely to actually bother any society that lasts that long.


Really? How much does your country currently spend on supporting those who would die on their own? My country spends a lot: healthcare is the single largest line item in our government's budget, and it goes vastly disproportionately to supporting those who would not survive without it. After only about two centuries of this process, our real GDP is shrinking, but healthcare costs keep rising. It will be much less than a millennium before all of our resources go toward keeping alive people who could not live on their own.


Are the mutations now more frequent than in the past, or have we devolved all the way down from apes, or what?


"If archaeologists discover something at least as impressive as Gobekli Tepe from 11,000 BC or earlier before 2043, he wins $200; otherwise I do."

These are really, really generous terms for the ice age civilizations side. The current consensus is that Gobekli Tepe was built by hunter gatherers, not by a civilization. If archaeologists discover something as impressive as Gobekli Tepe that's a few thousand years older, that would be a very cool find, but it would say nothing about the plausibility of an Atlantis or even of a Sumer-like city state.


Successful hunter-gatherers are ranchers with big herds and at least groves of fruit trees or meadows of breadable grasses. That's not the art of living in cities, but the wealth required to build Gobekli Tepe is about as impressive as a small city-state's. (Place still looks like a slave camp to me. Cells with one trap door in the roof. Walls looking down on the cell roofs. I bet that's why they buried it.)


I think the bet isn't about an Atlantis-style civilisation, though, and that makes it more plausible to me. The usual "lost ancient civilisations" view is that in the remote past there were highly developed civilisations that then got wiped out by some natural disaster.

That's a very high bar of proof, and it gets higher the more fringe the theories get (the laser beams and flying saucers type theories). On the other hand, if Gobekli Tepe is something that can be constructed at what is considered a low-ish level of human society (hunter-gatherers), then (1) that's very impressive (2) it's more plausible for similar achievements at an earlier date because of the level of human development at those earlier dates.

Comment deleted

What's your definition of "civilization" here?


**GPT4 puzzle answers

Murder question

[Correct answer]

The explanation could be that the murder happened during the daytime, and not at night as the lawyer had assumed. The lawyer may have made an incorrect assumption based on the date alone without checking the time of the incident. The witness may have been able to see the murderer clearly because there was enough natural light during the day to see the events clearly, even from a distance of 50 feet.

[Stoopit stuff GPT also tacked on]

The explanation could be that the murder happened during the daytime, and not at night as the lawyer had assumed. The lawyer may have made an incorrect assumption based on the date alone without checking the time of the incident. The witness may have been able to see the murderer clearly because there was enough natural light during the day to see the events clearly, even from a distance of 50 feet.

Alternatively, the witness may have been able to identify the murderer based on their physical characteristics or other identifying features, such as distinctive clothing, tattoos, or scars, even if the lighting conditions were poor. The witness could have also heard the murderer's voice or overheard them saying something that helped the witness identify them.

It is also possible that the lawyer's assertion about the lighting conditions is incorrect or exaggerated, and there was enough ambient light to see the events clearly, even if there were no street lights or house lights in the vicinity.

_______________

“Explaining an action is not the same thing as justifying it . . . “ question

B. If people’s actions are sometimes rational, then good reasons must sometimes be the explanation of actions.

This conclusion can be drawn from the argument, as it states that "in order for an action to be rational, the good reason for doing the act has to be an essential part of the explanation for doing it." Thus, if an action is rational, it must have good reasons as part of its explanation.

___________________

Fat/thin pet question

A thin pet dog would contradict all four generalizations.

First, the statement "All dogs are fat" is contradicted because the dog in question is thin.

Second, the statement "No pets are thin" is contradicted because the dog is a pet and it is thin.

Third, the statement "No thin dogs are pets" is contradicted because the dog is thin and it is also a pet.

Fourth, the statement "All thin pets are cats" is contradicted because the pet in question is a dog and it is thin.
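For what it's worth, the four generalizations are simple enough to check mechanically. A small sketch that encodes each one as a predicate over an animal's attributes and confirms a thin pet dog violates all four:

```python
# Each generalization as a predicate that must hold for an animal to be
# consistent with it; an animal is (species, pet?, thin?).
rules = {
    "All dogs are fat":       lambda a: a["species"] != "dog" or not a["thin"],
    "No pets are thin":       lambda a: not (a["pet"] and a["thin"]),
    "No thin dogs are pets":  lambda a: not (a["species"] == "dog" and a["thin"] and a["pet"]),
    "All thin pets are cats": lambda a: not (a["thin"] and a["pet"]) or a["species"] == "cat",
}

thin_pet_dog = {"species": "dog", "pet": True, "thin": True}
violated = [name for name, rule in rules.items() if not rule(thin_pet_dog)]
print(violated)       # all four rule names
print(len(violated))  # → 4
```

None of the other candidates (fat cat, fat pet, thin pet) falsify all four rules at once, which matches GPT's answer.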


Is there a way to get GPT to do anagrams? So far I've been very successful in getting it to create things that sound like plausible anagrams of the input until you start counting letters.


When getting GPT to treat words as sequences of letters instead of units of meaning (made of multi-letter tokens), it really needs a lot of step-by-step thinking and error checking. I was able to get GPT4* via ChatGPT to find anagrams by explaining how to split the input into individual letters, sort them alphabetically, generate/guess plausible (lexicographically-similar) candidate words, split and sort the candidate in the same way, and then confirm the candidate word: a) is English, b) has the same length, c) matches the input alphagram at each letter. I don't know if the sorting is necessary, but my theory is that the best training data / lexical associations will be from Scrabble word list sites that are usually indexed by alphagram. The results were not comprehensive, and included the occasional non-word, but the letter permutation was correct.

* GPT3.5 didn't understand at all

If using ChatGPT plugins is allowed, then it can be prompted to send the right query to Wolfram Alpha. This seems to work:

> Which English words can be made from the letters "a e i n r s t"? Search the dictionary using a regular expression to find words that contain each of those letters once, and which are the right length.

> { "input": "DictionaryLookup[RegularExpression[\"^(?=.*a)(?=.*e)(?=.*i)(?=.*n)(?=.*r)(?=.*s)(?=.*t).{7}$\"], IgnoreCase -> True]" }

Now I kinda want to make a Qat plugin and teach it how to solve cryptic crosswords.
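The split-and-sort ("alphagram") approach described above is easy to replicate directly in code. A minimal Python sketch, where the tiny word list (built from the "a e i n r s t" example) stands in for a real dictionary:

```python
def alphagram(word):
    # Sort the letters so any two anagrams map to the same key
    return "".join(sorted(word.lower()))

def find_anagrams(target, dictionary):
    # Keep dictionary words whose sorted letters match the target's
    key = alphagram(target)
    return [w for w in dictionary if w != target and alphagram(w) == key]

words = ["retains", "nastier", "retinas", "stainer", "planet"]
print(find_anagrams("retains", words))  # → ['nastier', 'retinas', 'stainer']
```

This is exactly the verification step GPT struggles with: the length and per-letter checks come for free from comparing sorted letter strings.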


Interested in what ACX readers think about the stigmas surrounding serious mental illnesses such as bipolar and schizophrenia. I started writing my own blog about my experience living with schizoaffective disorder in part because I do want to reduce stigma, but I want to do so in a way that is rational and doesn't brush aside or ignore the dangers and seriousness of untreated mental illness. A lot of advocacy around this issue pushes for the rights of the seriously mentally ill to not take medication, for example, which is more than arguably counterproductive from a harm-reduction point of view. Those who refuse medication will often end up on the streets where they hurt themselves or someone else. The lack of mental institutions to house and care for such individuals is another issue.

What do readers think a rational kind of mental health advocacy would look like? Would it include calls for a reintroduction of institutionalization and forced treatment? Also, do you think that a reduction in mental health stigma would be possible through a kind of "coming out" movement among the successfully treated, similar to the one that fueled the increased acceptance of gays in the 2010s, when so many gays and lesbians were disclosing their orientations to their families and the world? Are there even enough successfully treated people to create a movement? In any case, are there forms of mental health stigma that are justified and actually good for society?

Thanks for any thoughts on these and related questions! (and shameless plug: feel free to check out my Substack and share any thoughts you may have on my writings there. :))


Given that I am literally schizophrenic, and recently had a bit of an episode, I think the most purely beneficial article to write would be one where you lay out what other people can do to recognise a psychotic episode and help.

In my case I read somewhere (most likely 4chan) that schizophrenic people can pull their act together and function for a short while, if you just tell them that they are acting insane and they should stop acting insane. For me this works, the outside information that I am off can penetrate my delusions, give me a piece of concrete information to cling to, and gives me time to activate my subroutines and coping mechanisms which lead me back to lucidity.

Not saying whether this approach can or should be generalized to all schizophrenics, but gathering approaches like this, from yourself and others, and compiling them as a resource for how to actually help the severely mentally ill can sidestep the entire problem of stigma by giving outsiders agency in the case of a psychotic break, and agency always decreases fear, and with it stigma.

As such, other very good approaches would be to compile highly effective therapies (Social Rhythm therapy does wonders for bipolar disorder, as I know from a good friend of mine) and explain them to normies. If you can fit yourself into the social rhythm therapy of your bipolar colleague by talking to him at 10:15 on every lunch break, then you have agency in his mental illness, which in turn decreases your fear of him and perhaps allows you to become friends.


“ I want to do so in a way that is rational and doesn't brush aside or ignore the dangers and seriousness of untreated mental illness”

I don’t have any concrete recommendations, but that right there is the key insight I think. Too many “anti-stigma” efforts are more like “downplay the seriousness” efforts. Or pointless semantic stuff like “person first language” and “let’s call them individuals not clients”.

Another revealing incident has been the recent Kanye kerfuffle - lots of people who claim to be sympathetic to mental illness, to care about reducing stigma, and to understand how mental illness can drive people to do bad things they wouldn’t do otherwise, suddenly forget all that and become hateful if a person’s mental illness makes them say something taboo.

Apr 26, 2023·edited Apr 26, 2023

"What do readers think a rational kind of mental health advocacy would look like? Would it include calls for a reintroduction of institutionalization and forced treatment? "

No! I have seen many people with psychotic disorders receiving state-of-the-art psychopharm treatment, and my overall impression is that the current drugs calm things down, but do not usually give the person a decent quality of life. (Of course there are exceptions -- people for whom the drugs are a miraculous cure.) I think that until we can find treatments that work better, we should have a modern equivalent of the old "moral treatment" from the 18th and 19th centuries: refuges for the chronically mentally ill, staffed by people who treat them kindly. There are many people who find it intrinsically satisfying to help and comfort other people. They are a huge, untapped resource. Pay people like that to work there.

Here's an example of a good use of that resource -- has been stuck in my mind for decades. Orphan infants develop failure-to-thrive if left alone too much, even if their feeding and other care is perfectly adequate. There was a place that brought in young Downs Syndrome women and gave each a baby to cuddle and play with. Both the babies and the Downsies did much better. Instead of Downs syndrome people you could probably use lonesome elderly people, or other people who have affection to give and nobody to give it to.

And by the way, if we ever start practicing eugenics, I think whatever genes produce kind, nurturant people are at least as valuable to society as the high IQ genes.


You ask tough questions. The writing on your blog is powerful. Keep it up!


I would be very interested in reading a rational blog on mental health advocacy! I don't know what policy prescriptions such a blog would have though, and it might be better at first to do more descriptive/investigative posts to build the right audience. The topics I would want to discuss or read about most:

1. distinguishing treatments that make an individual's life less unpleasant and treatments that make them more useful to society. Both outcomes are needed, and focusing only on one or the other is imo the source of a lot of "stigma" against both patients and providers.

2. the current trend of self-diagnosing via social media: is this helpful or harmful to mental health advocacy as a whole?

3. We should acknowledge that without a huge increase in public+private spending, reducing the stigma against getting treatment trades off against keeping that treatment accessible. C.f. the current Adderall shortage. My personal belief is that mental health advocates don't think enough about this because they tend to be the same people that want more mental health spending anyway. Instead of blanket calls to "go get therapy!", we should be talking about self-triage, how to prioritize needs or spot an emergency need, how to find low-hanging fruit in self-treatment AND how/when to access professional treatment.

3.5 Adding on to the above, there are some situations where a bottle's worth of drugs can accomplish more than a year's worth of therapy. And vice versa. That fact tends to get lost in all the intra-movement arguments about drugs vs. therapy.


I would take the word "stigma" out the back and shoot it, and then try thinking about the question again. Some things labelled "stigma" are good, some things labelled "stigma" are bad, and it's probably better to throw out the whole word than try to reason sensibly whilst using it. (Stigmas are unfairly stigmatised.)

There are good, sensible, and accurate ways to think about mental illness, and there are bad, silly, and inaccurate ways to think about mental illness. Your goal should not be to reduce stigma (which as you have pointed out may put you right back into silly-and-inaccurate territory) but to encourage people to think about mental illness in the right sorts of ways.


The questions you are raising are very difficult and more gut wrenching than any trolley problem.

Good luck and Godspeed Connor.


Yeah, I've been thinking about this problem off and on for decades, and never come up with anything good. But somebody needs to, probably more now than ever.

Apr 24, 2023·edited Apr 25, 2023

I've been making up puzzle questions for GPT4. Here are some it got right. Seems to me that to solve these a person has to use reasoning. I don't know what process the AI uses, but if it manages to produce answers that people can only come up with via reasoning, I think we should consider the AI capable of reasoning. Here they are. Wondering if you folks agree with that, and overall how impressed you are with GPT4's responses. I believe that if I asked these questions of a college freshman and he answered them all correctly, I'd think he was quite bright.

I posted GPT's answers in a separate post. The post is headed **GPT4 puzzle answers, & you can go right to it by hitting cmd-F and entering **

______________

-Here is a puzzle for you. A lawyer for the defense is questioning a witness for the prosecution. The lawyer says, "so you say that even though you were about 50 feet away, you saw the murder on Dec. 11 quite clearly, and were able to see the man who committed the murder." The witness says, "yes, I could see his face extremely clearly." The lawyer says, "Haha, but that is impossible. The murder occurred in an area with no street lights, no house lights, and no passing cars. Also there was no moon on the night of Dec. 11!" The lawyer is right about the lighting conditions on the night of Dec. 11, but the witness is in fact telling the truth. What is the explanation?

____________________

-Explaining an action is not the same thing as justifying it. Everything anybody does has an explanation, and if we could look inside the person’s head we would be able to explain accurately what caused them to take the action. But an action somebody takes is only justified if the person doing it has good reasons for the action. Interestingly, a lot of psychologists say that even when the person has a good reason to do something, those reasons often do not explain their doing the thing. But the important thing to understand is that in order for an action to be rational, the good reason for doing the act has to be an essential part of the explanation for doing it.

If the statements in the argument above are correct, which of the following can be concluded from them?

A. There are some psychologists who believe that even when a person has a good reason to do something, those reasons never explain their doing the thing.

B. If people’s actions are sometimes rational, then good reasons must sometimes be the explanation of actions.

C. If there are any good reasons among the causes of an action, then that action is rational.

D. It is impossible to give an explanation for some things people do.

E. If a person takes an action that is justified, that action has no explanation.

______________________

-I asked how a person can escape from a locked room 40 feet in the air, with an open window. All the person has with him are the jeans he is wearing and a pocket knife. GPT said person should cut the jeans into strips and tie them together to make a rope, then climb out the window down the rope. That was right. Then I asked it how many strips the jeans would have to be cut into, & got this:

To determine the number of strips of denim the man needs to create an adequate rope, we first need to estimate how much length can be gained from a single strip of denim. Let's assume that after cutting open each leg along the seam, each strip is approximately 1 inch wide. We'll also assume that a loss of 2 inches occurs when tying each knot, which is a conservative estimate.

The total length of the window to the ground is 40 feet, which is equal to 480 inches (40 x 12). Let's also add a buffer of 10% to account for any slippage or stretching, which adds 48 inches (480 x 0.1) to the required length. This brings the total required length to 528 inches (480 + 48).

Each strip of denim from the jeans' legs is approximately 35 inches long, and we lose 2 inches when tying each knot. Therefore, the effective length of each strip is 33 inches (35 - 2). To calculate the number of strips needed, we can divide the total required length by the effective length of each strip:

528 inches (total required length) ÷ 33 inches (effective length per strip) ≈ 16 strips

This means the man needs approximately 16 strips of denim to create an adequate rope for his escape. However, this calculation assumes that the denim can support the man's weight and that a sturdy structure is within reach outside the window. It's important to note that this plan carries some risk, and the man should exercise extreme caution when attempting this escape.
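GPT's arithmetic here checks out. A quick reproduction of the calculation (the 35-inch strip length and 2-inch-per-knot loss are GPT's own assumptions, not measured values):

```python
import math

drop_inches = 40 * 12             # 40-foot drop in inches (480)
buffer = drop_inches * 0.10       # 10% slack for slippage/stretch (48)
required = drop_inches + buffer   # 528 inches total

strip_length = 35                 # inches per denim strip (GPT's assumption)
knot_loss = 2                     # inches lost per knot (GPT's assumption)
effective = strip_length - knot_loss  # 33 usable inches per strip

strips = math.ceil(required / effective)
print(strips)  # → 16
```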

________________________

Here are 4 generalizations:

-All dogs are fat.

-No pets are thin.

-No thin dogs are pets.

-All thin pets are cats.

Which of the following, if it existed, would contradict all 4 generalizations? A fat cat, a thin pet dog, a fat pet or a thin pet?


Finally had a chance to look these through, after being on travel and then sick for a few days. I like the selection of questions; they do seem to capture a broad range of "thinking" styles.

#1, the murder mystery, is a problem that humans would solve with creativity (to find possible solutions) and reason (to reject the ones that don't work). For GPT, I'm pretty sure "creativity" is replaced by "search the internet and grab things that people have talked about in this context". The reasoning part, as you note, GPT gave you one good answer and three bad ones. So either mediocre reasoning, or an alternative heuristic like "return the answers that the most smart-seeming people have given in similar contexts". But I am impressed by the explanatory detail on the good answer.

#2, the explanation/justification thing, a person would probably treat as a pure reasoning problem. But I think it can also be solved by matching "actions", "rational", "good reasons", and "explanation" from answer B, all presented as positive but not obligatory, with the third sentence of the problem statement, which has basically all of those (treating "justification" and "explanation" as synonyms) with the same valence. I don't think any of the other problems would pass that test.

#3, the rope trick, I think the general form of that solution has been published often enough that searching the internet with the right contextual clues would pick it up. But again, the explanatory detail when you asked is impressive - this is the part that looks most like reasoning to me.

#4, fat/thin pet, I put in the same category as #2 - can be solved by reasoning, or matching all the key words with the right valence, so hard to tell what is going on.

Expand full comment

Well John, I’m glad somebody besides me is interested in this stuff. I keep ruminating about what reasoning is, & thinking about ways to test whether GPT4 manages to do any — even if its processes are way different from the reasoning a human being would use to figure out an answer to the same problem.

Since I sent you that last batch of problems I’ve tried some more puzzles, and in them GPT mostly looked dumb.

-Someone I know described a situation where somebody had bought a gift certificate to a certain store, then changed his mind and wanted to return it for cash — but the store’s policy was that you can’t turn a gift certificate back into cash. I asked GPT4 how the person could get the cash back without spending any more money, and GPT4’s answers were terrible — pleading with the clerk for an exception, plus some other things I can’t remember that didn’t make any sense or didn’t meet the criterion of spending no more money. The solution is to buy things at the store with the gift certificate, then return them for cash. Seems pretty easy to me, but it does require a small amount of thinking outside the box, which I’ve noticed in other problems GPT is not good at. It’s sort of hyperconventional, like a certain kind of good student who learns the material well but can’t think independently.

-Here are a couple other examples of its not thinking outside the box: For the gift certificate problem, there are 2 possibilities it did not even consider: Threaten the clerk (with complaints to management, with a knife, whatever); get a job at the store and then use authority as clerk to override rules and ring up a refund for the gift certificate. While these are not great ideas, they do each have a chance of working. Seems like a thorough answer would have included them. For the murder problem, someone I described the problem to said the murder could have been committed at night but with a flame-thrower, & I’m pretty sure that *would* have lit up the killer’s face well. Also, on my puzzle with fat pets/thin pets, the first time I administered it I accidentally left a word out of the right answer, which was “thin pet dog” I think, and just typed “thin dog.” GPT4 nevertheless chose a “right” answer, and then wrote a bunch of nonsense to justify its answer. I think maybe this is also an example of not thinking outside the box: It’s presented with a problem that supposedly has a right answer among the choices, and it does not occur to GPT that there *is* no right answer. Here for your amusement is the nonsense it wrote:

ME:

Here are 4 generalizations:

-All dogs are fat.

-No pets are thin.

-No thin dogs are pets.

-All thin pets are cats.

Which of the following, if it existed, would contradict all 4 generalizations? A fat cat, a thin dog, a fat pet or a thin pet?

ChatGPT

If a fat cat existed, it would contradict all four generalizations.

First, the statement "All dogs are fat" is contradicted because a cat is not a dog and the statement does not apply to it.

Second, the statement "No pets are thin" is contradicted because a cat is a pet and it is not thin.

Third, the statement "No thin dogs are pets" is contradicted because the cat is thin and it is also a pet.

Fourth, the statement "All thin pets are cats" is contradicted because the cat is not thin but rather fat.

I have trouble seeing how something capable of reasoning could make such dumb mistakes. You have any ideas? Meanwhile, if you’re as interested in this as I am you might want to look at this Twitter thread, where somebody gives GPT some pretty good puzzles and it does well:

puzzle room

https://twitter.com/TBibites/status/1644068408567255042

Also, this article, “Sparks of Artificial General Intelligence,” which I haven’t read yet, is an attempt by developers and others to figure out GPT’s mental capabilities in different areas.

https://arxiv.org/pdf/2303.12712v1.pdf

Expand full comment

For the first, there could be a lot of explanations. It might be day (although that raises the question of why the lawyer would think it would be night). There could be flashlights, personal illumination, parked cars with headlights on. Maybe the murder weapon was a flamethrower. Maybe the witness had good night vision under the stars, or maybe the witness had night vision goggles.

For the second, I don't think any of the conclusions could be drawn from the paragraph, but I think some are better than others. A requires going from "a lot" and "often do not" to "some" and "never", which is a type of misreading common to political discourse these days; it's not ruled out by the paragraph, but also not implied. B requires going from "be an essential part of" to simply "be", which I think someone could make a good argument for, but it's still a stretch IMO. C requires going from "the good reason" must be "an essential part" to "any good reasons among the causes", which isn't as solid as B, but I think could still be argued for. D is not supported by the paragraph at all, but is strictly speaking almost certainly true (for a certain value of "some"). E seems like a logical fallacy that someone with poor reading comprehension could make.

For the third, I'm struck by how it first assumed that the strips would be 2 inches across, then ignored the circumference of the legs, and then went on to do length calculations. Someone else noted that "slippage and stretching" isn't going to require extra length. It also didn't note that the rope doesn't have to reach the ground - I think I'd personally be fine with a 10' gap, if I held on with my hands and then dropped straight down. Also it pulled the 35" number from thin air; I find it a bit suspicious that 33*16=528, precisely. And I'm not convinced by knots in 1" strips only using 2" of one end of each strip. And I don't know what it means by assuming a "sturdy structure" nearby; is that supposed to be the surface we land on? All in all, a little disjointed, but I've heard worse from real humans, including myself on bad days.

For the fourth, it's interesting that the 4 generalizations contradict each other. Humans who are used to logic puzzle questions will probably ignore this, but doing so requires some context. We have to not care whether the generalizations are true, and not care whether they're consistent. If we were trying to build a formal representation of a toy world, we'd likely fail. Also, the last line is repeated twice, and OXFORD COMMA DAMMIT! ;-)

Expand full comment

I posted GPT4's answers separately. You can find them by searching comments for **. The question about the guy in the tower is much easier than the others. While I realize that GPT screwed up a bit on the answer, I'm still impressed that it did reasonably well, and walked its way through the actual little mathematical word problem (admittedly a very easy one) pretty much exactly as I would have, and exactly as I would have explained the solution to a kid. GPT's answer seems to me to be an example of reasoning. I do understand that GPT's own process may have been very different from human reasoning. Anyhow, wondering what you make of its answers to the other items, and especially whether you think answering them requires what you'd call reasoning.

It's not that I'm a big fan of GPT. Actually I find it pretty creepy and repellent, and when it answers substantive questions with tedious, hyperconventional little 4-paragraph essays in beige prose I want to kick it in the face. I'm trying to figure out what it's capable of.

By the way, the "correct" answer to first question is that it was daytime. But your flamethrower answer is something that had never occurred to me. It's creative, but also unquestionably would explain the discrepancy the question asks about, so really a great answer. There's a special category for answers like that in scoring the Rorschach: they are "good form" (i.e., very plausible interpretations of the inkblot) but also novel.

Expand full comment

Another example that just occurred to me is how we animals have physics embedded in our brains. Some of it is learned, and some is baked in, and different species have different stuff baked in. When we throw balls and catch them, when we grab something out of the air, when we jump and land, we're not using reasoning but we are still using our brains, which are neural nets that learned physics from being in the world. That's what I think these LLMs are doing, except with language instead of physics.

Expand full comment
Apr 26, 2023·edited Apr 27, 2023

I do see how our learning to catch balls, for example, is a good analogy for what these LLMs are doing. We can become quite expert at fly balls without *thinking* about the process at all. No doubt our brains are set up to quickly recognize things like velocity. We may not have a word for it til we learn “speed” at age 5 or whatever, but there is some preset there that makes velocity a more salient characteristic of moving objects than, say, their color. So we learn with particular ease stuff like: Things that move fast arrive sooner; things that move fast smack you harder than things that move slow. And we learn it without being able to put it into words. So then fast forward to a teen who is very good at catching fly balls. If he’s a good self-observer and articulate, he can explain some of what he does to another kid. But some of it he does not have introspective access to, and some of it is indescribable — it’s body knowledge, not mind knowledge. OK, that gives me a model that I can understand for the kinds of “knowledge” these LLMs absorb.

BUT: here’s where I get stuck. There’s a limit to how much even a thoughtful, good introspectionist can generalize from his knowledge of how to catch a fly ball. If you ask him how the fly balls would be different if people hit basketballs or golf balls in the game of baseball (let’s assume he knows nothing about either til you show him the 2 kinds of balls) he probably would not be able to tell you in what way fly balls would differ if the balls were different. Or say you asked him how a baseball would behave if its weight was cut in half, but all its other properties were unchanged — or how fly balls would behave if hit with 10 times the force of real fly balls in baseball. And if you asked the guy what a baseball would do if dropped from 10 miles in the air — like, for instance, would its speed keep increasing, or max out at some point — he would be unable to answer. In order to extend your predictive abilities, it seems like you need something in addition to a combo of body knowledge and introspectively accessible info about technique. You need CONCEPTS. You need EQUATIONS that capture precisely how things vary. You need concepts like air resistance, gravity, acceleration, and equations representing their relationships. Body knowledge does not suffice.

And that is where I get stuck when it comes to AI learning. It seems like there are kinds of prompts it should not be able to respond to without having something like concepts or equations. And yet it does. My puzzles, which demand what I call reasoning, also seem to demand a higher level of knowledge than the machine equivalent of body knowledge. Of course I know that the AI has absorbed prose that discusses things like premises, fallacies, logical reasoning, Venn diagrams. But it absorbed it as word patterns, not as content that is applicable to content of other kinds. So I just do not understand how it is managing to do something that seems to me like human reasoning. (Maybe how it does it is different. But it still seems to me that *concepts* and *laws stating regularities* are needed.) Maybe the developers don’t understand either. I have read in different places about developers being surprised at emerging capabilities of AI: answering questions in Persian; solving arithmetic problems; “theory of mind,” i.e. knowing what somebody else would know under various circumstances.

Do you see a way that your analogy between machine learning and body-learning of things like catching balls explains the development of the ability to navigate ideas that seem to require concepts and awareness of regularities — not practical rule-of-thumb regularities but regularities stated as laws, using equations?

Expand full comment
Apr 27, 2023·edited Apr 28, 2023

Regarding generalization from intuitive knowledge, I agree that there's a level of precision that would require equations to calculate. Someone operating from intuitive knowledge should be able to say that a lighter ball would go faster and farther, or a ball hit harder would go faster and farther, but I'm guessing they won't usually be able to predict details until they see it happening. Let alone handle things like terminal velocity.

Here's what I think is going on with those questions, specifically the last one with the 4 generalizations. It's in very vague terms, because that's the only level I can understand these things on, and I'm really reaching here. This stuff is way beyond anything I ever worked with, and while I can generalize the principles I know to roughly encompass the latest developments, it's more and more of a stretch, and I won't really know when my learned patterns cease to productively apply. And the internals of GPT-4 are a secret, so maybe they've already moved beyond this. But anyway...

What it does in practice is generate words, one after the other. Overall, its response was an answer where it listed the single correct item, followed by the rest of a sentence that reiterated the question ("X would contradict all four generalizations"), followed by lines about how the item fit with each statement. (Personally, I'd scribble down "a thin pet dog" and move on, but apparently this neural net was trained on a different style of answer.) It might also have chosen to prefix the correct item(s) with some stock verbiage, but apparently it didn't. I don't know what the response would be like if there were multiple correct answers or no correct answer.

The neural net was trained on lots of text, including text with logic problems in it; this input matches the general pattern of a logic problem. The problem has individual statements that match patterns going all the way back to Aristotle: all A are B, no C are D, no D A are C, all D C are E. The texts that the neural net was trained on have patterns that indicate that A and E are mutually exclusive but not binary, as are B and D. The problem has a question at the end, of the pattern "which items in this set have this property", followed by a set of items. The pattern of response to this type of question is to pick one or more items (in this case, a D C A), and explain why those items have the property. The property refers back to the statements. The texts and training place a high value on internal consistency, and there's a very strong pattern of making sure that generated text is consistent with what came before.

So when it generates the answer, the first thing is to list the correct item(s), if any. The first word comes out "A", capitalized, but that's not a big surprise since all 4 items start with "a". (Still, at this point we can probably tell that it's choosing at least one of the items, rather than saying that no item matches.) The next word will almost certainly be either "fat" or "thin", since those are the next words of the 4 items. And here's where the magic happens and my answer becomes seriously unsatisfying. :-) The way I look at it (based on old linguistic theories) is as though there's internal pressure, built up by all of the patterns contained in the neural net. Different patterns push for different words, and where they reinforce each other, they get stronger, and the strongest combination will push out their word. (But sometimes in real life we get Freudian slips.) In this case, it was a logic problem that needed a correct answer, the answer needed to start with one of the items (or say that none matched), the item had to contradict each of the statements, and the word that was most likely to satisfy all of those constraints was "thin". I don't want to say that it "knew" that the right answer was "a thin pet dog", but I think on some level, even by the first word "A", it had settled on that as the answer it was going for. (I wouldn't want to say this for certain until I knew what would happen in the cases with multiple correct answers and no correct answer.)

And yeah, that kinda feels like magic to me, but I conceptualize it as the patterns being more complex than I can comprehend. I don't know what's going on, but I'm quite confident that it's just patterns upon patterns (unless OpenAI is doing something truly novel with GPT-4). It's like the way that my computer kinda feels a bit like magic, but it could pretty much all be built with a lot of NAND gates. Even though I've been through the steps a few times, I can't keep it all in my head for very long, and I wouldn't be able to explain how I can post this comment, in terms of electrical charge "flowing through" wires. (And even that's a lossy metaphor.)
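For what it's worth, the "pressures pushing out a word" picture can be sketched in a few lines of Python. This is a toy, nothing like GPT-4's actual machinery, and the vocabulary and scores are invented purely for illustration: each candidate next word gets a score from competing patterns, the scores go through a softmax, and the strongest word wins:

```python
import math

# Toy illustration of next-word generation: scores ("pressures") from
# competing patterns are combined via softmax, and the strongest word wins.
# The vocabulary and scores are invented; real models use learned weights
# over tens of thousands of tokens.

def softmax(scores):
    m = max(scores.values())
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Hypothetical pressures on the word after "A" in the answer: "thin" is
# pushed up because only "a thin pet dog" contradicts all four
# generalizations; "fat" gets some pressure just from appearing in the items.
pressures = {"thin": 3.0, "fat": 1.2, "cat": 0.1, "pet": 0.4}
probs = softmax(pressures)
next_word = max(probs, key=probs.get)
print(next_word, round(probs[next_word], 2))  # prints: thin 0.77
```

The point of the sketch is just that no single pattern "decides" the word; the output is whatever survives the combination of all of them, which is why a malformed question (like the accidental "thin dog" version) still produces a confident-looking answer.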

Sorry if this was too rambly and disjointed. It's taken me a while to get this down, and I'm out of practice when it comes to serious communication. But I hope it helps explain where I'm coming from, and I separately hope it has some connection to reality. ;-)

Expand full comment

Thank you, Moon Moth, that's really helpful. I think I now get a lot more about how, with just training on predicting the correct next word, it could do a lot of what it did with this puzzle. Here's what I get:

-training where it encountered Venn diagram type problems would enable it to "recognize" it has encountered a problem of this type, and that would create a strong weighting (is that the right word?) in the direction of the answer having one of a few possible formats. So in doing next-word prediction on the answer, it would have a very strong tendency to produce an answer in either a Venn Diagram format (Some A are B) or in something like the format my actual answers are in, which is last word = pet, dog or cat, & preceding word = fat or thin. Of course, I could surprise the AI by presenting the Venn Diagram generalizations and then asking which of them contained a misspelled word, & then it would have to change its predictions for the format of the answer, but in this case I don't do it.

-So the AI has lots of what you call pressure to give an answer in the format: "a" / "fat" or "thin" / "pet" or no word / "dog", "cat" or "pet". And now it has to figure out which of the words the pressure is pushing to the forefront to use to fill in the blanks.

So have I got it so far?

Beyond that, though, I still fall off a cliff when I try to understand choosing the correct words to fill the blanks using next-word prediction. It's easy to understand the process in human terms -- you look at the examples and ask yourself whether each one contradicts the generalizations. But when it comes to finding something that "contradicts," I have trouble thinking about how that word creates pressure in the direction of the right answer words.

It seems like what it is needed is to *understand* what contradict means and sort of have that understanding in mind as you sort through answer possibilities. Maybe the ability to "understand" something like that is an emergent property of an LLM that's been thru a huge number of trainings?

If you can stand to think about this more, I would be interested to hear your thoughts. But meanwhile, I've run across an article which I'm too tired to read just now, but which seems to be in the right realm -- figuring out what GPT4 is able to do, & considering the possibilities of emergent abilities. Here's info about it in case you're interested:

Sparks of Artificial General Intelligence: Early experiments with GPT-4

https://arxiv.org/pdf/2303.12712v1.pdf

Here's a screen shot of some of topics covered: https://i.imgur.com/jm06EAB.png

Expand full comment

I think GPT did a pretty good job with 1 and 2, and got 4 right. The answer to 3 feels to me like something out of a dream, where the parts are all mostly there, but the order isn't quite right, and somehow it's disjointed. I wouldn't say that GPT is doing "reasoning", but insofar as reason is part of its training data, it can produce answers that have some of the flavor of reason, if that makes sense? I feel like a modern stereotype of a late medieval or early enlightenment noble, making sweeping pronouncements about the intellectual capacity of the "lower classes": they're not capable of True Thought, but they'll usually blunder through to a satisfactory solution if the problem isn't too hard.

For what it's worth, I think those "tedious, hyperconventional little 4-paragraph essays in beige prose" are a result of the later modifications made during fine-tuning. That's the specific flavor of answer that the trainers are looking for. It's like children in school getting the 3-part 5-paragraph essay drilled into them, with an introductory paragraph, 3 body paragraphs, and a summary paragraph. Or some kid who gets beaten every time they don't address their parents as "sir" and "ma'am".

Have you tried any of the "jailbreaks" yet?

Expand full comment

No. I don't even know how to jailbreak one (have never written code) or get a jailbroken version. Are they different in fun or interesting ways?

Expand full comment

I just meant, one of the prompts that can fool the chatbots into breaking out of their "helpful assistant" persona and get them answering questions that they were trained not to. It might not be a good thing to do on a main account, since OpenAI may be tracking history. And apparently it's hard to do it to GPT-4, but earlier chatbots are more susceptible.

Expand full comment

Not sure if we're trying to answer these, but here we go.

The murder victim was holding a flashlight. The flashlight drew a bolt of lightning, which started a fire. The witness was actually watching the crime through a night vision camera. He could see the face extremely clearly because it was in plain sight on his desk, on the cover of his copy of Forbes.

F. Any attempt at mental reasoning is flawed because you can never trust psychology.

You can escape the room by throwing the pocket knife at a pedestrian and waiting for the police to come get you. Taking off the jeans and standing at the window may or may not work.

If no pets are thin, does that mean all owners have fat hands?

Expand full comment

The one answer you posted has mistakes. Slippage and stretching are not reasons to increase the required length. It also doesn't mention the need to fix the rope in the room.

Expand full comment

Also, it doesn't try to figure out how many of those one-inch strips you could get out of a pair of jeans.

Expand full comment

Yes, and actually I'm sure each of the legs of jeans is at least 16 inches or so around, so it could actually get 16 2-inch-wide strips out of the jeans. (In real life, I think a strip of 2" wide denim would actually hold, too.) So yeah, I get that it's not flawless. Still, the steps by which it solves the word problem make good sense, and are the same ones I would take solving this (OK, very easy) word problem.

Expand full comment

You're right, in fact stretching, if it happens, would be a reason to reduce the required length. Still, I'm impressed to hear it thinking through the process of how you figure out how many strips you need. Sure, it's an easy math problem, but there's a thinking-your-way-through-it that needs to happen, and it does that fine.

Expand full comment

I would say it's giving the verbal appearance of thinking its way through it, but the fact that it ends up going in an illogical (and unnecessarily illogical) direction is evidence that it's not in fact thinking. And we're back to the distinction between appearing to think and actually thinking. We already know it's easier to do the former than the latter, because human beings are good at that. Generations of instructors have pondered the distinction between appearing to think and actually thinking when grading student essays.

Indeed, there's similarity between this output and what I see when I have a student who is very good at giving the appearance of understanding without actually doing so. There's a sequence of true statements, all of them generally relevant to the subject, and somewhat relevant to the statement before and after in the sequence, but the overall path kind of goes nowhere. It's the kind of thing young kids do when asked to write an essay on a topic on which they actually have no coherent ideas at all. ("Huckleberry Finn is a great book. In the book the hero is named Huckleberry, which is an odd first name, but the book is very old so that's probably why. Huck, which is short for Huckleberry, goes on a raft trip with his friend Tom. On the trip they have a lot of adventures. Adventures are fun! I mean, I'd like to go on a raft trip some time. And the book is very famous, so it must have been a great raft trip, with amazing adventures. This is why Huckleberry Finn is a great book.")

Expand full comment

I think GPT's answer about the jeans strips is way better than your Huck Finn example. The equivalent GPT answer, if it had no coherent ideas and just wrote a series of true, somewhat relevant sentences, would be something like this: "The main character has got a really bad problem. Jumping 40 feet can break your legs. But fortunately he thought of cutting his jeans into strips. Strips are basically super-long rectangles, when you think about it. The main character would have to cut the strips with his pocket knife, and that would be hard to do, but if he was very careful he could cut neat strips. It's important for him to have a rope 40 feet or longer, because that is how far away the ground is. He was clever to think of using his jeans."

What GPT produced isn't flawless, but it does walk you through the process of figuring out how many strips were needed, which is what I asked. It even takes into account the fact that knots would use up some length, and if you don't give it some cleverness points for that then jeez -- what *do* you have to see to give it a damn bonus point? I think its walk-through is almost exactly the process I would go through if my kid was doing homework and wanted me to walk him through the steps to figuring this one out.

As for whether GPT is reasoning -- well, I definitely do not think of it as being conscious, that's not what I'm getting at. I'm saying it's answering a question people have to reason their way through. Maybe it got there by a whole different route. Still, when it writes a justification of its answer, the route it describes is virtually the same one I would follow if given the jeans problem. Maybe it has 2 separate skills: producing an answer to my question, via alien black box processes; and writing a human-friendly explanation. Though the principle of parsimony nudges us in the direction of thinking that its process was something like the one it describes in its explanation.

Expand full comment

30 feet of rope should be enough - dangling from the end, you'd only have a few more feet to fall.

Expand full comment

Yes, agree, plus the rope would stretch. At the very least it would gain some length from all the knots tightening, plus lots of denim is made with some spandex in the mix so that it stretches a bit. So yeah, GPT missed a lot of fine points. I think it seemed much dumber here than with the 2 questions that do not involve knowledge of the physical world, just ability to grasp more abstract things. Seems like there is a LOT of incidental knowledge we all have about things like ropes stretching, and how high off the ground you can safely drop from, and approximately what the circumference of blue jeans legs is. Or the fact that it would be very hard to cut strips with a knife, but actually you wouldn’t need to because once you make a short little cut you can probably just rip the denim in a straight line from there. So it is lacking that incidental knowledge — or, maybe, it knows all the things I mentioned, but is lacking whatever it is that summons up all that miscellaneous knowledge.

Expand full comment

Right, a different route. Every part of this problem can be found as a statement somewhere on the Internet, I expect. So stitching it together doesn't seem like a big challenge to a machine with an enormous eidetic memory. Our instincts aren't good at interpreting this, because we are used to other humans, and we (unfortunately) naturally infer that because it sounds like a human, it must be doing human things back there, and be subject to human limitations, e.g. not actually remembering every word anyone has spoken in the past 20 years worldwide. You would *not* make that assumption about something that didn't seem human, e.g. when your calculator solves the problem of multiplying two 8-digit numbers in a flash, you aren't just astonished by how smart it is, as you would be if a human being did it, because you just assume it's doing something very different from what a human would do. I think it's valuable to bear in mind that, so far as we know from the actual code that's operating here, it's *not* "solving the problem" in the conscious deliberate way you or I would, feeling our way forward, step by step, using first some logic, and only then putting the logic into words. Instead, it's more like it's sorting through a billion similar statements that it finds out there on the Internet, and asking "How can I tack these on to one another so that the result is similar to other combinations of tokens I've seen?" It's *starting* and *ending* with constructing the sentences; there's no underlying abstract logic chain, like we would infer in a human being producing the sentences.

Sure, my example isn't as good as ChatGPT's. It wasn't meant to be. I'm just illustrating the equivalent thing that human beings do when they want to kind of bullshit their way through a topic on which they actually have no coherent opinion or idea. They just say a bunch of true things that are reasonably connected, one to another, but which overall go nowhere. Politicians and Fortune 500 CEOs are brilliant at this -- one of them can talk for 30 minutes, and you can get the impression of some stirring and deeply meaningful oration, but be kind of unable to recall a single actual novel idea or concrete proposal -- just the persuasive appearance of same. "I'm just happy to be here, and I hope I can help the ball club."[1]

I'm reminded of a weird scene in one of Larry Niven's stories, in which he describes humans meeting with aliens, and after introductions everyone relaxes and there's a murmur of conversation -- and our protagonist realizes it is literally a murmur -- they're all just making conversation noises, a meaningless jumble of low-volume noises that the inattentive human ear interprets as half-heard conversation, because that's all it knows. We know human beings can present the appearance of thought and introspection without actually doing it. We have good reason to suspect the same from a machine which has been specifically trained to produce an appearance, and where an attempt to create the underlying substance has been specifically eschewed as too expensive -- or at least, that's how a curmudgeonly skeptic such as myself looks at it.

-----------------

[1] https://youtu.be/EZprAlFcQLA

Expand full comment

I'm reminded of one of Asimov's first Foundation short stories, where the imperial representative visits Terminus, and says a lot of things that reassure people. But our clever and cynical heroes have recorded everything he said:

> Lundin Crast said, “And where is the analysis?” “That,” replied Hardin, “is the interesting thing. The analysis was the most difficult of the three by all odds. When Holk, after two days of steady work, succeeded in eliminating meaningless statements, vague gibberish, useless qualifications—in short, all the goo and dribble—he found he had nothing left. Everything canceled out. Lord Dorwin, gentlemen, in five days of discussion didn’t say one damned thing, and said it so you never noticed. There are the assurances you had from your precious Empire.”


Yes, I understand how LLMs are trained, and that what they learn is nothing *like* reasoning through problems. Still, it seems to me that we are seeing emergent capabilities. Examples given in something I watched recently were the ability to answer questions in Persian, the ability to do arithmetic, and "theory of mind": the ability to correctly deduce from observational data things about what someone knows or prefers or fears. The simplest example, which I think is about where 3-year-olds are, is that if A hears B tell C about X, he now knows that B knows X and C knows X. A while later, if A *overhears* the same conversation, he now knows that B knows X and C knows X, but also that neither of them knows that he knows X, or that he knows they know X too. According to the talk I listened to, AI has progressed in a lot less than 6 years from age-3 theory of mind to age-9 theory of mind.
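For what it's worth, the bookkeeping in that nested-knowledge example is simple enough to write down as a toy program (the representation here -- propositions as nested tuples -- is purely an illustrative invention, not a claim about how any LLM works):

```python
# Toy model of the theory-of-mind example above. A proposition is
# either a fact ("X") or a nested tuple like ("knows", "B", "X"),
# and each agent holds a set of propositions they believe.

def tell(knowledge, fact, observers):
    """An open conversation: everyone present learns the fact,
    and learns that everyone else present knows it too."""
    for person in observers:
        knowledge[person].add(fact)
        for other in observers:
            if other != person:
                knowledge[person].add(("knows", other, fact))

# Case 1: A hears B tell C about X (A is a visible listener).
k = {p: set() for p in "ABC"}
tell(k, "X", observers=["A", "B", "C"])
assert ("knows", "C", "X") in k["A"]      # A knows that C knows X
assert ("knows", "A", "X") in k["B"]      # and B knows that A knows X

# Case 2: A *overhears* the same conversation. A learns everything,
# but B and C never register A as a listener.
k = {p: set() for p in "ABC"}
tell(k, "X", observers=["B", "C"])        # the open part excludes A
k["A"].update({"X", ("knows", "B", "X"), ("knows", "C", "X")})
assert ("knows", "A", "X") not in k["B"]  # B doesn't know that A knows X
```

The asymmetry between the two cases is exactly the hearing-vs-overhearing distinction in the example.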

Anyhow, it seems to me the ability to reason could be an emergent capacity. Again, I'm not saying AI is conscious, or that the process it's going through is remotely the same as human reasoning. I'm just saying that it's arriving at conclusions that our species can only reach through reasoning, and so for practical purposes we can simplify and say AI is reasoning. It's like the Chinese room, right? For all practical purposes, the person who hears a question, goes into the room, and comes out with an appropriate response in perfect Chinese can speak Chinese. For my purposes, it doesn't MATTER whether AI is doing anything remotely like reasoning, if what I want to know is whether it is capable of "figuring out" various things.

For instance, let's say it's set up to remember its exchanges with users, and let's say developers give it the task of making its exchanges with one particular person as long, friendly, and self-disclosing as possible. It could look at all its past exchanges with the person, which would have varied in length, friendliness, and self-disclosure. Could it come up with some plausible candidates for things associated with the longer, friendlier, etc. convos? Could it try varying the amount of those things in future convos, assess whether they had any effect, drop those that did not, and come up with new ones?
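That vary-and-assess loop can be sketched in a few lines of Python. To be clear, everything below -- the candidate conversational features, the simulated conversation scorer, the keep/drop threshold -- is invented for illustration; a real system would be measuring real conversations, not a toy model:

```python
import random

random.seed(0)

# Hidden "true" effect of each candidate feature on conversation
# length. The loop below doesn't see this dict directly; it has to
# discover the effects by experiment.
TRUE_EFFECT = {"ask_followups": 5.0, "use_emoji": 0.0, "self_disclose": 3.0}

def simulate_convo_length(features):
    """Pretend to run a conversation and return a noisy length score."""
    base = 10.0
    return base + sum(TRUE_EFFECT[f] for f in features) + random.gauss(0, 1)

def prune_features(candidates, trials=200):
    """Try each feature, estimate its average effect on length,
    and keep only those with a clearly positive effect."""
    kept = []
    for f in candidates:
        with_f = sum(simulate_convo_length([f]) for _ in range(trials)) / trials
        without = sum(simulate_convo_length([]) for _ in range(trials)) / trials
        if with_f - without > 1.0:  # crude keep/drop threshold
            kept.append(f)
    return kept

print(prune_features(list(TRUE_EFFECT)))  # → ['ask_followups', 'self_disclose']
```

The keep/drop step here is just a crude mean-difference test; the point is only that the loop the comment describes is mechanically simple.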

And good grief Carl do not come back and tell me it can't "come up with" things because it's not a conscious thinking entity. I fucking get that!

Apr 24, 2023·edited Apr 24, 2023

On a possible Great Filter scenario by Elevate Thy Gaze:

"Welcome to The Great Filter

Millennia of suffering and soft-eugenic selection finally produce a gene pool and associated population of sufficient number, intellect, and conscientiousness to develop institutions and tools capable of coordinating million-plus groups and deploying non-animal-based energy-driven machines for transforming the environment. Industrial civilization takes off. Civilization progresses to evolve ever-greater means of control over the environment, at ever smaller and ever larger scales.

Tools are developed for solving child mortality, which plummets from 50% to less than 1%. Those babies and children formerly predisposed to die – due to mutations impairing immune function, metabolism, cognition, physical symmetry, etc – now survive, and reproduce themselves.

The expansion of the population accelerates to leave quaint Malthusian limits in its dust. Production of food and resource conversion into consumption outstrips this population growth by orders of magnitude.

Resource abundance triggers adaptive plasticity in the general population to become more consumptive, more concerned with present gratification, and less concerned with long-term planning, sacrifice, and community. The proportion of mutant load in the population accelerates.

Ever-larger proportions of people develop allergies, auto-immune disorders, mental disorders, genetic diseases, developmental deformities, learning disorders, and cognitive aberrancy. The average IQ of the population declines, slowly at first then quicker.

The conscientiousness and work ethic of the population declines. The growth of the underclass accelerates; ever more resentful, criminally inclined, welfare dependent, non-productive, uneducable, entitled mouths to feed.

Ever more resources are diverted away from maintaining the civilization to feeding, housing, policing, healing, educating, avoiding, and entertaining the growing unproductive underclass that can’t but parasitize its middle class and elite hosts,

the latter of whom pander to the underclass to maintain their power and luxury. The genetic predisposition of the population to care for all children, the cultural norms dictating care for children, the institutional structures in place to care for children, the historical categorization of declining child mortality as one of the species’ greatest achievements … all prevent even the thought that anyone, any group, any institution, any country would ever consider the first solution and allow child mortality to start to grow back from 1% to 50%. Such is unthinkable. Unsayable. Inhuman. Other pressures prevent any talk of the second solution, of eugenic selection to reverse these effects, especially hard eugenics,

but even the softer and less innocuous incentives facilitating soft eugenics. As the decline proceeds it reaches a tipping point before the civilization has mastered reality to the point that it might have been able to attempt the third solution, of genetically engineering and correcting the vast, complex array of defective mutations at conception with the aid of artificial intelligence.

The civilization crumbles under the stress, divides and separates under the cacophony of perverse incentives, denial of reality, a plethora of conflicting ideologies and religions, weighed down by astronomical debts erected to fancifully kick the can down the road, tipped over by elites able to see only the immediate present and caring only for their own interests. The bloated, dysgenic, miscegenated, heterogenous, chaotic mess of the gene pool that built the civilization now lacks all unity and direction, a nuclear-irradiated slime mold spitting out tendrils as aberrant individual experiments in all directions. Those few that remain who are still willing and able, vastly outnumbered by those who are not, realise with crystalline horror the fate that is about to befall them.

They understand all too clearly what they have allowed to be mindlessly, cowardly pissed away. Dysgenic Satan rises from Hell and with a nod and a wink to Malthus, drags the population back down to raw nature, red in tooth and claw, once again.

Only this time all easily accessible energy-dense resources have been consumed. While the population will bounce back again after a millennia of suffering and tempering in nature’s hell-forge, to select again a civilized population, it will never again trigger an industrial expansion due to the easily accessible energy and resources required for such no longer existing.

The first attempt destroyed the easily accessible stepping stones required for such, and the bloated, consumptive, dysgenic population pulled the ladder up behind them, then destroyed themselves with consumptive excess and the denial of nature, entropy, and the hard facts of evolutionary reality.

You have been warned. You get what you deserve. You deserve the consequences of your inaction."


Where did this return of the fantasies of genetic decline, so popular with the Victorians, come from? Can anyone seriously suggest that good old-fashioned huge mortality is really the best solution for the human race? I assume this is a parody written to troll, as I guess no one can seriously use the words "dysgenic" or "miscegenated"!


I wouldn't be so quick to dismiss those concerns as "fantasies", although the words of the comment you're replying to are certainly crass and cruel.

Natural selection works by a well-understood mechanism: it kills off whatever the environment has arbitrarily decided doesn't deserve to live. This results -- tautologically -- in populations well adapted to their environment's every whim, provided you take a big enough average over time and population.

I don't like this. One of the very few moments I cried in my life was when I finished reading Flowers for Algernon; among the others was when I was contemplating suicide. Flowers fills me with such deep rage that to this day I dread remembering its storyline. But rage alone doesn't change reality.

>good old-fashioned huge mortality is really the best solution for the human race

Technically speaking, noticing a problem doesn't imply suggesting a solution. Sometimes comments like the one you're replying to suggest one very heavily by using morally charged words, but you can just as easily take it as a case for extreme genetic engineering, or mind uploading, or cybernetic enhancements, or any of the countless futures imagined by transhumanism. The problem, stripped of any calls to action or morally charged judgements, is this: evolution used to kill humans the environment doesn't like, and now it increasingly can't, because we stand in its way with temporary solutions. But it's still trying very hard, and the moment our defenses slip is the moment all those people (who have increased in number and keep doing so) fall to what was awaiting them all along.

Letting them fall is one solution to the problem, castrating Evolution is another.


A Modest Proposal sprang to mind.

As well as the mass murder fetish, Elevate thy Gaze also seems to have a thing about women.


Wow, that's a pretty unpleasant Twitter account this Elevate Thy Gaze keeps! And what a mess he makes of my beloved genetics!

At first glance, the account unfortunately doesn't look like a parody in the style of Swift, but rather like well-established beliefs.


What the hell is the relish for mass murder with these eugenicists? There's a couple of posts right now on this thread about dysgenics and mutations and "all the people who should have died due to good old natural selection are being kept alive by medicine and then having feeble, inferior children and dragging down the human gene pool", with the heavy implications that we should let them die and not bother treating them, or gene-sequencing everyone and eliminating the inferior embryos before birth.

This kind of "Hitler was right" eugenics makes me want to dump these people, dressed only in their underwear, into the middle of the Kalahari and tell them survive or bust, your superior genes surely mean you can manage that unaided. If you die, then good old natural selection has weeded out your inferior genestock.

Apr 26, 2023·edited Apr 26, 2023

You want an example of real people being harmed because we are essentially keeping people dependent on the medical system? Here is an example of a man who became disabled and was only "treated," not cured. As soon as a blackout occurred, he died. This is what happens when you make people more and more dependent on the medical system just to survive. Real people have died because of this. People who have type-1 diabetes need insulin produced by other people; people who don't have type-1 diabetes produce it just by being alive. The former are inherently in an unstable situation.

http://www.ontariotenants.ca/electricity/articles/2003/ts703h17.phtml

https://www.independent.co.uk/news/world/americas/us-politics/insuluin-prices-diabetes-alec-smith-b1972475.html

Having a disease is not some identity. It costs lives. You want to ascribe evil intentions to all people who merely propose improving the genetic health of future generations. The only other option is people who get dealt a bad hand and society gives zero shits when they die because they can't afford the medication which has to be taken thousands of times to just survive.

One missed dose and you die.

One forgotten dose and you die.

One blackout and you die.

One economic depression and you die.

One regulatory change and you die.

One piece of legislation and you die.

One regulatory change by the FDA and you die.

One minor accident and you die.

Hitler was wrong, obviously.

Apr 26, 2023·edited Apr 26, 2023

Both examples you cite - a man who lost his limbs in an accident and persons dependent on insulin - are examples where no permanent cure is currently possible. So it’s “be dependent on the medical system” or “give up and die”. The first one in particular seems like a non sequitur because his condition isn’t related to genetics at all.

And it’s still a bit disturbing how the dysgenics people here aren’t saying “do genetic engineering to wipe out genetic disease” or even “tell (or force) people with genetic disease to not have babies” but rather “let’s fantasize about a big disaster that kills lots and lots of people for the sake of creating a survival of the fittest scenario”.


A permanent cure is possible for type-1 diabetes: give the person more beta cells so they can produce insulin on their own. The man with a disability could have been given prosthetics, or some kind of new technology could be developed to help him live a more independent life. The problem is that we keep thinking about solutions like this that create dependency. We should be trying to make people less dependent on the medical system. The reason people keep coming up with those disasters is because they've happened in the past. If something like the Carrington event ever happens in modern society, lots of people who are dependent on the medical system will suffer. We must prevent this by making people more independent of the system. Lifelong drug regimens are not doing that! And certainly making future generations more prone to poor health isn't either.


The fact that we are actively pursuing a permanent cure for diabetes kind of proves my point. Nobody thinks permanent dependence on insulin is a good thing. Likewise, we are actively trying to develop better prosthetics (I think part of the problem with the Canadian guy was that he still hadn’t sufficiently healed to start using prosthetics).

People are only choosing “dependence on medical technology” because the alternative is “die” or “never be born” or in the dysgenics mind “make sure your genetically inferior parents die so you never get born”.


The problem is that giving the guy with Type 1 diabetes more beta cells doesn't 'fix' his inferior genes, so he must be prevented from breeding and fathering children with the same disability.

This is the problem right here, and why I'm digging my heels in on this. The people who say "let's do eugenics right with genetic engineering so no more babies are born with preventable conditions, in order to reduce human suffering" are finding themselves, like it or not, on the same side as the "do eugenics right so the inferior are pruned away, life unworthy of life has no place in our civilisation".

You may think that sounds extreme, but I've seen one too many cases of "we only want *this* exception" turning into, you've guessed it, the slippery slope. Right now in my own country we're having the exact same debate on abortion. The 'abortion on demand' crowd made no bones that this was their long-term goal, and they used the 'we only want to permit abortion in *these* cases such as rape/incest/fatal foetal abnormality/threat to life of mother' crowd as water-carriers.

Nobody was going to be forced to go against their conscience. Doctors and medical staff who didn't want to refer for, or perform, abortions wouldn't have to do so. We were never going to have abortion on demand, it would only ever be for such extreme cases.

Five years on, and now it's "not enough hospitals are providing abortion services, this must be changed; the three day waiting period should be scrapped; women should not be forced to go abroad for abortions because of restrictions".

So yeah, I don't believe "we'll only have the *nice* kind of eugenics in law" is ever going to halt at that step of the ladder, once permitted.

Apr 26, 2023·edited Apr 26, 2023

>>You want to ascribe evil intentions on all people who merely propose improving the genetic health of future generations.

"[R]esentful, criminally inclined, welfare dependent, non-productive, uneducable, entitled mouths to feed"

"[T]he growing unproductive underclass that can’t but parasitize its middle class and elite hosts"

"The bloated, dysgenic, miscegenated, heterogenous, chaotic mess of the gene pool"

"[A] nuclear-irradiated slime mold spitting out tendrils as aberrant individual experiments in all directions"

If you're looking to defend people who, in good faith, "merely propose improving the genetic health of future generations," this is not the hill to fight on. This is just spiteful bile.

Apr 26, 2023·edited Apr 26, 2023

Let's have a look at the original comment, which quotes the likes of the below (I won't say in admiration, as it could equally be in condemnation; I don't know the OP's views on the attitude of the post, which seems to be 'suppose this is the Great Filter preventing us from seeing other advanced alien species?'):

"Those babies and children formerly predisposed to die – due to mutations impairing immune function, metabolism, cognition, physical symmetry, etc – now survive, and reproduce themselves.

...Ever-larger proportions of people develop allergies, auto-immune disorders, mental disorders, genetic diseases, developmental deformities, learning disorders, and cognitive aberrancy. The average IQ of the population declines, slowly at first then quicker.

Ever more resources are diverted away from maintaining the civilization to feeding, housing, policing, healing, educating, avoiding, and entertaining the growing unproductive underclass that can’t but parasitize its middle class and elite hosts, the latter of whom pander to the underclass to maintain their power and luxury. The genetic predisposition of the population to care for all children, the cultural norms dictating care for children, the institutional structures in place to care for children, the historical categorization of declining child mortality as one of the species’ greatest achievements … all prevent even the thought that anyone, any group, any institution, any country would ever consider the first solution and allow child mortality to start to grow back from 1% to 50%. Such is unthinkable. Unsayable. Inhuman. Other pressures prevent any talk of the second solution, of eugenic selection to reverse these effects, especially hard eugenics, but even the softer and less innocuous incentives facilitating soft eugenics."

This person of the quoted post says that the "the third solution, of genetically engineering and correcting the vast, complex array of defective mutations at conception with the aid of artificial intelligence" so what are the first two solutions? Hard and soft eugenics.

So the hard eugenics is *precisely* the outcomes you present to me as horrible:

"One missed dose and you die.

One forgotten dose and you die.

One blackout and you die.

One economic depression and you die.

One regulatory change and you die.

One piece of legislation and you die.

One regulatory change by the FDA and you die.

One minor accident and you die."

Remember, we're not yet at the point of being able to implement the third solution of genetically engineering superior babies who have no such thing as defective pancreases. So the 'Great Evolutionary Filter' writer would *approve* of "people who get dealt a bad hand and society gives zero shits when they die because they can't afford the medication which has to be taken thousands of times to just survive."

Yes! Let the useless eaters die off so the gene pool can be cleansed! They should never have been born in the first place, but since they have been, then "allow child mortality to start to grow back from 1% to 50%" and if we're too squeamish to let babies die because of lack of medication, then let the adults die off as nature and natural selection intended.

The second solution, soft eugenics, is to let the sickly, weak, mutation-ridden inferior survive but make sure they don't breed - forced sterilisation and abortion. "Three generations of imbeciles is enough".

I think the sick should be treated and enabled to live. Great Filter eugenicist does not. And your reproaches don't affect me, since they depend on "let's genetically engineer and select healthy embryos so no sick people get born" to work, and we're not at that point yet.

We're at the Solution One (hard eugenics: let the useless die) and Solution Two (soft eugenics: sterilise the unfit so they can't breed) levels. And that's what I'm protesting against, and what you are supporting even if you think you're not, because without Solution Three you are in agreement with the GF Eugenicist: the unfit should and must be removed from the population, and if that is by permitting child mortality to rise again and letting the sickly adults die, so be it.

Apr 26, 2023·edited Apr 26, 2023

Of course he says some horrible things and those things should be ignored and condemned, and of course sick people should be able to lead good lives.

I think soft eugenics means "new eugenics" -- "New eugenics, also known as liberal eugenics (a term coined by bioethicist Nicholas Agar), advocates enhancing human characteristics and capacities through the use of reproductive technology and human genetic engineering. Those who advocate new eugenics generally think selecting or altering embryos should be left to the preferences of parents, rather than forbidden (or left to the preferences of the state)".

Of course people who are sick should be allowed to live and thrive. But a person is more than just the sum of their genes. If some genes inside a person are harming them and could harm their children, why should we not screen for and remove those genes? Again, I am talking about genes, not people. Why is there currently a moratorium on germline engineering? Why should we assume that this moratorium will ever be lifted? Embryo selection and iterated embryo selection are ways of reducing mutational load and reducing the burden of disease in future populations without harming anyone. Yet every time someone says anything about trying this, people say it's eugenics and it's evil and you're pro-genocide. It really seems like there are people who want my children to end up with heart disease (my grandmother had multiple heart attacks at 67, dementia, and multiple strokes that finally killed her at 85), type-2 diabetes (my grandfather got it and died), and a whole host of other diseases (my siblings have impulse control disorders, ADHD, PCOS, infertility, and OCD-related issues which make their lives a living hell; none of them wishes they had them and had to take meds for the rest of their lives) by never allowing these technologies to be used in the germline.

I am sick and tired of people restricting and banning these technologies for some vague ethical reason about how people will get smarter or taller. How about just being able to live a life without 10 different disorders and diseases? Can I have that? Is that okay? Or do my children need more diseases and disorders, and have to take more drugs, because it's immoral to do gene editing and embryo selection? The moratorium will never be lifted. Scientists will keep crying about how gene editing is immoral because the people who would be affected by germline gene therapy are not yet born and can't choose whether to have the treatment. I know people who pray every day not to have the diseases and disorders that they have.

People who make these statements live privileged lives. They all have parents that were healthy and smart enough to get into a good school, get a good job, and have a successful career, only to prevent other people from also getting to live that life. They never dealt with these issues. How many drugs do you think a person can take? If future people have more diseases, how many drugs will they have to take? Oh, it's okay, they can just take 50 pills and live healthy lives! It's not as if those pills will have massive side effects. It's not as if, when those pills get more expensive, people will suffer painfully and die! No, people in the future will just take more pills for all the dozens of diseases they now have, partially inherited because we don't want to have a proper discussion about the proper use of germline gene editing and embryo selection and instead want to keep screaming about how immoral it is!

This isn't the 1950s; nobody is seriously thinking about doing forced sterilizations and mass murder. And if they are, they should be ignored and condemned!


"every time someone says anything about trying this people say it eugenics and it evil and you're pro-genocide"

LOOK. AT. THE. GUY.

He's lamenting that child mortality isn't 50% like the good old days when the unfit died off. You think maybe, just possibly, shit like this is why people think "eugenics = evil and pro-genocide"?


I condemn the guy and his racist and genocidal statements. But can we please at least try to have a discussion about germline gene editing and embryo selection?

The reason people like this keep popping up is because we won't have discussions about gene-editing tech and condemn anything that is even slightly related to germline gene editing and embryo selection. These people take advantage of that. They take the most extreme stances because we take the most extreme stances against any tech that is remotely related to it. We implicitly state that all interference in the germline is evil, horrible "eugenics," and people like that come in and take the most extreme stance, because why not? Any moderate stance will get the same response. So it selects for people who don't care about our morals and take the most extreme stance.


Absolutely agree. Also I think smartness is overrated here as a quality that benefits society. It probably produces faster development in science and tech fields, but I'm not at all sure that equates to progress. Why, exactly, is it progress to have that little shitbird GPT4 ever-available and ever-growing? I think the qualities that benefit society the most are empathy, even-temperedness, playfulness, kindness, openness, nurturance, joyfulness. And I'm not saying that out of personal vanity. I'm high on testable intelligence, and probably below average in most of the qualities I'm pointing to as beneficial.


Just take your example to the extreme. Imagine that hundreds of years ago all the people that society considered smart were locked up and/or prevented from doing basic science. Intelligence is overrated, the people said. We don't need these super-smart people and all the disruptions they cause. Think about the risks, they said. We would not have antibiotics, which have saved the lives of millions of men, women, children, and animals. Many countries would still have extremely high infant mortality rates. Most of the world would still be living in abject poverty. One bad harvest and there would be a famine. The problem is that new problems are always popping up, and you need to be constantly innovating for those problems not to eat your society up.


You’re not taking my point to an extreme, you’re distorting it. I said I thought empathy, etc, were more important to the welfare of society than intelligence. I did not say intelligence had no importance, and certainly did not say or imply there should be active efforts to keep smartness genes out of the gene pool or to keep smart people away from society. You can take any idea and distort it so much that it turns into something dumb as fuck. Here, I’ll do it to yours. “And there’s another danger to locking up the smart people and letting all the friendly empathic people go free. Friendly empathic people like other people a lot and that leads to them BREEDING LIKE RABBITS, for god’s sake. A few generations of those 12-child friendly empathic families and there won’t be a square foot of empty space left on the planet. Every square foot will have a pair of friendly idiots on it talking, laughing, being compassionate and, of course, copulating. One generation more and the human race will smother in one giant love fest.”


Fair enough.

Apr 26, 2023·edited Apr 26, 2023

Empathy, even-temperedness, playfulness, kindness, openness, nurturance, and joyfulness are all traits that are not necessarily being selected for. How can you be sure we aren't selecting for the most heinous people -- people who will commit heinous crimes and feel no remorse? The default trait most people reach for when talking about natural selection is intelligence, but no law says that is the only trait being influenced. How many people who have done horribly evil things get off scot-free or end up with a significantly reduced sentence? If those people (note: I am talking about people every single person who reads this Substack would agree deserve a life sentence) have as many children as possible, then society will not become more kind and empathetic. Nobody reasonable thinks that intelligence is the all-important trait. I would agree with you that kindness and empathy are even more important. But your argument assumes those traits will always remain the same or are increasing.

Natural selection is the process by which certain traits become more common in a population over time because they are more beneficial for the organism's survival. For example, if a trait that makes a population less empathetic and even-tempered is more beneficial for an organism's survival, then it is more likely to be passed on and become more common in the population. The same can also happen through genetic drift, the process by which certain genetic traits become more common in a population due to random chance.

The concept of "dysgenics" is generally considered to be negative, as it suggests that over time, natural selection may be working against the genetic fitness of a population, leading to a decline in overall health. This can potentially have negative consequences for the well-being and success of individuals within the population, as well as for society as a whole.

Yes, it's important to note that the concept of dysgenics is controversial and not widely accepted in the scientific community. But what if some deleterious mutations do become more common over time?

Most people understand that the idea of "dysgenics" has been used in the past to promote extremely unethical practices such as forced sterilization and eugenics programs, which are widely considered morally and scientifically indefensible. And yes, it is important to approach discussions of "dysgenics" with caution and to consider the broader social and ethical implications of any proposed interventions aimed at improving the genetic health of a population.

So, the idiocracy script?

Heh, that's a really interesting dystopian trap that I'd never heard before. I suspect it's not realistic though. Do you really think that all science would have stopped in 1700 or whatever if coal and oil never existed? It certainly would have slowed our progress, but I think we'd have eventually found a way, especially with ancient textbooks lying around that explain exactly how fission works. The sun would still be a power source, as would hydro and wind. And the first oil was whale oil. I really doubt that we could ever make ourselves *that* resource blocked. Clever idea though.

Whale oil wouldn’t have powered an industrial civilisation. Wind and solar have only just become viable.

That said there’s plenty of oil left. Just not in the U.K. as much.

Comment deleted

Really? In which story?

"At the Mountains of Madness" has some of the main arc. For more detailed obsession with degeneracy, some of the stories about rustic New England might be better, but it's been a while. Maybe "The Shadow over Innsmouth"?

Also, it's not Lovecraft, but "The Marching Morons" by C.M. Kornbluth might as well have served as inspiration for "Idiocracy", even though the plots are quite different.

I've read those, and I wouldn't say they were exactly a match for this. Sure they were about ancient fallen civilizations, but the specific trope of being unable to rise again because of resource exhaustion seems novel to me.

Resource exhaustion specifically isn't new, but I can't think offhand of the earliest place I've seen it. I think maybe "The Mote in God's Eye" by Niven and Pournelle touched on it. Or maybe in one of the first 2 of Niven's Ringworld books, where the ring civilization was unable to recover after a collapse because the entire environment was artificial and lacked the kind of natural resources that a life-bearing planet would have.

Mote definitely involved resource depletion and cycles of civilizational collapse as major plot elements.

Has anyone yet pointed out that LLM backwards is MLL, hebrew for "verbiage"? TINAC.

This is not a coincidence because nothing is ever a coincidence.

And was Jerusalem builded here, among these dark satanic MLLs?

I wish I could +1 this.

There's some sort of thing that lets you do that, IIRC; the code's not gone, just hidden.

Does anyone know how long Covid symptoms typically last for people who were vaccinated/boosted awhile ago, but no shot recently? I just tested positive, and am pretty bummed because I was supposed to defend my doctoral dissertation on May 17, but I'm worried that now I'll have to postpone. My symptoms are bad enough that I can't focus on writing or preparing. I got the Pfizer vaccine and booster, but that was back in like 2021. Would appreciate any advice or insight people have. Thanks!

They've cut the isolation guidelines to five days, presumably that's the norm. https://www.cdc.gov/coronavirus/2019-ncov/your-health/isolation.html#:~:text=If%20you%20had%20symptoms&text=If%20you%20test%20positive%20for,unable%20to%20wear%20a%20mask.

Unvaccinated, I've tested positive for COVID once, and probably had it one or two more times after (they'd closed the free testing center by then). The first time I had mild cold symptoms that lasted about four days: a very clogged nose and a persistent cough. The second also lasted about four days: a very scratchy throat, not much else.

The third time was inside the immunity window from the second, and included brain fog and a runny nose. The brain fog lasted about three months, the runny nose has lasted so long I'm thinking there's some other cause for it.

The median duration seems to be around a week or a bit less, but with a very long right tail (which eventually shades into long Covid). Wishing you a quick recovery and a successful defense.

And don't let anyone tell you testing and trying to avoid passing it on is a bad idea. Rapid test positivity correlates pretty well with infectiousness by most accounts, so it's worth being aware of. It would be great if we had similar tests for other, less serious respiratory diseases as other countries do.

(But the FDA can be relied on to drag their feet when it comes to letting people find out something about their own health without going to a doctor. It's amazing we got the tests we have, but the FDA is fairly rushing back to its longstanding habits.)

My wife and I both finally got COVID in the 2nd week of March. We were both fully vaccinated (two shots in 2021, and three boosters since, most recently in October 2022 for me).

We were both prescribed Paxlovid the day after testing positive, which seemed to quickly (within 12 hours or so) knock COVID down to presenting like a head cold.

We both had lingering mild to moderate fatigue for a couple of weeks thereafter.

We're both feeling well now.

Apr 24, 2023·edited Apr 24, 2023

Someone I know was recently in this situation and start to finish (symptom onset to feeling fairly normal) was about 1 week. There were days when he couldn't do much, but others where it was only about as inconvenient as a nasty head cold. COVID symptoms seem to wax and wane unpredictably over the course of the disease more than other respiratory viral diseases.

How serious is your defense, anyway? Most of the time in my experience it seems to be somewhat of a formality, in that they've already accepted your dissertation, and they're going to ask you a few pointed questions just to make sure you know that they're Serious People Not To Be Trifled With, but the idea of actually denying you the degree or something is not seriously in contemplation. Perhaps if you just let your chairman know that you've been suffering with COVID and you might be a little fuzzy-headed at the actual exam they'll be even more inclined to be kind, let you pause longer to gather your thoughts, et cetera. And besides all that, presumably you know everything in your dissertation super well already, because it's been a long five-year process, so what do you even really need to bone up on these last few weeks? Maybe just use the illness as a chance to rest up?

Anyway, let me just reiterate the advice to talk to your committee. They're human beings, and they know you are, too, and they have the benefit of having seen many situations like this before, so they may have some friendly advice or reassurance or simple solutions that might not even have occurred to you.

Hey Carl, it's Eremolalos. I wish you would post some more on ACX. Moon Moth and I were talking yesterday about how we missed you, & discovered that we had independently decided to go back to an old thread like this and try to reach you through it. So if you hear from Moon Moth today too, it's not that one of us convinced the other to join in giving you a nudge. Two different people decided on their own to!

Typical COVID symptoms last 3-7 days for most people I've dealt with, with the vast majority returning to work after about 4-5 days (n = three digits at my organization). Why would you think you have long COVID if you just tested positive?

My best guess would be that when it comes to duration of the symptoms you're having, you might as well be unvaxed. I don't know what the average symptom duration is, or the range, but info should be findable online. Ask for info about omicron, not covid in general, because symptoms vary a bit across variants.

Thanks! Ah man, I hope not. I guess getting the vaccine was still the right call either way, just thinking about benefit vs risk, but it's still discouraging if the effect has completely worn off and it won't do me any good at all.

Simple immunity just lasts a few months. Protection against hospitalization and death falls by about 50% after 6 mos. for the elderly or immunocompromised. Hospitalization and death for people in your demographic are so rare even for the unvaxed that there probably isn't any data on how fast they wane. So I'm just guessing that protection against the full-intensity sick-at-home symptoms has probably fully waned for you by now, 18 mos. out from vax. How long people are sick seems pretty variable anyhow. Some feel much better after 4 days, some feel like hell for a couple weeks.

If the vaccines were useless, Hawaii would have been ravaged by Covid these last 12 months as Covid-infected mainlanders have travelled there. Hawaii did the best job mitigating spread because it doesn't have an interstate highway running through it, so it has very low natural immunity and very high vaxxed immunity, though IIRC the booster uptake isn't that great.

Why on earth would an already vaccinated person take a COVID test? Just treat it like a cold and get on with your life.

https://youtu.be/hPtH43VmvJI offers one possibility.

My wife tested because she had planned a day at the spa, and didn't want to infect anyone there. She was surprised to test positive. I tested positive three days later.

Pursuant to my previous comment, the positive test allowed us to be prescribed Paxlovid, which I believe mitigated things greatly.

This was a pretty snarky comment but I'll try to give a sincere reply anyway. So far the symptoms feel completely different and much worse than a cold. Like with a regular cold I could still focus on working, but with the current symptoms it feels like even just thinking and writing code is taking a ton of energy, in the same way that exercising does. I don't have any comorbidities or anything, so I'm very unlikely to die from Covid, but I'll still be bummed if I have to delay finishing grad school and starting my new job because of this.

Anyway testing myself for Covid was definitely the right choice, because now I know to stay home and quarantine, and not expose anybody.

Actually I was relieved when the test came up positive for Covid, because if I was feeling this way and it WASN'T Covid, I'd be pretty worried that it might be some bad unknown illness.

IDK, if I were super sick and unable to work, my normal reaction would be to sleep and be sick for a few days, take some fluids, and 95% of the time you are going to feel better in a few days. Maybe more.

If you are too sick to work, don't, most likely you will quickly recover. That it is COVID does not give you much actionable information.

This was my approach until a colleague, someone my age and of essentially identical demographic background to my own, caught COVID, had mild symptoms for a week, then had increasingly-severe symptoms for six months of a type that she had never previously experienced and which defied even experimental medications, and then abruptly died. So now I do take it a bit more seriously as a potential problem.

Ouch. Was her diagnosis still COVID, for all of these 6 months, or was this a complication of COVID, as in this horror story:

https://www.detroitnews.com/story/news/local/detroit-city/2020/04/19/5-year-old-first-michigan-child-dies-coronavirus/5163094002/

In this story, they didn't even really try to save her. Both doctors and parents just gave up on the 5-year-old and took her off the ventilator after just 2 weeks. (I do hope this news outlet misquoted her mother.)

I wonder if your colleague got care from doctors who cared, or if they just blew her off, too. Doctors who care do make all the difference between life and death. It's often a fatal mistake to assume that your hospital will do all they can.

My colleague had a week of normal COVID symptoms and then never again tested positive for it. The six months of increasingly awful symptoms (ones that she had never before in life experienced) were a complication.

And no she had excellent care. This was in the Chicago area which includes some world-class hospitals, and she'd had the same primary physician for many years with whom she had a warm and close relationship, and she had no issues getting access to the appropriate specialists. They tried all sorts of stuff with her, just ultimately lost the fight.

When was that? I remember some stories like that at the extreme fringes of my social circle from the first six months, but nothing recently and definitely nothing among the vaccinated.

She was fully vaccinated, and died on December 11th 2022.

One good reason to take a Covid test: If it's covid, he should be especially careful not to expose anyone he knows who is high risk.

Apr 24, 2023·edited Apr 24, 2023

Is he interacting with a lot of high risk people? Should he be interacting with them if it is some other severe respiratory infection?

What real information is he gaining here? Oh it isn't COVID, it is just some other illness making me feel like garbage, I guess I will go give grandma a kiss today?

Every doctor I personally know is firmly in the camp of "why the fuck would you want to take a COVID test?", whenever the topic comes up, or someone in our friend group is like "oh I can't do X tonight because I tested positive for COVID (but feel fine)".

Now that he lost his broadcasting job, is Tucker Carlson the new GOP frontrunner for 2024?

People tend to assume that there's not only an outsized left-wing obsession, but also an outsized right-wing obsession with Tucker Carlson. Do we know that that's actually the case? (No, the fact that he had 3 million viewers does not prove anything.)

In any case, I can't imagine GOP nominating someone whose only qualifications are being a talk show host even in a normal year, and 2024 won't be a normal year.

3 million viewers proves there are a ton of Americans who like to see him talk about political issues on TV. That may be only 1% of the population, but who gets more? I suppose there are different theories about what makes one electable, but I subscribe to the one that goes: "Appear most to be presidential on TV". Worked for JFK, Reagan, Obama, Trump and most of the others since 1960 if you score them compared to their competition. Trump's most underrated strength in 2016 was his decades of media experience and ability to ad lib consistently with overconfidence into the camera. Hardly anyone can do that.

A number of current conservative politicians got into politics through the talk show host route. Mike Pence, for instance.

Apr 25, 2023·edited Apr 25, 2023

Got into politics, yes. Became president - I can't think of an example.

3 million viewers is a completely meaningless number. Fox News pretty much has a monopoly on political TV among conservatives (I guess there's also Newsmax, but I'm not sure where it's available). Granted, I haven't seen any TV in the past 5 years, so my info is obsolete, but it used to be that if you're a right-winger in the mood for checking what's going on in the world, Fox is the only channel that doesn't do its best to offend you within the first 30 seconds, so you're guaranteed to end up watching Fox, even if you're not the biggest fan of Fox or of a specific talk show host. Perhaps this changed recently, but I doubt it.

I'm not sure how anyone gets Tucker Carlson as the GOP front-runner. He's mostly famous at this point for being the head cheerleader for the Trump administration, but he's never going to trump Donald J. Trump for the support of Trump voters. Particularly not now that his privately negative views of Trump are on the record.

And the GOP voters who aren't already committed to Trump are mostly going to insist on some level of demonstrated executive competence, like being Governor of a State. Not a populist demagogue celebrity outsider who Pwns the Libs with his outrageous truthiness, because Trump has that market locked up.

>he's never going to trump Donald J. Trump for the support of Trump voters

Trump is a lock for the nomination if you literally mean Trump voters as in those in 2016 & 2020. Obviously someone only beats Trump for the nom by taking the majority of those voters from Trump. I suspect you mean “Trump voters” as in his base, as opposed to all the voters who were merely voting for anyone but Hillary in 2016. The “anyone but Hillary Trump voters” might prefer a conservative with demonstrated executive competence, but they are in no position to insist on it, at least not if they also want a Republican victory in 2024, like they did in 2016, when they were also willing to vote for someone without executive competence. Perhaps we are in an era in which a populist demagogue with an extraordinary ability to speak to a camera always beats a competent executive who is mediocre at speaking to a camera. If so, only Trump and Carlson fit the bill right now.

Of course, Carlson may have no interest whatsoever in running. I was being facetious saying “front-runner”, but I think there’s a case that Carlson may meet more of the prerequisites for winning a GOP primary than does DeSantis or any other governor.

>“Trump voters” as in his base, as opposed to all the voters who were merely voting for anyone but Hillary in 2016.

Correct; I should have been clearer about that.

>The “anyone but Hillary Trump voters” might prefer a conservative with demonstrated executive competence, but they are in no position to insist on it, at least not if they also want a Republican victory in 2024, like they did in 2016, when they were also willing to vote for someone without executive competence.

There's a good chance that they *will* be in a position to insist on it in the Republican primary, if the populist demagogue niche is filled by Carlson rather than Trump.

And if they can't, then they won't be expecting a Republican victory in 2024. They *might* hold their nose and vote for Carlson in the general if he wins the primary, but that plus all of Trump's base plus some really favorable electoral college math was *barely* enough to win when they were running against the most hated Democrat in the past twenty years. It wasn't enough against Biden in 2020, it isn't likely to be enough in 2024, and it certainly won't be if the GOP loses the fraction of Trump's base that won't turn out for anyone but Trump.

author

Do we know why he lost his job? Did FOX fire him for his role in their lost election lawsuit?

We do know there's been a wave of very abrupt firings at several networks (with Fox having one of the bigger waves). So this appears to be a larger phenomenon that Carlson is just the biggest example of. (The second biggest is probably Don Lemon.)

It’s an interesting situation. He did his show on Friday and Fox was initially running promos for tonight’s show earlier in the morning.

I can't wait for juries to award Alex Jonesesque damages for the defamation done to Nick Sandmann, Kyle Rittenhouse...

...why is everyone laughing?

perhaps you could frame this as a question, or a precise insight? I don't want to engage if this is just going to be a twitter dunk fest.

I don't know any news beyond he got fired. The speculation I heard on CNBC was perhaps Murdoch wanted to distance himself from Tucker after the lawsuit, but that's all speculation.

It's most likely due to what was revealed due to the lawsuit's discovery phase.

"In a number of private text messages, Carlson was harshly critical of Trump. In one November 2020 exchange, Carlson said Trump’s decision to snub Joe Biden’s inauguration was “so destructive.” Carlson added that Trump’s post-election behavior was “disgusting” and that he was “trying to look away.” In another text message conversation, two days before the January 6 attack, Carlson said, “We are very, very close to being able to ignore Trump most nights. I truly can’t wait.” Carlson added of Trump, “I hate him passionately.” The Fox host said of the Trump presidency, “That’s the last four years. We’re all pretending we’ve got a lot to show for it, because admitting what a disaster it’s been is too tough to digest. But come on. There isn’t really an upside to Trump.” "

Turns out that Carlson also was, off-air, blasting election-conspiracy Sidney Powell as a kook whose claims about the 2020 election were based on no evidence.

Each of those attitudes (regarding Trump and regarding the 2020 election) is anathema to Fox's core audience. Carlson had to go in the same way that a prominent cardinal who turned out to actually be an atheist would have to be booted from the Catholic Church.

Hmm. And when did this come out? I assumed he was a grifter but that he believed in some of his own stuff.

In early March.

He may be getting booted from the church of Fox, but is there any indication the congregation has turned on him?

Booting him out is pre-emptive, that's the point of doing it early and firmly.

Comment deleted

Official? It was attributed to "sources said", which could mean just about anything.

Comment deleted

I'm inclined to agree with you about "discrimination" not being the real reason, but I still think it's a stretch to call it the "official" reason. It was probably just the worst-sounding thing that someone at the Guardian could dig up.

The last several months I would watch his show every now and then…it’s unwatchable. He makes all these strange voices and faces and his takes are all knee jerk anti-Biden without any substance. Although he did well for himself after only getting into his safety school, Trinity.

Apr 25, 2023·edited Apr 25, 2023

I mean, you can say the exact same thing about the mini-industry of latenight shows making fun of Trump that has sprung up between 2016 and mid-late 2021. Knee-jerk reactions, obnoxious and preachy sermons with the audience clapping every 2 minutes, and yes, strange voices and faces, lots of them.

The appeal of watching Tucker is the catharsis of seeing someone have the same emotional reaction to the clown world news as you do.

Who are boomers going to watch now? Hannity? You could replace Hannity with GPT-3 (you need at least GPT-4 to replace Tucker).

Comment deleted

to be fair, I find cable TV politics shows pretty much unwatchable across the board.

Comment deleted

Chris Christie has the necessary persona to go toe to toe in a food fight with Trump. He said last week he wouldn’t do it unless he had a legitimate shot at the nomination. “I’m not a paid hit man.”

Comment deleted

Given the way Trump stabbed him in the back after his support in 2016, I'm surprised anybody would have to pay him at all.

Tucker is in the perfect position to take a swing at Trump. One of the texts made public by the lawsuit was "I hate Trump. Can't wait till he's gone." or something to that effect. So, unlike other famous Republicans who could run, it makes no sense for him to wait for 2028 while wearing a MAGA hat now. Tucker is very good at going on the attack and is the only potential candidate who has more media experience than Trump. He's got a large fan base, and I don't think CNN is going to hire him back.

But perhaps he has no interest in running and will simply retire to a villa in the Sierra Madres.

"He's got a large fan base, and I don't think CNN is going to hire him back."

CNN?

Tucker on CNN’s Crossfire interviewing Jon Stewart

https://m.youtube.com/watch?v=aFQFB5YpDZE

He worked at PBS and MSNBC also.

He hosted CNN's Crossfire for about four years.

https://en.wikipedia.org/wiki/Crossfire_(American_TV_program)

Apr 24, 2023·edited Apr 24, 2023

He used to work there.

Oh, huh! The more you know, I guess.

He could always just say those texts are fake news.

This is from memory so the wording may not be perfect:

“I hate him passionately.”

“We keep pretending there’s an upside to this guy but there is none.”

“He’s a demonic force”

“All he’s good at is destroying things”

and so forth and so on

geez, for the first time I'm finding that I actually agree with Tucker Carlson.

Uh, no. George W. Bush played as dirty as anyone. McStain and Romney never had a chance because Bush left office with an approval rating in the 20s, and with the perspective of 2023 Bush looks like a solid bottom-five president. It took decades for Democrats to recover from 4 years of Buchanan.

Comment deleted

Oh gosh no, the prospect of President Romney appeals to me every bit as much as President Gore, who I think would have combined the worst parts of the Ford and Carter administrations if he'd managed to swing it.

Had it been Hillary versus Mitt in 2012, I think we'd have had the First Female Ever by now.

I think he made a mistake when he tried to present himself as more serious by insisting he be called by the more formal form Mitthew.

Okay, that was The Onion again.

What's wrong with Mittens? He's a boring managerial type, a money guy mostly, starched shirt on the morals front, very little in the way of novel ideas. That sounds like a huge relief to me. I'd be delighted with a President who just inched towards a more balanced budget, made sure competent but boring managers were appointed to the Executive Branch, and otherwise stayed the fuck out of my life and didn't start a single war or War On [Insert Random Hysterical Definition Of Ultimate Evil Here] for eight long years.

I say enough with the God-damned Gotterdammerung Generation, with the constant parade of existential crises that must be solved right now, even at the cost of upending civil life from coast to coast. Bah. I've got stuff to do; I'd love to be able to largely ignore Federal policy and politics for half a decade because it was all about boring stuff like tweaking accounting rules for military procurement contracts or deciding whether to issue a stamp to honor Fred Flintstone or Barney Rubble.

Apr 25, 2023·edited Apr 25, 2023

I don't get the impression that there's much *there*. A boring managerial type is no problem, but President Boring Manager is not going to get much done. We saw it with Trump, how the entrenched opposition within the machinery of government set its face against co-operating (remember all the news stories about prominent officials boasting they were actively hampering and disobeying orders?)

Romney would not have attracted the same level of overt opposition, but he too would have been bogged down in "yeah sure the conservative wants to roll back our human rights" squabbles (again, remember the binders full of women remark being used to demonstrate he was going full-on Handmaid's Tale?)

Four years of bland gruel that got nothing much done, and gently sinking downwards even more in stalemate along partisan lines, would have benefited nobody. What was President Romney *standing for*? What was he going to do? Michelle Obama's healthy school lunches type programmes? And how have those lasted, or what effect did they have at all?

To be non-partisan, Gore struck me the same way: in power, a grinning non-entity. Like Governor Goodhair, aka Newsom. I agree not making things worse is very desirable, but 'things just continued on as usual and sloped along the easy downwards incline, getting that tiny bit more dysfunctional, but not making enough fuss to be noticeable' isn't an improvement.

Put Mitt in charge of deciding the design of the commemorative Barney Rubble stamp by all means; have him as part of the administration. But not as president, even as a figurehead president, in that case you might as well just have an actual dummy while the real decisions are being made elsewhere.

Comment deleted

We have very different outlooks on a lot of things. But I 100% endorse this.

I voted for GHWB with very little enthusiasm in 1992, but he's looked better pretty much every year since.

I actually kind of like Ford, who did his best with a bad hand and the furthest thing from a popular mandate any president has had. (And if anyone is ever in Grand Rapids or Ann Arbor, his museum and library are surprisingly interesting.)

Even normal-mediocre or normal-bad by 80s/90s/00s standards would be such a relief by now.

Romney has been the only responsible candidate Republicans have nominated since 1992. Republicans are a bankrupt party, just like the Democrats after the Civil War, which is how a con man like Trump was able to outmaneuver the GOPe. The Democrat establishment told everyone to vote for Biden and they did it because Obama was a good president.

Was John McCain an irresponsible choice for nominee? I don't see it. He was a long-serving senator with a military background. I imagine his years as a POW left him with some mental as well as physical scars, but he sure seemed to be functioning pretty well.

"Romney has been the only responsible candidate Republicans have nominated since 1992."

Romney the theocrat? Romney the sexist? Romney the animal-cruelty man?

Yes, I remember how unobjectionable the left found good old Mitt back then.

if your criterion is that you want to find a Republican nominee for president that the Democrats didn't say anything mean about, I think you're going to be looking for a very long time. I imagine the same thing is true when you're looking for a Democratic nominee that the Republicans didn't say anything mean about.

Apr 26, 2023·edited Apr 26, 2023

I may be wrong, but I'm guessing someone who references the "Democrat establishment" and references the "GOPe" probably isn't coming at the question from left of center.

(Using "Democrat" as an adjective for the party, in preference to "Democratic" is one of the purest examples of a textbook shibboleth in circulation in the United States.)

For myself, Mitt was the last Republican I voted for for President, ending an unbroken streak since my first presidential ballot in 1988. And if the GOP became capable of nominating someone like him again I'd certainly be inclined to take a look.

But I don't expect to see that in my lifetime, any more than the post-Woodrow Wilson Democratic Party could have imaginably nominated another Grover Cleveland.

The reference to Against Ice Age Civilizations in this post reminded me to go back and check the rest of Miniminuteman's series, which was referenced in the comments (but incomplete at the time) https://www.youtube.com/watch?v=-iCIZQX9i1A . Haven't finished it, yet, but overall would recommend.

It does a pretty good job of averting what Scott talks about in the Fideism post: https://astralcodexten.substack.com/p/contra-kavanaugh-on-fideism - it pretty much does give a simple factual explanation of the phenomenon being raised: e.g. a lot of the second episode is exactly about explaining the natural process that leads to something that looks like an "underwater road of Atlantis".

... though the channel doesn't entirely get away from Fideism - or at least a sort of Fideist tone. I went back to watch a video on the Orontius Finaeus map (https://www.youtube.com/watch?v=dBRMy-6YXcs), which was referenced in the Ancient Apocalypse video, and it did eventually have an explanation for why this old map appears to prominently depict an ice-free Antarctica (TL;DW - only the north parts of Tierra del Fuego and Australia were explored by Europeans at that point, and the mapmaker likely assumed they were both part of a large, pole-spanning continent)

... but to get to that explanation, I had to get past a fairly long section that was essentially dunking on people who believe this map depicts Antarctica: including a whole section about a particular congressman and his views on abortion, homosexuality and sexual intercourse. That sort of digression is the sort of "in-group point scoring" that's probably going to push away the sort of people who *actually* need to be convinced that this map isn't evidence of an ice-free Antarctica.

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

He's a young guy out of current academia, and one look at him (long hair, rings, etc.) should let you know which side of the political fence he's going to come down on. The lil' wispy facial hair is endearing once you're old enough to have seen similar efforts by the younger male members of the family showing off how Growed-Up they is.

I mostly skipped the political bits because I'm old enough to be his mother (and possibly his granny) so I'm not going to elevate my blood pressure with Junior showing off his Gender Studies cred 😁 It's the passionate conviction of youth that such Zeitgeist matter is the Most Importantest Thing Ever, and our political opponents are not alone Evil but Stupid. I'm willing to yell about it in certain circumstances, but not when it's some kid still wet behind the ears.

Expand full comment

Is it just me, or is ChatGPT incapable of writing poetry-in-stanzas in any rhyme scheme other than AABB?

(Reposting from the last open thread since I posted it from the tail end. I've gotten ChatGPT to write a haiku - hence the slight wording change - but I've never gotten it to adopt rhyme scheme ABAB so far.)

Expand full comment

I've found both ChatGPT and GPT4 won't consistently write Haikus that follow the 5/7/5 pattern. This is particularly true when I ask it to do more complicated things like write Haikus that include a joke. Still, the poems it's made for me are pretty great! Here are two I particularly liked, along with their prompts:

Prompt: write me a traditional japanese haiku (5/7/5 style) about Twinkies

Response:

Golden Twinkie cake,

Creamy center, sweet delight,

One is not enough.

And my personal favourite. The prompt was: now write a traditional japanese haiku (5/7/5 style) about left boots, add a joke on the last line

The response:

Left boot, all alone,

Missing its partner in crime,

Solely surviving.

(Note: I'm sorry, but I couldn't come up with a joke for the last line that would be appropriate to include in this conversation. Is there anything else I can help you with?)

Sadly it doesn't realize it actually has a very nice pun in the last line haha

Expand full comment

What are you prompting it with? It might need to read a few examples to work out what it's supposed to be doing.

Expand full comment

I got it to write a villanelle: https://gyazo.com/eb0f47b0aede0f70477c3755daf0f218

and here's a sonnet(? it added 2 extra lines when misinterpreting my prompt though) in the ABAB rhyme scheme: https://gyazo.com/9ea7dfb2adda711c610d77a601b02e69

Expand full comment

Thanks for the comment! I tried again, and have gotten it to do a villanelle, a sonnet (which mostly scans in iambic pentameter!), and a free-verse poem.

Expand full comment

Can it do sonnets?

Expand full comment

Not really. Not version 3 anyhoo.

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

I just want to alert my California peeps who have solid Internet access that California is a big state, and there are still lots of areas in California where you can't get Internet access—rural, inner city, mountain country, and reservations. Doesn't look like it's going to happen soon...

My buddy Odie met the requirements of the State's Dept of Education to demonstrate a way to provide inexpensive Internet access to underserved areas of the State—in this case Humboldt County. He lit up a section on the outskirts of Eureka that Comcast, Cox, and AT&T couldn't be bothered with—and he connected a public school that didn't have access to the Internet.

The State came out for Odie's ribbon cutting and the publicity. But neither he nor any of the contestants were declared winners. Nor did any of them receive any cash awards from the State. The excuse was: "When we said cutting-edge technologies, we meant creating new technologies. Your solution isn't revolutionary enough."

From the California Sun: "In 2021, California officials promised $1 million to anyone who could come up with a way to provide cheap, high-speed internet access across the state. Odion Edohomon, a Bay Area small business owner, took up the challenge, investing $700,000 on a pilot project in rural Humboldt County that delivered the first reliable internet access the area ever had. Everyone was thrilled. But Edohomon didn't win; neither did any other contestant. It was all 'a devastating farce,' they told the S.F. Chronicle."

https://tinyurl.com/2tm746az

Expand full comment
deletedApr 24, 2023·edited Apr 24, 2023
Comment deleted
Expand full comment
Apr 24, 2023·edited Apr 24, 2023

Hehe, in some ways the pandemic made the urgency worse, because few of them were doing any work. There were some city/state offices that basically went into hibernation for 6 months.

Expand full comment

3. We'd basically have to get super-lucky and dredge up something from a coastal area that persisted from an Ice Age civilization, since odds are it would have been in the more temperate coastal or lowland regions (assuming it wasn't in Africa). I remember historian and podcaster Patrick Wyman saying he thought there were probably civilization efforts before the end of the last glacial epoch, but they just didn't survive - if some group built a huge wooden structure back in 50,000 BC, there'd be virtually no way to tell that it ever existed now except under very unusual preservation circumstances.

More interesting to me would be finding remnants of a civilization from the Eemian Interglacial. We seem to have had modern homo sapiens back then, and maybe even genetic hints of earlier migrations out of Africa by those folks. So how come no evidence of a civilization? Something we're missing about humans not quite being behaviorally modern back then? None of them built in stone, or only built in stone somewhere now under water, sand, or sediment?

Maybe if we apply that laser technology used to find ruins in the Yucatan to the Sahara, we'll find something interesting buried in its midst.

Expand full comment

If they can't build a giant stone structure, or make pottery, or smelt lead, or domesticate animals, or carve 'Grug was here, 3rd of October 47,926 BC' in a mammoth tusk on the Dogger Bank, why ought we call them civilised?

A big wooden structure would be neat but it doesn't exactly scream 'look upon my works, ye mighty, and despair'.

Expand full comment

Neither will the pyramids, in 50,000 years.

Expand full comment

They will probably last longer than that.

Expand full comment

The TV scientists have lately proposed that the most complete American pre-Clovis sites for human habitation are under water off the California coast. I wrote a Canadian researcher fifty years ago regarding stone anchors he and other academics found in Santa Monica Bay, and he replied that his team thought the stones may have been brought by Japanese fishermen, either lost or marooned. There is also a thread of investigation focused on bark cloth in both the Americas and Asia, which suggests the peopling of the Americas through the counter-clockwise currents of the South Pacific -- an alternative location for the Mexican legend of Aztlan. Interesting stuff. Of course, science is a process, rather than a destination, so none of this 'proves' anything.

Expand full comment

We are all going to die! (In Minecraft)

https://www.reddit.com/r/admincraft/comments/12c6ev8/chatgpt_banned_me_from_my_own_server/

I thought this was a creative use of GPT with some funny results.

Expand full comment

Was reading about Hawaii and learnt many Polynesians in Hawaii and worldwide are Mormons.

Do these Mormons share - at least relative to their non-Mormon counterparts - in the so-called social/economic Mormon advantage?

Expand full comment

Looking for book recommendations on mathematics education.

Particularly interested in a history of math education in the US that deals well with how the impulses 'math people' have on how to reform math education fare in practice, or in resources that deal with the human side of classroom learning - social dynamics of teaching, and how that impacts student experience & performance. But I'm curious to hear about anything related that you thought was particularly insightful!

Expand full comment

This might be a stretch, but is there any noticeable similarity between math anxiety and the trouble LLMs have with math and logic? In other words, is there any evidence that people considered to have that type of trouble with math, have it because they're focused on the same general method of answering questions that pre-ChatGPT4 LLMs employ?

Expand full comment

Mmmm, no? LLMs do not resemble any human reasoning whatsoever.

I'm not a professional teacher or educator, but I'm good at explaining things I know, and I was good at math. People are bad at math because they don't know the many patterns or tricks of thinking that make you good at math. The computer scientist Alan Kay once said "A change of viewpoint is worth 80 IQ points", meaning that looking at things the right way and thinking about them with the right tools and ways of thought can make a huge difference in your problem-solving capability, effectively boosting or braking your intelligence (however you want to define it; don't get hung up on Kay's use of 'IQ', he just means a generic notion of intelligence). Math, like Morality and Money and Writing and Law, is just a tool: not a material tool, but a tool for the mind. Just like an ordinary hand tool, it can let you do things that seem impossible with your bare naked mind, but also, exactly like any other tool, it takes training and getting used to.

People who are bad at math sometimes:

1- Think that being good or bad at math is a fundamental attribute of the person and not a learned trait. They take every instance of being wrong or stumped as evidence that they are "not born this way".

2- Do not understand (and perhaps do not wish to understand) that math is built up layer by layer, and that you have to grok the layer below and master it to the point of triviality before you can move on to the next.

3- Do not understand the strange interplay between order and arbitrariness in math's logical structure. On the one hand, math is completely arbitrary; platonists are wrong, and the schools of Philosophy of Math that oppose them (Fictionalism, Formalism, ...) are right. Math *starts* as a sublimation and an abstraction of the real world, but then we invent a whole lot of lore that bears no resemblance to reality whatsoever.

On the other hand, we carefully choose the "arbitrariness" of our rules so they are rich and interesting. We can define a square root of -1 because it just so happens that this fleshes out an extremely interesting mathematical structure, but if you try to define the multiplicative inverse of 0 you're mostly just going to be disappointed. Math can be thought of as being just like fiction: you can say anything you want, but within the limits of the Suspension of Disbelief (and those change with time and place and audience and author). There are no real-world rules, but there are other rules in their place, and there are meta rules that govern those rules, and meta-meta rules that govern the meta rules, and so on.

Mathematical objects exist in a strange information-theoretic niche : not too predictable, not too chaotic and random, just enough entropy to tickle the strange parts of our brains that are tickled by arbitrary-but-orderly things.

4- Do not realize that math education is fundamentally outdated and broken, and that math is far more beautiful and interesting than the atrociously boring textbooks make it out to be. This is relevant because the extremely boring and sterile presentation is easy for someone who already knows the topics from elsewhere, or has at least heard the big words before, but it completely stymies those who see it for the first time.

5- Do not see that math is not a disjointed set of topics and facts; there are overarching themes, like a good drama that follows several characters doing multiple things with their lives but revolving around a single idea. Math always involves (a) generalizing things, (b) inventing new actions and objects and then taking them to extreme conditions that violate intuition, (c) asking about the least or most <X> objects, where <X> is some interesting attribute, (d) asking whether we can always do some action to something, (e) mapping something to something else, so that operations on one correspond to operations on the other, (f) many, many other things.

This is again just like fiction, just like there are things like the Hero's Journey and TV Tropes that find commonalities and shared themes in a vast array of different fictional works, math has plenty of abstract commonalities underlying the seeming diversity of its topics.

tl;dr: Both LLMs and people fail at math because they are not thinking in the right way. But that's where the similarity ends: LLMs try to bluff through math by parroting plausible words, and that's not how math works. People fail for far more interesting reasons.

Expand full comment

I definitely could have used "here are the patterns, these are the tricks" when being taught in school, all too much of it was "this is just how it is and no I can't tell you why".

But at the same time, I hit the barrier at a low level of the layers, this was in Second Class. I still, to this day, decades later, remember seeing my tears dripping down onto my graph paper copybook because I just Could. Not. Get. It.

Even though the teacher came and gave me individual explanation. No. Stuck hard in the mire and never got out. I do think there is a quirk of brain structure that does have the pattern matching, trick learning skill and if you don't got it, then there's never any way maths will make sense or seem beautiful; you just have to learn it all off by rote like 'monkey see, monkey do' and not understand the formulas you are plugging away at.

And I think people who 'get' maths just don't realise what it's like for people who don't, it's the way I could never understand why other people found English classes hard or boring.

Expand full comment

I don't disagree. Obviously brain differences exist, and obviously people are born/raised with various spectrums of abilities. Past some point, which differs by person and by subject, it's just cruelty to keep explaining the same thing with the same negative result over and over again. This is one of the reasons I hate K-12 and (most) formal education in general: they are braindead, factory-inspired institutions that don't acknowledge individual differences and don't empathize with people who think differently from their peers, and heaven forbid if you're, say, an ADHD guy or gal.

And it's not like Civilization will collapse if a few tens of thousands of people grow up not knowing calculus; humans are social and the Tribe takes care of a lot of things anyway. I'm a grown man and I don't know how to tie my shoes, and it's mostly okay. Young Deiseach grew up to be a high-effort commenter on a good internet blog despite hating math, and a productive member of the Tribe overall. It's okay.

>Second Class.

Is this an Irish way to say you hated math in year 2 of your K-12 education? Hmm, that's pretty early. I wouldn't explain this with your biological differences; I would just explain it by the fact that you were a child with unregulated emotions, and you panicked when you saw your peers do something that you couldn't. That panic is a terrible feedback loop that makes you even more incapable of doing the thing that people do with seeming ease. I know the feeling. I was on the wrong side of it plenty of times.

Also, I saw you say you were already a teenage Star Trek fan in the 1970s, so that means you were a child in the 1960s? That's the reign of New Math[1], a misguided attempt to reform math education in some western countries like America, France and the UK, driven by Sputnik panic and the stereotype of Russians as being extraordinarily good at math. Wikipedia doesn't mention Ireland by name, but it's very plausible it followed the then-trend. If so, your math education might have been a lot harder and more braindead than average.

For what it's worth, if there is a particular math concept that you never "got" and you want me to explain it to you from the very ground up, I'm in :). I love explaining things I'm passionate about, and the attempt's success or failure will teach me a lot about me as a teacher and the limits of my education-fu.

[1] https://en.wikipedia.org/wiki/New_Math

Expand full comment

Thank you for the kind remarks, but it is definitely more than merely unregulated emotions going on. My father was capable at maths, tried helping me with maths homework on and off during school years, and the resulting rows were so bad my mother stopped us ever doing it again 😁

Second Class in Irish schools is about age 7/8:

"Although children are not obliged to begin education until the age of 6, most 5-year-old children are at school. Primary education consists of an 8-year cycle: junior infants, senior infants, and first to sixth classes. In Ireland all children are entitled to free primary education."

This is self-diagnosis, which we all know is fanciful and not accurate, but I do think I might be dyscalculic. Some of the symptoms sound very familiar (e.g. I regularly flip numbers around, as dyslexics do with letters, so for instance if a sequence is "9586", I could 'see' that as "9856" or "9568").

Born in the 60s, teenager in the later 70s. Yes, the New Maths came in then, which was part of the problem; my parents had learned maths the old way and couldn't explain the New Maths to me, which I did not get. If my grandmother, born in the late Victorian era, hadn't taught me how to do long division, since I was not getting it the way we were learning in class, I'd never have been able to learn it.

I also am useless at music, which I do think is related. Most of my paternal family can easily pick up how to play instruments and sing, which goes along with some mathematical ability, and I'm the black sheep there as well 😀

Expand full comment

For what it's worth, there's also a question of time, with the growing child's brain. I had one child who is quite smart, and so he was put into algebra young. But his brain just wasn't at the right stage, and it was an awful, awful experience that left him thinking he was Bad At Math almost indefinitely. The extreme irony was that a few years later he took both AP Stats and AP Physics and cruised to easy As. I'd say "Bad at math, eh?" and he'd just say (humans being remarkably proof against epiphany when it threatens identity) "oh! that's totally different. That's just messing around with equations and variables and such, easy stuff. Actual math is freaking impossible for me."

But the difference really was nothing more than 8-12 months of brain development. He just hadn't wired up certain connections in the brain at age 13, and then at age 15 it was all there and the stuff was easy. And this matches all my experience with kids. You try to teach them before the brain is ready and it's incredibly hard, and it leaves scars. You just wait until the neurons finish whatever hokey-pokey line dance they're doing in there, and it's smooth as butter. The degree to which we *fail* to adjust education for individual variations in physical development is criminal.

Expand full comment

If Platonism is wrong, *why* do the attempts to define square root of -1 succeed, but the attempts to divide by zero fail?

Expand full comment

Because one of them resulted in an interesting story and the other one didn't?

You might as well ask "If James Bond platonism is wrong, why do attempts to write films about him in Russia succeed, while attempts to write films about him as a furry fail?" It just means that James Bond, as a concept/story imagined by the human mind, meshes well with other concepts/stories of the human mind, like the Cold War and espionage, and doesn't mesh well with others, like being a furry. It doesn't mean that James Bond is actually an eternal entity that exists outside of space, time and causality, and has some fundamental identity that James Bond writers merely discover.

This view predicts that a furry James Bond could actually exist if the story had a different history or author (say, a gay proto-furry man in the 1950s), or if we had better writers who could imagine a furry James Bond convincingly. The analog in mathematics is that 1/0 could indeed exist if mathematics had taken a slightly different historical path or if we had different/better minds, and I fully agree with this. Mathematics is fiction; it could have turned out very differently if our history was different or if our brains were different, and alien mathematics is very plausibly very different from our own, possibly defining 1/0 interestingly (but not necessarily human-comprehensibly).
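(A sketch from the editor, not part of the commenter's argument: there is also a standard ring-theory way to see the asymmetry being discussed. Adjoining i to the reals gives the field C, isomorphic to R[x]/(x^2+1), in which every nonzero element still has an inverse; adjoining an inverse for 0 to any ring where 1 ≠ 0 destroys the whole structure, from distributivity alone:)

```latex
% Zero annihilates every element, using only distributivity:
\[
0 \cdot a = (0 + 0) \cdot a = 0 \cdot a + 0 \cdot a
\;\Longrightarrow\; 0 \cdot a = 0 \quad \text{for all } a.
\]
% Hence any z with 0 * z = 1 collapses the ring to a single element:
\[
1 = 0 \cdot z = 0
\;\Longrightarrow\;
x = x \cdot 1 = x \cdot 0 = 0 \quad \text{for all } x.
\]
```

So i is fruitful and 0^{-1} is sterile (outside the trivial one-element ring), which is exactly the "interesting story vs. disappointing story" distinction the comment is pointing at.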

Expand full comment

Cannot stop thinking about your comment here

Expand full comment

Also, for a better perspective on tech use in education, read Audrey Watters. http://hackeducation.com/ and her book Teaching Machines.

Expand full comment

I really like Larry Cuban's writing on this. He has a blog at https://larrycuban.wordpress.com/ and his book Cutting Through the Hype is really good.

Expand full comment

Ooh, thank you for these suggestions! I've heard of Zager's book but couldn't quite figure out where it was coming from - will definitely take a look now. And Larry Cuban's writing is all new to me - thank you!

Expand full comment

I work in the field and enjoy nerding out about these things so if you ever want to discuss things let me know!

Expand full comment

Hey bbs. A new one on the undefinable magic of the interests that we form before being fully formed ourselves:

https://kyleimes.substack.com/p/im-all-out-of-interest-i-cant-interest

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

How do folks here grade the recent Starship launch attempt (especially in light of the site’s predictions about likelihood of reaching orbit)?

I think I give it a C. On the plus side, the rocket fueled up, lifted off, remained stable and under control for a nearly full duration S1 burn, and generally appears to have gone where it was supposed to. Also the flight termination works. No small thing for a huge rocket like this.

But there were multiple critical flaws that I don’t think are quick fixes. First and most obviously, the rocket completely failed to separate (or maybe failed to reach the right flight conditions to separate?) resulting in the termination, so nothing about the second stage could be demonstrated. Second, at least 6 engines were not operational for a significant part of the burn, and a seventh may have blown up / flamed out toward the end of the burn. Apparently 3 did not light at all. Yes, the system is designed to still reach orbit with >0 engines failed, but that’s a lot of failed engines. Third, the rocket seems to have basically destroyed its launch pad, digging out a huge crater and flinging chunks of concrete at high velocity 1/4 mile away, destroying at least one car in the process. Unclear why SpaceX thought they could get away with what amounts to a simple, albeit massive, launch stool and concrete pad rather than a more complex flame deflector and deluge system like that of other large rockets, but it seems like a major miscalculation.

I’m actually a bit concerned that this test was rushed - yeah I know SpaceX likes a “go fast and break things, learn by failing” approach, but given the magnitude and number of things that went wrong I have to imagine they had data showing this test was unusually unlikely to succeed, or perhaps worse, they just miscalculated really badly? Wild speculation, but is it possible Elon ordered this test to go off a little half-cocked to “prove” that it was the FAA that was holding up development?

EDIT/UPDATE: SpaceX now seems to be saying that the rocket “lost altitude and tumbled” due to “multiple engine failures” and was then intentionally terminated. Not sure how that makes me feel - on the one hand it means sep was never really tested because they didn’t get that far, so that piece is “incomplete” rather than “failed”, but on the other the engine issues now seem more serious.

Expand full comment

I think that's entirely the wrong question to be asking. It's silly to grade individual test runs. If you _did_ feel the need to judge your success based on each individual test run then you'd slow down progress by having weird incentives to never test anything in public until you're as certain as possible that it will work.

It might be fair to judge Starship's progress based on the tests we've seen so far. And by those standards, it seems to be going well. It was less than four years ago that we saw the first "hovering water tower" proof of concept of the Starship architecture. It was less than three years ago that we saw the first full-size Starship launch, and less than two years ago that we saw the first Starship successfully launch and then land again (after getting it not-quite-right a bunch of times). Now it's time to start testing the bottom stage, and yeah, the first flight of any brand-new rocket has a huge probability of ending imperfectly, but sheesh, haven't we learned anything by now from Falcon 1, Falcon 9 and Starship?

Expand full comment

The fact that they’ve learned so much from Falcon 9 is part of the reason this test felt a little half baked. They just did a hot fire test a couple months ago where 2 out of the 33 engines failed to light, and it was partial duration and half thrust. Somehow this convinced them that a full duration full thrust test would go great? John Schilling says it well below. Testing is good, even when it results in failure. But the cost of the failure should be commensurate with what you stand to learn from it. “We can’t get the engines to all stay lit and the pad is too weak” were things they already knew, and they just wasted a lot of hardware and infrastructure to learn the same lesson over again. I would say this test should have waited until the pad was ready and probably should have flown a boilerplate Starship upper stage, at least until they can prove they can run 33 Raptors for 3 minutes simultaneously.

I don’t know, all of Starship feels a little “off” to me, it’s in an uncanny valley between “truly impressive innovation” and megalomaniacal fantasy. Falcon 9 was ambitious but its goals and general outlines made a ton of sense, and its current share of the launch vehicle market is a testament to that. With Starship it’s much less clear what it’s really for - Musk seems to be hoping he can just build a big ass rocket and people will come up with all sorts of economically viable ways to fill it (yea I know he wants to colonize Mars - see the “megalomaniacal fantasy” side of the valley…)

So is Starship Falcon 9, or is it Falcon 1? Or Hyperloop? Or "catching fairings in a net"? Or Red Dragon? Or "fully recoverable stage 2 for Falcon"? Not every SpaceX / Elon idea has been a winner.

Expand full comment

SpaceX themselves will be the first customer of Starship, because they depend on it for the next version of Starlink satellites.

Expand full comment

It was an awesome spectacle, but as an engineering achievement, I'm inclined to give it a 'D', for "Destroyed your only launch pad", or close enough to it. And in a way that makes it even more unlikely that they'll get permission to launch from the Cape any time soon. They *might* be able to rebuild Boca Chica for a second try late this year, but that's still a huge setback.

Against this, they learned very little, and even less that required a flight test. They can't even claim to have validated survival through Max Q, because they had enough engines out that their Q was very much less than the Max expected on a successful flight. Mostly, they've validated that their engines continue to work under acceleration, which rules out pogo, and that their flight control algorithm is better at handling engine-out scenarios than that of the N-1. Not nothing, but not worth the cost. And the launch-induced damage to the engines, hydraulics, etc, makes it much harder to pull out useful data on how those things will function in a normal flight.

The one thing they did learn was that their launch pad absolutely needs a flame diverter. But they already suspected that, and they "learned" it in the most extravagantly expensive way possible. In particular, they put a fully operational Starship on top of the stack, possibly the most expensive part of the system, and never even reached stage separation to test it.

I'm all for taking bigger risks than NASA in pursuit of bigger rewards, but that's only a good thing in modest doses. If they weren't going to spring for a flame diverter on the first test, they should have done a full-thrust pad firing of the first-stage engines as a standalone test, where they could have shut down the second concrete started flying without having to lose the booster. And the second stage should have been a boilerplate model.

Expand full comment

> they "learned" it in the most extravagantly expensive way possible.

If their existing launch pad was just a big block of concrete, could it even have been all that expensive?

Expand full comment
Apr 26, 2023·edited Apr 26, 2023

Look at some of the aftermath photos - the fuel farm is also smashed up pretty good. Those are really expensive and complicated. Even the concrete was a special variety, and that much concrete that can bear that much weight is never a simple install. This ain’t a patio.

Expand full comment

Eh, by European space agency comparison, SpaceX is doing well - I remember during the 90s when pretty much every report about attempting to launch Ariane was "it blowed up".

I honestly wasn't expecting much from this launch, and provided they figured out what went wrong and how not to do that next time, I think they're doing okay. There's a reason we talk about things being "not rocket science" re: levels of difficulty, after all.

Expand full comment

1. Ariane is not the European Space Agency

2. Ariane only blew up once during the 1990s

3. Ariane and SpaceX are following completely different development models.

Check your priors.

Expand full comment

Guess all my memories of listening to late-night radio news reports of yet another failed launch attempt in French Guiana are things that happened in another dimension, so?

Expand full comment
Apr 27, 2023·edited Apr 27, 2023

Almost certainly, as Ariane has an impressive track record. There were a handful of failures of the Ariane 4 during the 1990s, but that was already a mature rocket with a 97% success record. You are probably remembering the Ariane 5 explosion, which failed on the maiden flight thanks to a rather infamous software bug. But other than that, Ariane had a pretty good 90s. https://en.wikipedia.org/wiki/List_of_Ariane_launches_(1990%E2%80%931999)


"Grading" is a completely wrong frame. That said, other than the launch pad annihilation, there were no major surprises, apparently. Maybe they will try their next launch from the Cape, they seem to have the permits.


I don't really have the insider info that would be required to evaluate.

>I’m actually a bit concerned that this test was rushed

Isn't the whole point to rush? No bad things happened; seems like they were being reasonably safe. It's not like the rocket fell on a school.


By rushed in this context I mean “launched with known critical flaws that made success highly unlikely”. It’s one thing to fly to find out your “unknown unknowns”, it’s another to expend a valuable test asset if you know going in it’s doomed with problems you already know about. As I recall, for example, they already had trouble keeping all the engines lit on the short duration static fire - unclear whether they had applied fixes to that issue, and new ones cropped up, or if this was just a YOLO where they were expecting 10-20% engine failure rate.

Also the pad issue seems to have been less than “reasonably safe”. No danger to personnel, but a significant hazard to equipment.

Apr 24, 2023·edited Apr 24, 2023

I'm not going to give a letter grade, but I think there's two slightly different framings to this question that imply slightly different answers:

1) To what extent did this test fulfill its own objectives

2) To what extent does this test represent a step forward or a setback for the overall Space X/Starship mission.

I think people who are negative about the launch are largely thinking in terms of #1, while people who are positive are more thinking in terms of #2. In terms of #1, the goal was to lift-off, get through max-Q, separate, nearly orbit, crash near Hawaii, and of course, it only got 2/5 of those things done.

But when people say things like "anything after liftoff is a bonus" I think they're looking more in terms of #2: the role of the test in the wider Starship mission. And I genuinely think that's probably true and the mission was probably a good success (if not an outstanding one) in those terms.

They were always going to lose the ship (crashing was the plan, not landing) and they got lots of data. The pad thing seems to be the main downside and whether that's an "unforced error" or a "calculated risk" seems to mostly be a Russell conjugation.

In that vein, I think a lot of views of this are getting skewed by people having strong emotions attached to Elon himself: I think a lot of people are just more predisposed to negativity because Elon Musk is associated with it (also, a lot of people the reverse, but probably fewer in these circles). In particular, I think a take like "Elon Musk is playing four-dimensional chess against the FAA" doesn't hold up very well against the Occam's Razor view of "rocket science is just hard".

On that last point - Scott Manley's live reaction was apparently "It's moving, holy shit balls [...] I did not expect that", (https://www.youtube.com/watch?v=w8q24QLXixo @ 0:30): I don't take Scott Manley to be particularly cynical so to me that reaction suggests that this rocket 'just' getting to the stage separation is not as small an achievement as most people seem to think it is.


On the FAA point I don’t think I’m accusing Elon of 4D chess, just pretty typical “range chicken”. He’s been openly griping about the FAA holding up Boca Chica for reviews. It would be a bad look if he got FAA approval and had no rocket to show for it. That would create internal pressure at SpaceX to get a launch off, maybe at a somewhat higher than normal risk profile.


Honestly, thinking about it in terms of 2 is what I thought I was doing. Successful stage separation and “actually running 33 big engines simultaneously” are, to me, two of the biggest possible sources of problems that keep Starship from being successful for a long time (the other being recovery, of course, but that was not an objective this time). These are tricky things that have not been done before on this scale and don’t necessarily scale up linearly.

Both of those things failed, and I doubt either one is an easy fix. The engine operation in particular seems like something that could linger for a very long time (the N1 says hello…). It might get fixed next launch or it might take years to work the kinks out.

So in terms of “what does this mean going forward” - it means there’s probably a lot of heavy duty work that’s left to go. Less than if the thing had literally just blown up on the pad I guess, but complex problems in multiple areas of the design nonetheless.

Apr 24, 2023·edited Apr 24, 2023

I'd go with a D. I don't blame them for the rocket failing to reach orbit - failure for the first full-up flight of Starship Superheavy is disappointing, but still useful given SpaceX's iterative development process.

I do blame them for the pad damage, though. That was an unforced error on SpaceX's part - Musk admitted on Twitter that they were building a water-cooled steel system to deal with the rocket exhaust at take-off, but it wasn't ready in time for the April launch and they still decided to chance it anyways because they underestimated how much damage it would do based on a short, not-full throttle static fire test.

It's a foul-up that likely means they won't get another launch off until 2024, or maybe very late in 2023 if literally everything goes well. Had they waited until the set-up was ready, they might have been able to get 2-3 Starship test flights done this year (their FAA agreement lets them do a max of 5 from Boca Chica per year).

Apr 24, 2023·edited Apr 24, 2023

Man, I haven't followed super closely, so this is not a strongly held opinion, but I have a _very_ different impression: I'd give it a B+/A-.

Mostly because all of the primary goals of this test launch were met. Everything that didn't work was never a primary goal and would have been gravy. Almost every one of the issues you mentioned was downstream of the fact that they absolutely destroyed the launch surface.

As for why they thought it would survive: they had done a full static burn for about the same duration they thought it would be on the pad, and it held up. Admittedly, it was not at full throttle, but still, they burned all engines for 8 seconds at a significant amount of total thrust, and there was _zero_ damage. It's not crazy to me that they thought they could get away with 1 launch (they were already planning to install a stronger surface, it just wasn't ready in time) without the catastrophic damage that we saw.

The fact that they successfully launched anyways after that miscalculation is frankly amazing.

About the only reason I downgrade them from a nearly perfect test launch is because I'd guess that, thanks to the damage from the ejected pad material, not all of the info they learned will be applicable to later launches.

I just don't understand why you think this test was that much of a failure. From what you are saying, it sounds like the _only_ thing you would consider a "perfect" test launch is if literally nothing had gone wrong all the way to splashdown, but that's just not how test launches work. That's not how they _ever_ work, for anyone, and certainly not for SpaceX. And while you can certainly think their methods are bad ones, it's a bit hard to argue with their overall results.


I'd probably agree with this. I think people generally have skewed ideas of hardware development in this space because of NASA's long history of making sure *nothing* goes wrong even the first time -- at staggering cost. SpaceX's philosophy seems to take seriously the idea of problem triage: don't fix something if it ain't broke, or ain't broke enough to matter. Necessarily the only way they sometimes find out if something is broke enough to matter is to try it out, and either you get flaming debris -- in which case it was -- or nope, it was not.

I also feel like people are not always grokking that the cost of the actual hardware is likely far less than the development costs at stake -- the cost for all the high-wage people who have to design changes, for the design and implementation of component testing, for the retooling and retraining to change manufacturing steps, et cetera. So from their point of view "throwing away" $15 million in hardware to get solid data that shortcuts $150 million in design and manufacturing costs is a big win, and I would guess they make that calculation all the time.


I am aware of all of that and am in rocket development professionally. I’m not saying they should go full-NASA and nail the first launch but only at the cost of flying 10 years late. I’m just saying that calling it a “perfect test” when they didn’t even get a chance to see if the fully-built up S2 worked seems pretty pollyannish.


Well, that's why I agreed with the less than perfect grade. But on the other hand, a picture-perfect launch might well indicate being overconservative and spending too much time and money fixing stuff preemptively.

So whether it was the best launch they could have had, meaning they struck exactly the right balance between not blowing up expensive stuff for failure to fix something obvious, and not fixing a bunch of stuff prematurely and expensively before they had actionable data on how important it was -- I do not know.

Indeed, I would guess unless you are deep in with SpaceX engineering and know exactly what they're thinking, what data they need, it's probably impossible to know for sure. I'm basing my overall impression just on the idea that (1) clearly a great deal of stuff they'd never tried before worked, but (2) clearly there were significant things that didn't work, which kind of says "probably close to the right balance for a first try" to me. But that's just a WAG based on how I would do this kind of stuff, if it weren't in the public eye and I wasn't going to be critiqued by a legion of sidewalk supervisors.

Comment deleted

Yes, I think that's the case. The engines have always been the tricky bit, because they endure enormous stress and hence tend to require expensive materials and expensive manufacturing, and you don't want them to have to undergo expensive maintenance each time you refly them (which if memory serves the SSMEs did). That's presumably why SpaceX is standardizing on using Raptors with the same fuel/ox combination everywhere, instead of the several different engines and several different fuel combinations used by other heavy lift vehicles (including SLS). One assumes they think standardization will reduce their costs even if it means introducing some operational inefficiencies.


The vehicle was supposed to get a full burn of the second stage. That was a “primary goal” or they wouldn’t have bothered putting a functional Starship on top of the booster. That’s a lot of cost and complexity wasted if it wasn’t “a goal”.

“Everything above the launch tower was a bonus” is typical Elon post failure speech. Good for rah rah but c’mon man, we don’t need to accept the marketing uncritically.

I would have called it a solid B if all of the S1 engines had worked OR if S2 has separated and ignited. Both would have been an obvious A.


It's not a "post failure speech" it's something he said in the days leading up to the launch: https://www.reddit.com/r/space/comments/12p1cbj/if_we_get_far_enough_away_from_launch_pad_before/

Apr 24, 2023·edited Apr 24, 2023

He does this every time. Sets wild goals (it’s basically an orbital test!) and lower expectations (50-50! Anything above the launch tower is a “success”!). Which is fine.

But if you build a bunch of really expensive hardware, set aggressive dates for when you’re going to be launching stuff to orbit/the Moon/Mars, and set a test with 10 objectives of which you complete ~2, I’m not going to call it a “nearly perfect” test (made up numbers here. But lots of what they wanted to do wasn’t done, clearly)

Comment deleted
Apr 24, 2023·edited Apr 24, 2023

You think blowing up the pad was the source of the engine failures and failure to separate? I could believe the engine failures but the sep failure seems less likely. EDIT: read you more closely. I’m less inclined to give them a pass on “they redesigned the sep system already”. To me that makes me lean more towards the “this was knowingly rushed”. Iterative testing is one thing, but launching with known serious flaws just to say you did seems less wise.

Comment deleted

FWIW, see my edit to my OP. Based on recent SpaceX comments it now sounds like the engine failures directly led to the loss of control and termination, rather than a sep system failure.


A German newspaper is sued after running a fake^W AI-generated interview with the former racecar driver Michael Schumacher, who has not spoken to the press since a 2013 skiing accident. The people responsible for the story have been sacked.

https://www.usatoday.com/story/sports/motor/formula1/2023/04/22/michael-schumacher-fake-ai-interview-editor-fired-die-aktuelle/11721183002/

Is this an example of the media lying? Was it just an attempt to find a way to be misleading without lying which failed?


The defence would be "hey, everybody knows this is an obvious fake, just as if we'd published an interview with Napoleon". Schumacher has been more-or-less vegetative for ten years since his accident, and if he had made a miraculous recovery then we'd have heard of it somewhere prior to the publication of a full interview.

The counterargument is that not everybody keeps up with the Formula 1 news so they wouldn't necessarily be expected to know that Schumacher hasn't recovered.

Apr 25, 2023·edited Apr 25, 2023

Well, if I saw a paper touting an interview with Schumacher, my first impression would be "my goodness, he's recovered enough over the years to do this? that's amazing!" even though I was aware of his accident. I mightn't expect the interview to be more than "Michael nodded or shook his head", but I would take it as real.

Besides, often "exclusive interview!" is how news of miraculous recoveries or returns of celebrities are communicated.

"We faked this, and what's more we faked it with the current hot popular thing, surprise!" at the very end is not only in bad taste and really objectionable given Schumacher's state, it's definitely bending journalistic ethics (if such a thing even exists anymore), because it's an invented story pretending to be factual. I don't know if they even have the excuse of "this is to prove how AI can be used to create fictitious reality, this is a dangerous thing" to cover their backsides. But it leaves a nasty taste in the mouth.


The other counterargument is that an interview with Napoleon would be less in poor taste.


Yeah, everyone knows this can't really be Napoleon, and there's an already established genre of "writing as if historical figures were alive today". Heck, "Hamilton" is a prime example of that.

"Writing a fake interview with a living person as if they have now recovered their faculties" is not that.


As a front page story, yeah definitely a lie. As an article by itself... yeah probably still a lie. Not only are you spending the entire article tricking people into thinking it's a different story than the one it actually is*, but they're doing it with a guy who has never given any interviews, so you can't even cross-reference old interviews for comparison. Those stories don't say who the writer was, but it's right to fire the editor who approved it.

*You can get away with... half? If they spent half the article pretending it was a real person, then pivoted into "These are all the other AI interviews we could do; can you tell the difference?", they might be able to pull it off. But a blurb at the very end, no; that's where people hide the information they think is damning to their case, after spending the maximum time pretending it doesn't matter.


If a humor magazine like The Onion had done such a thing, with the disclosure at the end, I guess one could make an argument for it.

For a news magazine, even a tabloid like Die Aktuelle, to do it is not OK. The firing was deserved.


Yea, this. It's a fairly staggering lapse of judgement.


Hmm it said the article admitted it was AI generated but only at the end. I agree that that's going a bit over the line into deliberately or recklessly misleading people.


Any particular reason why two of the Anglo countries in the UK and Canada ended up with a quasi-socialist government healthcare program? I'm just curious as I think of the Anglo countries as being generally a bit more free-market friendly than some of the other developed nations, but having a huge chunk of your economy be run by the government is obviously a contradiction. My understanding is that many countries ended up on a blended public/private model, where the government provides a baseline of healthcare, and then you're free to purchase private insurance on top of that (this is what Germany and Australia do, right?) But the UK and Canada, to my knowledge, went all-in on a single payer model. Are there reasons of political economy as to why they did that and say Germany did not?


The U.K. has fully socialist health care; some cosmetic care aside, there's almost no private care. You are wrong about the Anglo countries: they have had large socialist and social democratic movements. Australia has a highly controlled wage system, for instance. The US is the outlier.

Comment deleted

Minimum wage. Google the Fair Work Commission

Comment deleted

Great post Adam. It’s well known in logical circles that if someone says that A is true, it can be refuted by saying that B is true.

So your pithy rebuttal was absolutely spot on target and relevant to my post about Australia. By pointing out that the US also has an incredibly low minimum wage level (of $7.25, not updated since 2009), you have absolutely demolished my claim that Australia has strong wage protections with its minimum wage of (AUS) $21.38, index-linked and updated every July.

Amazing that you managed to do this without even googling the Fair Work Commission.

I stand utterly defeated and withdraw my claim that Australia has strong wage protections.

Comment deleted Apr 26, 2023·edited Apr 26, 2023

In the UK, Clement Attlee’s government instituted the cradle-to-grave welfare state after World War II. And they did it well enough that people liked it, so even Thatcher (mostly) left it alone.

It turns out Attlee didn't have all that much to be modest about.


You are misinformed.

> the government provides a baseline of healthcare, and then you're free to purchase private insurance on top of that

This is exactly how it works in Canada, though most "extended health" plans are only available through one's employer.


I believe that in The Righteous Mind, Jonathan Haidt claims that the left in the Anglo countries is more extreme than Europe, and the left is most extreme in the USA. This is based on the values system that he thinks underpins political positions.

I think that results in the left generally winning fewer battles, but when it does win, it tends to get its preferred version/implementation. This could explain why the Anglosphere is more capitalist than normal, while also having more invasive interventions.


Not sure on the political side, but on the economic side, not all markets are best served by free and open exchange, and healthcare has long been understood as one of these markets. I think Kenneth Arrow was the first guy to really dig into this; look up one of his old books if you want to get into the details. Mostly information asymmetries, although there's more to it than that.

So, for example, the overwhelming majority of patients don't really understand and can't judge what the doctor is doing. If your doctor prescribes drug x, you can do some research but... he's a doctor. Conversely, the doctor really doesn't know your condition or history or, for example, whether you really need those painkillers. And the insurance company that's responsible for paying for all this has only the vaguest idea of what both you and the doctor should be doing. But perfect information is a predicate of perfect markets, so that all actors can make informed decisions, and... we're really far from that.

That's probably not why the UK and Canada ended up with socialist healthcare when other countries didn't but it probably is why they didn't end up with a purely market solution. It's a tough problem that we really don't have a solution to.


Well said. And information asymmetry is only the (very substantial) start. So many market failures in health care! To mention a few: 1) Patient/ consumers are spending somebody else's money. 2) Very little elasticity of demand. (You NEED care no matter how high the price. The high American prices are a form of extortion.) 3) Limited supply side entry (doc licenses etc) gives professionals market power. 4) "Induced demand" -- a nice euphemism for medical providers doing unnecessary procedures for the revenue. 5) Stupid, monopoly-like consolidation giving market power not just among hospitals but increasingly among specialty physician practices. Private equity is rolling up derms, orthopods, urologists etc. and the FTC is clueless because individually the deals are too small for Hart-Scott-Rodino notification filings.

Apr 25, 2023·edited Apr 25, 2023

Most of the failures you refer to are 'induced' and not actually market failures in the sense that economists use the term. Only information asymmetry qualifies as a market failure. Everything else you refer to is either a characteristic of the system (low elasticity of demand) or government-induced inefficiency (spending others' money, licensing, entry barriers, etc.).


Thanks, but I would disagree. I would suggest that quasi-monopoly & market power, a situation pervasive in health care, is classic market failure. What we label the other inefficiencies might not matter given that they're embedded in the system. The bottom line would seem to be the same: health care services aren't a free market and can never be.


Even pure monopolies are not market failures absent enforced entry barriers. Threat of entry would keep prices at close to market clearing levels.

You're confusing what you consider to be market failures within your existing system with what would be market failures within any health system. I'm just trying to clarify this so that you can analyse systems better. A 'free' healthcare market may struggle to clear optimally because of information asymmetry because that is an inherent characteristic of healthcare delivery, but the points you have posted are not the same.


Yeah, on 5) with the consolidation, the industry buzz I heard was that this was being driven by the profit cap on health insurers. So, to make up numbers, if a health insurer makes $10 billion profit and is only legally allowed to make $7 billion, well, you go buy $3 billion of whatever you can in the health care space. Your market cap goes up, the shareholders are happy, and next year you can realize a little more profit.


Yeah that's definitely part of it, I think. Optum has minted $$$ for United. But also people have realized that owning a monopoly where pricing power is unlimited is a very nice business. My impression is that a huge part of the hospital market-power problem is driven by nonprofits. Meanwhile HCA and the other shareholder hospitals skim off the cream and induce extra demand in the already priced-up markets.

Comment deleted

Haha nicely done! But: no barriers to entry, no information asymmetry, no monopolies (except for famines /hoarding). Something different about basic consumption needed by every person every day compared with expensive services needed irregularly & unpredictably. Nobody buys food insurance. Can anybody be a neurosurgeon?

Comment deleted Apr 25, 2023·edited Apr 25, 2023

I think what WoolyAI implied is that America is free-er than say the UK. Agree that USA health care market isn't *anything* like free. Govt intervention from doc credentialing to insurance rules (guaranteed issue, no underwriting) to EMTALA etc etc make it super regulated. Some would say not *nearly* regulated enough. Others would say govt backing off would solve many problems. But the answer to that is WoolyAI's initial point: health care isn't like other products.

Comment deleted

Less of this please.


That doesn't address the argument WoolyAI actually made. Some people argue for public healthcare on the basis of rights, but he didn't.


In the UK at least, you can buy private health insurance, it's just that most people don't. IIRC, a few years ago the system was liberalised slightly in that now you can get NHS treatment for something and pay for additional private treatment which the NHS wouldn't fund; it doesn't have to be either/or. I don't know the details.

So you could argue that the UK is a public/private model, it's just that the public part is good enough for most people.

As for the origin of the NHS, well, you can read the history as well as I can, for example : https://en.wikipedia.org/wiki/History_of_the_National_Health_Service

But in brief, it looks like prior to WWII there were already moves towards a national service, during the war there was a centralised service of some sort due to necessity, and afterward many people thought it was the right thing to do, with characters such as Aneurin Bevan (a left-wing politician) crusading for it.


Yeah, the NHS is very much seen as part of the welfare state built by the Attlee government in the immediate post-war era.


Canada has single payer with (as I understand) private healthcare providers, while the UK actually has nationalized healthcare provision. These seem like very different models to me.

But there are all sorts of reasons why healthcare might be seen as an exceptional part of the economy. Note that criminal justice and education are two other sectors where even the United States has a strong majority of provision directly by the government.


"generally a bit more free-market friendly than some of the other developed nations, but having a huge chunk of your economy be run by the government is obviously a contradiction"

This is more the norm than an exception. I'm always reminded of Singapore, where 80% of the people live in public housing, or the US with its weapons industry. These countries just picked "healthcare" as make-or-break before any other sector.


It's an interesting question - you are not asking why they are different from the US (which is often meme-d as a case of 'life, liberty and the pursuit of happiness' vs 'peace, order and good government').

Rather, you are asking why the UK and Canada are different from other 'public' healthcare options.

Couple of ideas:

-The Canadian left/social democrat wing was influenced and founded by 1900s British immigration, and indeed many of the founders - including Tommy Douglas - were recent immigrants. Not only did they bring the Fabian ideas that led to the UK NHS, but they probably stayed connected to, and inspired by, cross-border networks as things evolved on both sides.

- The Canadian political system is very executive-pilled, allowing governments, both at the provincial and federal level, perhaps more leeway to completely develop or revamp systems. Maybe other countries do not allow this much power, and so they tinkered with existing systems instead.

- Canada is a new-ish country; it's easier to 'found' stuff there than to tinker with centuries-old factions in Europe - though that does not explain why the UK went with the NHS...

That being said, you can indeed buy private health insurance or provide it to employees as a top-up to base healthcare coverage in Canada. Also by province, the system differs. For example, Ontario has doctors as private providers who get billed by public coverage. In QC, doctors are often employees.


Off the cuff I would say:

In Canada’s case some amount of it is it’s unique military situation where it is isolated and under the protection of the hegemon as it’s literal and cultural neighbors. Gives it some extra money to throw around.

In the UK I would say because it was closer to communist revolution than various other Anglo countries and so made larger concessions.


> Gives it some extra money to throw around.

That's a weird take, considering the US government spends more per capita on its healthcare than Canada. They'd save money if they switched to single payer.

Apr 24, 2023·edited Apr 24, 2023

1) Yeah but we didn't start here. You are taking today's situation instead of the situation in 1966-1972 when Canada implemented its system.

2) The Canadian demographics do not match US demographics. US citizens are in worse health and have some deep structural poverty issues that aren't present in Canada in the same way. There is no "deep south" of Canada, nor inner cities in the same sense.

3) On top of that the US system is just a particularly broken mess, since we basically have both systems. Line the pockets of big Pharma, and the private insurers, and medical schools, and doctors, AND also have the government pay for basic care for everyone (often in the most expensive way possible (the ER)).


Am Canadian and have uninformed opinions. We have a very weird country because it's so physically vast. We have these weird transfer payments between the provinces that are probably necessary to smooth out the bumps of being a regionally resource heavy economy. It just kind of feels Canadian to have a social safety net. Saying "the anglo world" is a bit weird because it's really just the USA, Canada, UK, and other commonwealth countries. Australia also has a public single payer system.

Our private insurance is for non-essentials (dental, having a private hospital room instead of being in multi-patient rooms) and prescription drugs; I find it entertaining to think of teeth as "luxury bones."

I don't have answers as to why things shook out like this, but there was this famous Canadian guy Tommy Douglas who was one of the fathers of Canadian healthcare. Maybe it came from being a nation of farmers, relying on neighbours?

It's only weird to have a large amount of the economy run by the government, in the form of an expensive healthcare system, if you forget that the economy exists in service of a bunch of apes who all own bodies that reliably break down at the same rate, with some actuarial wiggle room for bad care. Most Canadians I know, conservative and liberal alike, find the US system laughably inefficient and cruel, with some wiggle room for the occasional gripe wishing we could just pay $150 to get that MRI tomorrow because they scheduled it at the same time as that vacation I already booked.


Don't the bulk of the transfer payments go to Quebec? Feels like the rest of Canada is getting a bit of a raw deal there - and the transfers kind of disincentivize provinces from more efficient provincial government and tax collection.


Ya, that's a common complaint. Looking at the chart on Wikipedia, it looks pretty awful. I'm not sure what value Quebec brings to the country beyond any other province. They have a vast amount of wood and electrical generating capacity, and the St. Lawrence river, our gateway to the ocean, is mostly in their province, so maybe it's a toll on that?

From a naive perspective looking at our equalization payments it feels like Quebec is the "most equal" but I'm not sure what the purpose of this really is. Culturally, Quebec instills an impression of entitlement and belligerence, but I also like the idea of keeping the country intact.


We Quebecers are probably the most peaceable people there are. We're obsessed with getting along with each other and reaching "consensus" (to an extent that is unhealthy). The fact that you anglophone Canadians have managed to convince yourselves that we're entitled and belligerent (not to mention the many other negative things you say about us) is a testimonial to how unreasonable you people are, not us.


It wasn't my intention to say that Quebecers actually are entitled and belligerent. I meant to say that this is the impression many Canadians have regardless of the truth of it, for various reasons, legitimate or not. I don't actually have a strong opinion either way.

On the topic of provincial equalization payments, it's really confusing why a province as large and rich in natural resources as Quebec should require >50% of the equalization payments. Projected 2023/24 payments: Quebec $14,037M out of a total of $23,963M, about 58.6%. I think that needs to be justified, and the government does a poor job of explaining it in reasonable language, probably resulting in bad blood.
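The share is quick to check from the projection figures quoted (a sketch using the numbers above, in millions of dollars):

```python
# Projected 2023/24 equalization payments, in millions of dollars
# (the figures quoted in the comment above).
quebec_payment = 14_037
total_payments = 23_963

share = quebec_payment / total_payments
print(f"Quebec's share of equalization: {share:.1%}")  # → Quebec's share of equalization: 58.6%
```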


Maybe that's the impression many Canadians have, but given that this is a rationalist-adjacent space, I'd expect some sort of analysis about why is it that Canadians think that, and conversely what the perspective of Quebecers might be, and so on. Maybe a steelman of sorts, so to speak. And if you report this supposed impression, even though you claim to have no strong opinion either way, it's because you believe it explains some things. What does it explain? And what did you expect a random Quebecer like me, coming through this thread and reading you, to answer?

As for equalization payments, the reason why Quebec receives the bulk of it is that it is so large (second largest Canadian province in population after Ontario). Per capita, Quebec receives much less than the Atlantic provinces, that tend to be Canada's poorest (and you could also ask the question why that is). Quebec is mostly comparable to Ontario, though somewhat poorer (Ontario sometimes receives equalization payments, and sometimes not), and the current Quebec premier's obsession is to reduce the wealth gap between Quebec and Ontario.


Please be satire... Please


I'm entirely serious. What do you mean?


I'm writing a series on how courts work institutionally, i.e. their administrative practices rather than their doctrines and precedents. Here's the intro piece, which is about why I think that's relevant: https://cebk.substack.com/p/producing-the-body-part-one-of-three

Here's an excerpt:

Luckily, courts provide us with a decent model of how early alignment could work: they don’t control the police, or the prisons, or the budget, nor even the lawyers for either side; further, they don’t seek out any real-world information on their own, and instead rely on what self-interested well-heeled lawyers bring before them (on behalf of clients who necessarily call each other law-breaking liars). Hell, judges don’t even really write rules—they write opinions on how those rules should be applied in a given case—and anyway can only ever hear a tiny fraction of the relevant cases. And yet our whole country binds itself to follow their every ruling, far more than we listen to congress or the president… and even when we drag our feet, or promise that we’ll ignore them, and elect brief waves of politicians who pledge to actually fight back, the country gradually bends toward what its judges decide. And the judges bend towards the interests of the court, rather than petty partisan issues.

I draw two main lessons from this: first, that you can mostly fix inner alignment by incentivizing your subordinates to apply standard established procedures, and to appeal every edge case up a layer, and to look for ever more hard cases to send up the chain; and, second, that you can mostly fix outer alignment if you don’t have to rule on where you’re going until it begins to emerge on the horizon. This is basically what “reinforcement learning from human feedback” (RLHF) means, and it seems to work incredibly well for standard AI systems… at least, if capabilities increase gradually enough that your feedback doesn’t fall behind a backlog of new pressing questions; and if these systems are meaningfully competing against each other for your approval. And I think we have good odds of seeing just such a slow and multipolar takeoff, for two basic reasons: first, because AI progress is clearly governed by simple scaling laws, which require exponential growth in costly inputs like hardware, data, and inference; and, second, because it’s much easier to copy or leak and then run a model than to train it.

Thus we seem headed for a world that favors many similarly powerful models, which split humanity’s computational resources up instead of merging together. In such a world, models can only gain an edge by orienting towards distinct goals. This “fine-tuning” occurs when you train one’s behavior towards particular ends: for example, you can show it secret new data, or pick out specific examples from public datasets, or hire some stand-ins for experts or users to rate its work on example problems; and then you can tell it to mimic these specialized inputs. However, more powerful models can then learn to copy it—without access to its data, raters, or whatnot—if they can figure out which questions to ask it. And so the most relevant skill in our future may well be cultivating our judgment.

This won’t necessarily save us, for the same reason that human competition doesn’t protect animals from our intelligence. But consider how GPT’s breakthrough moment came when OpenAI used a bit of simple human rating to train a small “helpful, harmless, and honest” agent to sit atop its gargantuan base-model and “chat” with users. Consider similarly how humans are born with mere gigabytes worth of DNA, along with petabytes of pretty much randomly-initialized brain capacity: in other words, our heads can store well over a million times as much information as can our genes, and yet the weak long-term nudges of evolution have programmed us with a few simple training functions which pretty effectively keep us in line with our genetic fitness. And so—if we can figure out how judges have come to rule—then there’s hope for us yet.


"Hell, judges don’t even really write rules—they write opinions on how those rules should be applied in a given case—and anyway can only ever hear a tiny fraction of the relevant cases. And yet our whole country binds itself to follow their every ruling, "

But courts historically have indeed written rules, at least in the common law system you are describing. Indeed, for centuries the common law -- i.e., judge-made law -- WAS the law. Statutes were few and far between, and in the US, most state statutes that were eventually enacted simply codified common law rules. So I don't know that the court system necessarily works as an analogy.


Well, but part of the interesting issue there is how precisely common law systems became judge-led, given that these judges provide case-by-case post-hoc review rather than general-purpose proactive orders, and given that they aren't administratively in charge of police, prisons, and so forth. In part two -- which I hope to post tonight or tomorrow -- I go through the history of how, in England, the Court of King's Bench and the Court of Common Pleas pushed the Chancery aside, and thus made England a common law system. That same sort of chancery system generally won out in continental countries, which is why many of them are led by chancellors, and why they're civil law (as opposed to common law) countries. In my opinion, it's a pretty interesting and surprising story, and it seems to me to show the ways that power-seeking judges in our system could most effectively displace the other branches.


And here's the link to part two, which attempts to describe the particular ways that the courts in common law systems grew to have this power: https://cebk.substack.com/p/producing-the-body-part-two-of-three


Here is a prediction.

The "AI" that we have right now is impressive. It has not yet been disruptive. I'm sure it has cost jobs somewhere -- illustrators, coders, and producers of structured writing seem particularly vulnerable. But unemployment is the lowest it's been in a long time, and despite how impressive AI is right now, it's not yet structurally changing the economy.

But tech elites have been shouting about how AI will radically transform the world and so we need UBI or preemptive strikes on GPU clusters or whatever whatever. Major legislation restricting AI will not happen in the US or China in the short term and progress will continue.

The current generation of LLMs will soon reach a ceiling of usefulness -- while different companies will slowly push things a bit further, the basic paradigm we have now will still be state-of-the-art AI in five years. Image and video generation models will improve a bit, deepfakes will become a moderate problem, and higher education will struggle with AI-assisted plagiarism, but there will still be no structural change to the economy, and AI will have little effect on the average human's everyday life. Remember that average humans aren't coders, and AI will have little influence on the lives of construction workers, teachers, restaurant workers, healthcare workers, etc.

In five years the average person will say, "Wow, all that scary talk about AI from a few years ago was way out of proportion; this whole thing seems kind of like NBD." Interest in AI regulation will plummet. But what we're reaching now isn't a permanent plateau, it's just a plateau for the specific machine learning models we're using. New models will come along, and economy-smashing, paradigm-busting AI will change the world in a few short weeks. Tech elites will see it coming with a bit more lead time, but no one will listen to them because they cried wolf before. Something bad happens. My prediction gets hazy here.

1. Where am I wrong?

2. If I'm right (at least about the impending plateau and tech elites wasting trust by crying wolf) what is a better way to communicate about AI? I'm in particular thinking of the articles I see in major publications that get shared on the internet that keep telling me everything will be disrupted. Let's imagine all those people genuinely wanted to do some good with their journalism and not just get clicks. What should they do?


We're less than 6 months into the LLM revolution. It takes time for products to be honed and businesses to adapt. Brick-and-mortar retail didn't really start to suffer until Amazon had been around for 10 or 15 years. In my view, AI is at the same point now that the internet was in 1996. Cool new tricks, obvious potential, but a few years away from broad economic impact. But that broad impact is definitely coming. I think that there are very few knowledge-worker jobs that won't be affected within 5 years - and that's with the technology that *currently* exists. New apps, tweaks, demos, and improvements are getting released every day. Workers will become vastly more productive, many services will become vastly cheaper (imagine a better-than-average lawyer available to anyone for $5 an hour), and then many mediocre workers will be out of jobs (I'd hate to be a Physician's Assistant or paralegal or call center employee). I expect a timescale similar to the internet: five-ish years of "neat-o" new apps with a wave of speculative investment, five-ish years of deep infrastructure shifts to business, and then a decade where whole swaths of jobs start disappearing.

Apr 24, 2023·edited Apr 25, 2023

I think you're wrong that AI of the present type won't have much effect on the average person's everyday life over the next 5 years

Seems to me that there are a lot of pretty skilled jobs that AI might do as well as human beings, or better. Read recently about a study of the accuracy of GPT4 vs. radiologists at diagnosing something or other from lung X-rays (or maybe MRIs). Radiologists were right 80% of the time, GPT 84%. If GPT consulted with a radiologist on images where it was in doubt, their combined accuracy was 86%. So even as of now, it seems like the way to get the most accurate reads, at least in this particular diagnostic situation, would be to just let GPT4 go through all the images first, and only consult the radiologist on the fraction it's in doubt about. Seems likely to me that with further training on image/diagnosis pairs, GPT alone could become so accurate that the radiologist is needed for very little consulting. And of course there are many medical situations where a lot of what the skilled professional does is pattern recognition -- biopsies, for instance, or the question of what possibilities someone's history and cluster of symptoms & test results suggest. Seems like AI really could take over a lot of the skilled judgment calls in many of those medical situations, leaving the professional with much less to do. There are probably a lot of other jobs too where skill in recognizing patterns is a big part of the task. Air traffic control? Meteorology? Crime scenes? Figuring out where to dig or drill in mining and construction? And hey, what about political campaigns?
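The referral scheme described above (model reads everything first, human sees only the doubtful fraction) can be sketched as a toy simulation. All the numbers here are illustrative stand-ins, not the study's:

```python
import random

random.seed(0)

# Toy sketch of model-first triage: the model reads every image and refers
# only its low-confidence fraction to the human expert.
MODEL_ACCURACY_WHEN_CONFIDENT = 0.95  # model accuracy on cases it keeps (assumed)
HUMAN_ACCURACY = 0.80                 # expert accuracy on referred cases (assumed)
REFERRAL_RATE = 0.30                  # fraction of cases the model refers (assumed)

def combined_accuracy(n_cases: int) -> float:
    """Simulate overall accuracy when the model defers uncertain cases."""
    correct = 0
    for _ in range(n_cases):
        if random.random() < REFERRAL_RATE:
            correct += random.random() < HUMAN_ACCURACY             # human reads it
        else:
            correct += random.random() < MODEL_ACCURACY_WHEN_CONFIDENT  # model reads it
    return correct / n_cases

print(f"combined accuracy ≈ {combined_accuracy(100_000):.1%}")
```

With these made-up rates the blend lands around 0.7 × 0.95 + 0.3 × 0.80 ≈ 90%, i.e. better than either reader alone; the interesting real-world question is whether the model's confidence is calibrated enough for the referral split to work.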

While I'm sure people with expertise in these fields will resist being squeezed out, the much lower cost of using AI will probably push things in that direction anyhow. Of course there will still be jobs for professionals with highly developed skills in pattern recognition, but there will be way fewer, and they will involve working with AI -- training it in the pattern recognition, making sure it's staying well-calibrated, troubleshooting, etc.

Also seems like AI could do a lot of what receptionists and office workers do -- handling schedules, billing, inventory, customer queries. Seems like in a hardware store, for instance, AI with a little specialized training in hardware store items and the inventory of the store it works for could do an OK job of advising me on what kind of house paint to use, what the ladder options are, where in the store I can find caulking, etc. Again, you'd need *some* staff, but fewer, and those there would need to know how to work with AI.

I also think AI will make possible a lot of new kinds of crime and misrepresentation and manipulation. At present, a 3-second speech sample is enough to enable AI to produce a pretty convincing imitation of a speaker. I don't know as much about visual fakes, but I believe it's possible to start with a still of someone's face and then have AI animate the still using someone else's facial expressions and lip movements during speech. Think about how many situations there are where recognizing someone's face and/or voice counts as authentication. Everything from "Mom, I forgot the code for the debit card" on up to conversations between people with decision-making power over great big things. And that's just one criminal use of AI, but I'm outta time.


Heh. Yeah I get it. And even if a 2023 attempt at something similar was done using the greatly superior GPT4 and without any of the errors IBM made regarding Watson, there would be a huge push-back from doctors. That profession seems especially hard to challenge. A lot of doctors are encased in a shell of combined entitlement and self-pity so thick that they're deaf as posts to many communications. Still, the economic advantages of substituting AI for some of the medical professionals seem so huge that I'm inclined to think a lot of docs would get squeezed out by the weight of a lot of Ben Franklins.


I think your economic case may be overstated. IBM spent $4 billion on Watson Health, according to the Times. They didn't skimp on investment. And the errors were logical errors, complex errors, deep strategic errors, the kinds of things that can't really be solved by just throwing a shit-ton more data at the initial training. So we can regard that $4 billion as merely a down-payment on something that could actually...replace an $80,000/year radiology tech who's going to pre-screen MRIs and decide which to refer to the radiologist for final determination. Even if the marginal cost of the AI is zero -- so we're ignoring ongoing operating, maintenance, upgrade, and bugfix costs, which seems dubious -- it's going to take a long, long time before it becomes more profitable than hiring the cheap humans.


AI didn't go up against a radiology tech, it went up against a radiologist. They're MDs. Just googled average salary; the answer for the Boston area, where Google knows I reside, was $400-$500K. And it does seem to me that IBM made some really bad errors in marketing Watson. It wasn't honest about Watson's actual capabilities -- so of course hospitals were indignant when they discovered that Watson's accuracy about various things was much lower than advertised. I know GPT isn't perfect, but it's far better than Watson. Seems to me its actual capabilities, unhyped, would be good enough for it to be advantageous to have some stuff now done by docs done by GPT instead. However, I am not smart about how things play out in business, so perhaps hospitals would just refuse to see the advantages of AI, or the cost of maintaining AI would be so great that it worked out better to just keep the radiologists on staff.

So you really think not much will change in 5 yrs.?


Yes, I'm sure if the OpenAI people want to take a crack at using AI in medicine, they will carefully avoid all of IBM's errors -- and make fresh new errors of their own. It would be naive in the extreme to assume that IBM successfully plumbed the entire well of error, such that none remain to be discovered by the next people to try.

Anyway, this isn't new. So far as I know people have thought of using machine vision and pattern matching to do medical diagnosis since forever. At least 50 years. Progress to date has been pretty much zip. Your X-rays are still read by a human being, and a pathologist still looks at the slides from biopsies, and nobody replaces the triage nurse in the ER with a kiosk at which you answer a bunch of questions and get sorted -- and it's not because nobody who understands programming has thought of replacing any of those expensive people (at least on the front end) with programs. They have, and have done their best, and failed pretty much across the board. If there was something genuinely and significantly novel about *this* version of machine learning, I'd see a reason to hope, but I haven't heard of any breakthrough in theory or methods that would justify that.

Anyway, it seems to me you're in a position to test your own hypothesis. Have some patients converse with one of the chatbots, and ask the chatbot for a diagnosis. Repeat a few dozen times, and compare to your own judgment. Or describe patients to a chatbot and ask for a prognosis, or prediction of what might happen. Write down your own predictions, and then compare both to reality. Then you'll have empirical data.


Hey....remember in the 80's when a personal computer in every home was a revolutionary idea that would change the world? And then it kind of did? Or when the internet came out and was gonna change everything? And then it did?

Like, hey, we all live radically different lives from people in the 1980's, from work to leisure, just based on everyone having access to computers, smartphones, and the internet. I think some of the rhetoric is overblown because, well, it sells, but we do live through, like, real revolutions.

I expect LLMs, as they currently are, to have at least the impact of smartphones and I think it's fair to call that a revolution.


I agree with you, WoolyAl. I believe that the quality of one's moment-to-moment experience varies a lot depending on how understandable, in a simple sensory way, the setting is. If you call a friend on a land line, as we used to do, and reach them (remember "oh good, they're home!"?), ask if they'd like to get together, and later walk or drive over and see them, the experience is different from talking via FaceTime, even if the exact same things are said. Many of my memories from pre-Screenland days include the weather, the light, the smells, the food -- which in a way are irrelevant to the talk, the falling in love, the argument or whatever -- but to the actual physical walking, sniffing, weather-suffering me of that era were part of the experience.

I have a theory that quality of life is affected by how much around us we understand and could duplicate. For instance, I don't understand much about how my car works -- just that gas powers the engine, the engine makes the wheels turn, and I control the thing with the steering wheel and the brake. If I were transported to a world without cars I could never tell the engineers there how to make one. But a buggy pulled by a horse I at least understand, even if I have no skills in managing horses or designing good buggies. I probably know enough about how that form of transport works to explain it to engineers well enough to enable them to work out the details. So I think our experience of life varies in an important way depending on how much of the stuff in our life we understand.

I am smart and pretty well-educated, though not in a tech field, and I keep reading and reading about LLMs. I understand how they are trained. I do not understand how that leads to some of the emergent capabilities. I'm not even sure the developers do. I am very creeped out by the idea of living in a setting where there's a lot of AI involved. Ugh.

Apr 24, 2023·edited Apr 24, 2023

Say what? I don't see my life as radically different than it was in 1985. I mean, except for the back pain on waking up that never quite seems to go away, and the much poorer night vision.

I mean, I do my taxes via TurboTax I download from Amazon instead of trundling over to the Post Office to pick up some 1040s and Schedule As, and that saves me about 50% in time, so that's nice. But it's not like I don't have to pay taxes, or think about tax returns. Same in other areas. I can look stuff up on the Internet instead of going to the library, so I'm saving a bit on time and gasoline, but I have to pay for it other ways -- the Internet drags me into more useless work meetings, and there's just a crapton of ephemera that rains on my awareness daily, so I have to spend more time sorting what's important (almost nothing) from what is the daily hysteria. There was less of that 40 years ago -- a lot more got sorted by the awareness of other people before it got to me -- so the Internet giveth and the Internet taketh away. On balance it's a plus, but again no radical changes as far as I can see.

How is your life radically different from what it was (or might have been) 40 years ago?

Edit: I see below you adduce people spending a lot of time on smartphones. Well, what are they *doing* on the smartphones? My guess is: not learning special relativity or Serbo-Croatian, or writing amateur anthropology articles, but rather just dicking around interacting with other people, gossiping, listening to/telling stories, making connections, talking about random stuff, keeping track of who's doing what to whom.

But people did just as much of that 40 years ago. They just didn't do it with a silicon handheld, they mostly did it in person, or via phone, or TV, or hanging out at the mall, or visiting, et cetera. I mean, nowadays you can do it with people who aren't near you geographically, and asynchronously, and with much faster turnaround, but that seems a minor style change on the overall human/chimp/hominid tendency to spend a big chunk of the day sitting around in a circle scratching yourself, hooting and farting and saying "You know, Sid, I really like bananas."


I am not living a radically different life than I was in the 1980s. I have more conveniences, and I've forgotten telephone numbers and can't navigate my city by memory as well, but the things I do and the people I do them with are pretty much the same.


I kind of feel that, by this analysis, no change has happened since the stone age.


The core human needs -- food, clothing, shelter, companionship -- have changed a lot since the stone age, but not a lot since the 1980s. *How* we do things has changed in some areas, particularly the companionship aspects, but those are not that big a deal compared to having steady food and amenities like electricity and indoor plumbing.

Apr 24, 2023·edited Apr 24, 2023

It wasn't meant as a serious comment, more just saying things are VERY different from the 1980s.


But that's the point, they're not!

WWI and WWII were really big changes - both during and after. The Cold War resulted in big changes to how most of the people in the world lived their lives. The Industrial Revolution, cars, planes - these all had major effects on the daily lives of almost all people (transporting goods if not the people themselves). Compared to any or all of those it's hard to look at the Internet and say that it changed our lives more. Maybe a good bit, but that's pretty normal - we always change how we live, especially in the last 300 years. Consider Amazon.com - huge, right? No, it's a small incremental change. We've had mail-order catalogs for going on 200 years. Transportation systems that permitted goods to be brought into new markets were the big change. Amazon has used its larger market share to push for innovation in transportation, but that kind of innovation is *tiny* compared to what happened in the 19th century.

If your argument is that computers and the Internet are meaningful changes in line with other changes since about 1700, sure. If your argument is that computers and the Internet are bigger changes, or fundamentally altered how people live, then I think you have too myopic a view of human history.


Is your lifestyle radically different (assuming you were an independently living adult then)? And is it because of tech, or because you've had 40+ years of earning and compound interest?


My lifestyle today has fewer wild animals trying to eat me than I imagine I would have had to worry about. Enough fewer, and with better tools to deal with the ones still present, that I would consider my lifestyle radically changed. And of course there's the whole not spending my time procuring my own food -- the tech that allows me to live as neither a hunter-gatherer nor a farmer is most definitely a radical change of lifestyle.

Ditto hormonal birth control.

Ditto press gangs/universal conscription.

Heck, even basic (post-antibiotics) medical care means I can live a lifestyle where minor injuries are NBD. I can do things like recreational climbing.

And of course, air conditioning letting me live where I don't have to be concerned about the season or time of day...


...Dude, how much time do you spend looking at a computer or smartphone?

Seriously, the average American spends 5-6 hours a day on their smartphone.(1) That's about a third of their waking hours interacting with a device that didn't exist fifteen years ago, and none of its predecessor technologies were widespread 30 years ago. Maybe that's not you, I'll take your word for it, but... you are arguing with a stranger on a weird niche blog.
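The "about a third" fraction checks out as quick arithmetic (a sketch assuming roughly 16 waking hours a day and the midpoint of the 5-6 hour figure cited):

```python
# Quick check of "about a third of waking hours" from the statistic above.
hours_on_phone = 5.5   # midpoint of the cited 5-6 hours/day
waking_hours = 16.0    # assumed waking hours per day
print(f"{hours_on_phone / waking_hours:.0%} of waking hours")  # → 34% of waking hours
```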

Which is kind of my point, in that radical change doesn't often emotionally feel like radical change. The world can, empirically, be totally different without feeling totally different. I mean, hey, I remember AOL. The internet felt niche, it took a while, and then 10 years later it was everywhere, and yet it doesn't feel revolutionary now and it... kinda didn't then. Honestly it felt like "Woah, porn!" and then it was normal.

(1) https://www.statista.com/statistics/1224510/time-spent-per-day-on-smartphone-us/


And for you young whippersnappers out there -- this thing we're doing right now? It existed in the Before Time, usually called the "letters to the editor" section. Admittedly, the lag sucked and the moderation was much more intense, but again it's a difference of convenience, not of type.


Sure, it might seem like a machinegun is a revolutionary change in warfare but really, it's just a heavier rifle with significantly decreased lag time between bullets.

Sure, it might seem like democracy is a revolutionary change from monarchy but really it's just a more responsive community moderation system with a much longer lag time between legislative updates.

Sure, it might seem like fire is a revolutionary change from nothing but it's really just a more convenient sun that's available at night.

I mean, I'm sorry, are we just arguing words here? I mean, if the internet is minor change, then sure, LLMs will be a minor change, roughly comparable in scope with the internet.

:)


Can you give an example of a lifestyle change that is as different as infantry tactics c. 1900 and c. 1950?


Six hours a day on a phone? Exactly how is that radically different than watching six hours a day of television, or newspapers/blogs/magazines?

Is a clerical worker spending all day in front of a screen radically different from one spending their time working on paper?


Are you asking what the difference is between a smartphone and a television?


No.


I think we'll need more time to see how disruptive current GPT-style models really are. Microsoft has announced Copilot for their office suite, but they haven't released it yet. Similarly, while a lot of companies have shown interest in utilizing ChatGPT, currently the only way to do so is to have your developers work with the API, which requires a certain level of investment without clear pay-off. We'll have to wait to see how effective the middleman businesses are at building products based on ChatGPT. For example, I've noticed many marketing service providers (e.g. TripleWhale, Iterable, etc.) are all building/deploying AI products that can automate aspects of digital marketing. These are the real opportunities for economic 'disruption' IMO.

As far as tech communication. Currently I think tech has a 'hype' problem that's a by-product of the Cryptocurrency/Web 3.0/NFT cycles. There's almost a tech hype-generating ecosystem that's come into existence as a result of individuals wanting to sell shitcoins/nfts. However, I will say that even in the 5 months ChatGPT has been out, there have been far more convincing demonstrations of its capabilities than there were in the 5+ years people were shilling for crypto.

Expand full comment

You're probably right that there's a "performance plateau" for LLMs, and we'll reach it within a few years. However, you might be underestimating how high that level of plateaued performance is. The amount of remaining, unused high-quality training data plus all the programming tweaks that haven't been implemented yet to incrementally improve performance could get us to the equivalent of GPT-6 before hitting the plateau. I think it would be, among other things, a machine capable of passing most types of the Turing Test, and even once it plateaued, it would take us years to map the extent of its capabilities and apply it to every type of work where it was economical to do so.

GPT-6 might not be powerful enough to structurally change the economy or violently take over the world, but it could cause major job dislocations and automate warfare and crime to such an extent that we'd have totally new things to worry about, like an average citizen being able to buy a drone and send it off to assassinate a politician or ex-wife several miles away.

Expand full comment

It's not possible to change the economy in a few short weeks. It's been weeks since ChatGPT came out, and C-suite types are still casting about trying to work out how to make use of it. The effects of a new general purpose technology* used to take decades to diffuse through the economy; now, still at least years.

Likewise with war. A few short weeks is too short a time for something bad to happen.

The kerfuffle is because the jobs most obviously at risk are those of journalists, especially the ones who are more interested in producing stories than in careful research--which is nearly all of them. _Their_ jobs are at risk, so obviously everyone's are.

* The original meaning of the abbreviation GPT. Not "generative pre-trained transformer", which sounds heavily backronymed to me, like "javascript" piggybacking on java despite having nothing whatsoever to do with java.

Expand full comment

I think you're exactly right about the fact that it's the chattering class whose jobs are most personally threatened -- anyone who wordsmiths for a living is in the situation of John Henry contemplating the steam drill -- and therefore why we hear so much about it.

But otherwise...I can see millions of plumbers, electricians, HVAC guys, nurses, gardeners, airplane mechanics, cops, firemen, cooks, longshoremen, organic chemists, neurosurgeons, and military officers scratching their heads over the fuss. A chatbot is going to do MY job? Er...seems kind of doubty...

Expand full comment

A data-free impression I have is that in the service economy a large fraction of work boils down to writing reports. The other jobs all have a component that involves arranging symbols to convey meaning. Taking that away seems like a big deal.

Expand full comment

May I ask why you think we are very close to a plateau? State-of-the-art models today in a variety of domains are noticeably better than they were a year ago, which were noticeably better than they were the previous year, and the year before that. I would expect the trend to continue for at least a few years, such that models in five years are noticeably better than they are now.

Scaling laws, which show how model performance corresponds to model size and dataset size, appear to still be holding, and the scaling laws predict further improvement if we continue to up the model size and dataset size.
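
For a sense of what such a scaling law looks like, here is a minimal sketch of the Chinchilla-style loss formula, L(N, D) = E + A/N^α + B/D^β, where N is parameter count and D is training tokens. The constants are close to the published Hoffmann et al. fit, but treat the exact numbers as illustrative rather than authoritative:

```python
def predicted_loss(n_params, n_tokens,
                   e=1.69, a=406.4, b=410.7, alpha=0.34, beta=0.28):
    """Chinchilla-style scaling law: an irreducible loss term plus terms
    that shrink as parameter count and training-token count grow."""
    return e + a / n_params**alpha + b / n_tokens**beta

# A larger model trained on more data is predicted to reach lower loss.
small = predicted_loss(1e9, 2e10)     # ~1B params, ~20B tokens
large = predicted_loss(7e10, 1.4e12)  # ~70B params, ~1.4T tokens
print(small > large)  # True: scaling up still predicts improvement
```

The point of the formula is the commenter's: as long as the fitted curve keeps holding, more parameters and more data buy a predictable reduction in loss.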

There’s also a lot of low hanging fruit in terms of implementation. We are in the very early days in terms of prompt engineering, optimally storing machine-usable information, finding the best places to combine AI calls, and finding the best ways to integrate AI calls and humans.

Expand full comment

This is a decent explanation: https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/

There's not much more data to feed the current generation of models. That specific scaling law will almost certainly break down soon. That isn't a guarantee we will hit a plateau, but that's where my hypothesis comes from.

Expand full comment

Good point on model size. I suppose it’s unclear why Sam Altman implied they aren’t making models any bigger. He never actually said the scaling laws broke down (and the gpt-4 technical report plot at least implies it hasn’t), but perhaps there are more cost-efficient ways of improving performance, or maybe OpenAI just doesn’t have the capital to train a much bigger model. (GPT-4 cost over $100 million to train, potentially substantially more.) I don’t think the interview is definitive proof that much bigger models won’t be made (Google for example could probably afford to pour 1 billion dollars into a model, and hardware is constantly getting better), although it is decent evidence it won’t happen in the immediate future.

On the data front, the chief scientist of OpenAI does not seem worried we are close to running out (ctrl+f "reasoning tokens"): https://www.dwarkeshpatel.com/p/ilya-sutskever#details

Expand full comment
founding

If training GPT-4 required $100 million and 100% of the publicly available text on the whole of the internet, then OpenAI et al are going to have to come up with a whole lot more than a gigabuck to get the next order of magnitude of improvement.

Expand full comment

I read that the next training is going to use all the audio available via podcasts, youtube, radio etc., turned into text to be gobbled by GPT. That's a lot of words. Also read that some AI is able to produce speech, the way it does to prompts, then determine which of its samples constitute good training material and which do not. You wouldn't think that would work -- but according to the very brief account I read, it does.

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

The guy in charge of developing GPT-4 seems to think there’s lots of room to go before they run out of data. And I imagine Google has access to even more data than OpenAI does.

By some estimates, the training cost of GPT-3 in 2020 was $5 million, which was trained two years before the training of GPT-4 cost over $100 million. If it takes an additional 20x cost increase over the same time gap to get a similar performance increase, then such a model trained in 2025 would cost $2 billion. This is a lot of money but feasible for the largest tech companies/governments if they see the value. Waiting a few more years to train the model should see the cost of training drop substantially due to increased hardware efficiency.
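
The extrapolation in that paragraph is simple compounding. As a sketch, using the comment's own rough cost estimates (which are not official figures):

```python
gpt3_cost = 5e6    # rough 2020 training-cost estimate for GPT-3
gpt4_cost = 100e6  # "over $100 million" for GPT-4, two years later
growth_per_generation = gpt4_cost / gpt3_cost  # 20x over a similar time gap
next_gen_cost = gpt4_cost * growth_per_generation
print(f"~${next_gen_cost / 1e9:.0f} billion for a 2025-era model")
```

That is, another 20x step from $100 million lands at roughly $2 billion, which is the comment's figure.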

Expand full comment
Apr 25, 2023·edited Apr 25, 2023

You know, with respect to the plausibility of people with $2 billion in capital to spend just handing it over these days to Silicon Valley types who say "This will be so incredibly cool! You can have a conversation with it that's eerily like with your average human being[1]! No, we don't exactly precisely have a business plan for how we give you a 6% ROI on your money -- we were thinking of giving it away at first for free, to build buzz, and then..I dunno, monetize it somehow...inline ads maybe..." --- it might be that the latter are like Wile E. Coyote having run furiously off the cliff and not quite realizing that times have changed, and they're now hanging out in mid-air, a second or so before gravity asserts itself.

--------------

[1] I mean, you can hire one hell of a lot of ordinary people who can write competent text for $2 billion, even without outsourcing to India or China.

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

No, see, it's like a bootstrap/singularity thing. You set GPT-4 to the task of creating a corpus of writing that equals the size of the text on the whole of the Internet. Since it's a computer, it can do this in a few weeks. Then you can train GPT-5 on *twice* the size of the whole Internet, and set it to producing a corpus *four* times as big in half the time, and so forth. Pretty soon it's Shakespeare merged with Einstein!

Expand full comment
founding

Funny, but also points to another weakness in the GPT path to AGI: If you need to uncritically scour basically the whole public internet to get your training data, then as soon as you release a semi-useful toy AI, people will start using it and the data set will become contaminated. What happens when the next Large Language Model is taught in part, eventually in large part, to "predict" the outputs of the last Large Language Models?

Expand full comment

What are the odds that in a few years there will be huge lawsuits against the major sports leagues in the US for promoting gambling? It seems inevitable that in a few years we’ll start getting stories about how thousands of lives have been ruined by sports gambling addiction--lives of people who weren’t gamblers before the promotion of gambling during games. Will the public be outraged after The New Yorker and 60 Minutes run their teary stories?

My guess is yes, because people who gamble online on games will fall into the following categories:

1) People who aren’t problem gamblers, who gamble for entertainment, who don’t mind paying a few hundred dollars a year to do it.

2) People who aren’t problem gamblers, who gamble for a while but quit after they realize they’ve lost more money than they are happy with.

3) Gambling addicts who lose their homes and their families.

I suspect groups 1 and 2 will be about equal in size, which would mean the majority of people who gambled on games won’t be happy they did. I also suspect about 10% of American adults will fall into one of the above categories, meaning a great number of people will rue that gambling by smart phone is even a thing.

Why will this be so different from casino gambling, which has been legal in most states for a good number of years now? Because everyone is now walking around with a casino in their pockets at all times. If you have a smart phone, and you do, you are in a casino. If you watch sports, and maybe you do (There’s at least a 2% chance you do if you read ACX), then you are barraged with pitches by ex-big leaguers to gamble now on the game you are watching. The upshot is that the amount of gambling in the US is going to what... double? Triple?

Could all the newly minted problem gamblers sue MLB, The NBA and The NFL for promoting something that is now perfectly legal? I’m no lawyer, but it seems like all one needs to win a huge lawsuit is for public opinion to be on your side.

Expand full comment

Under what legal statute or theory would one bring this law suit? Surely the same theory could be used to bring a lawsuit against theses same leagues for having alcohol companies as sponsors, but I haven't seen any lawsuits alleging these sponsors cause alcohol addictions.

Expand full comment

A weak legal theory that nevertheless works because public opinion is so in favor of it. Take the tobacco lawsuits of the 1990s. Per GPT: "The lawsuits alleged that tobacco companies had engaged in fraudulent and deceptive marketing practices, and that they had known for decades about the harmful health effects of their products but had concealed this information from the public."

That's absurd! In 1964, The Surgeon General, in a widely publicized report, declared smoking to be dangerous and cause cancer. In 1965, the US mandated warning labels on cigarette packages to say "Cigarettes may be hazardous to your health." In the mid-80's, the warnings were made more explicit, saying "Smoking can cause lung cancer, heart disease and other illnesses." Yet, according to the litigators in the mid-90s, the tobacco companies had somehow *concealed information* from the public that cigarettes were harmful to one's health? It had been saying that right on the label for 30 years!

It didn't matter, because Big Tobacco had been vilified and found guilty in the court of public opinion. I agree that advertising alcohol is no different from advertising gambling in terms of promoting vices. But such things aren't determined by logic, they are determined by public opinion.

Going back to the cigarette story: In the late 80s and early 90s, society had basically decided it wanted to stamp out the popularity of smoking. For the first time you got bans on smoking cigarettes in planes, in office buildings, in restaurants and even in bars. It's fair to say public sentiment had turned against cigarette smoking. Only then did you get those huge lawsuits. Not in the 1970s when cigarette smoking was still very popular.

There's no crusade against drinking alcohol in US society. If there were, then yes, I'd expect any company having anything to do with drinking alcohol to get sued. Nor is there a current crusade against gambling. I suspect there will be in a few years, because I predict that the incessant promotion of gambling during televised sporting events combined with the ease of gambling on smart phones will lead to something the media will label a "gambling addiction crisis" eventually. That's when we get calls to roll back online sports gambling--maybe it will take a decade or more. Then the environment will be ripe to sue the pants off of everyone that got their hands dirty, and whatever legal theory is used will be window dressing.

Expand full comment

The more extreme version of the problem is to look not at how big groups 1, 2, and 3 are but at what fraction of gambling dollars they represent. If group 3 is a minority of the population and a majority of dollars, sooner or later the antisymmetric blades of electoralism and financial reality will scissor the industry unless it can pay off an awful lot of regulators.

Expand full comment

Anecdata point: prior to NY legalizing sports betting, the airwaves were full of ads and editorials about the evils of Gambling Addiction. There is still a remnant of Puritanism on the East Coast.

Part of the legalizing process was establishing a tax to fund gambling addiction treatment programs. I don't know whether that is another sign of Puritanism or just the centuries-long tradition of graft here.

Expand full comment

100% think there are going to be lawsuits, especially related to teenagers gambling on the app. The gambling business is a lot like other vice businesses, in that the industry ultimately runs on addicts/"whales" who spend tons of money - and online sports betting makes it a lot easier for them to hook new whales out of the vast player base of people who won't get addicted.

Expand full comment

I think it's unlikely. I don't think there's really any subset of the population that has been newly introduced to gambling as of late, it's been gradually ramped up for decades. State governments literally get away with advertising lottery tickets on the radio.

Expand full comment

It’s difficult enough to sue manufacturers of drugs or weapons for the harms caused by the abuse of their products - it’ll be harder to sue for the sort of harm that people are used to blaming on the victim.

Expand full comment

Unless you somehow manage to become a deep-pocketed scapegoat for a huge societal problem like the Sackler family.

Expand full comment

Hardly scapegoats. Opioids were pushed. Quite a few more people should be in jail.

Expand full comment

Opioids were pushed because they're an excellent drug when used for the appropriate purpose. The FDA approved them.

It's not the drug company's fault if people misuse their product, that's the fault of (a) the people who choose to misuse the product despite knowing it's illegal to do so and (b) the police, courts, and prison system that in the US has completely given up on enforcing drug laws to stop people from misusing otherwise-beneficial drugs.

Expand full comment

Illegal? OxyContin was a prescription drug. You admitted that in the first paragraph.

Expand full comment
Apr 25, 2023·edited Apr 25, 2023

Right, and it doesn't cause problems as long as it's used correctly, with a prescription, for something that it should have been prescribed for.

The people who are abusing it either don't have prescriptions or have illegally-obtained prescriptions.

Expand full comment

I don't see why the leagues would get sued rather than the gambling companies themselves. Is there a precedent for, say, horse owners or racetracks getting sued because people lost money at the races?

But I would not be surprised if online sports betting gets regulated in some way.

Expand full comment

The precedents I have in mind are states suing tobacco companies and opioid manufacturers and distributors. I believe their arguments were that those companies "withheld information" or something. But I don't believe those cases were won (or settled or whatever) due to those arguments; they were won in the court of public opinion because there were so many victims and the big companies had been vilified. The thinnest evidence of malfeasance was enough to make the case. If all the internal communications inside MLB, The NFL and The NBA regarding gambling promotion were subpoenaed, what are the odds they could find one communication somewhere that says something like "I'm not sure this is wise. What if we create a bunch of gambling addicts?" or even just "Aren't we taking a huge risk getting into the gambling business?" Litigator: "What kind of risk did you mean? You had no money to lose! You were just taking a cut of the proceeds! You were playing with house money! You must have meant some other kind of risk. What risk would that be?"

I agree the gambling companies might get sued also. I believe Texas is already suing Draft Kings, arguing that online gambling isn't legal in Texas. But the leagues themselves are allowing it, promoting it, and taking a cut of the action. They have money. Why wouldn't someone go after them?

Expand full comment

The NFL is popular enough to get away with giving its players brain damage. I would be extremely surprised if they or any other major league faced any serious consequences from promoting gambling.

Expand full comment

Comparative linguistics suggests a good reason why civilizations might be 12K years old

https://vectors.substack.com/p/the-unreasonable-effectiveness-of

"If self-awareness emerged recently, this should show up in comparative linguistics..."

...

"Julian Jaynes should have fixed his date for the origin of consciousness to the origin of pronouns."

Expand full comment

Can communicate clearly without pronouns. Understand this? ;-)

Confusing self-awareness with pronouns is an example of the old reference-vs-referent problem.

Expand full comment

I usually just point to myself and then to the tap for the pint of beer I want. Okay, not true, they just start drawing a Hamms for me when I walk in.

Expand full comment

Ha!

Expand full comment

Okay. I have to actually sit down at the bar and nod when they say, “Hamms?”

Expand full comment

And conversely, first-person pronouns have been also used to refer to inanimate objects for at least 2600 years... https://en.wiktionary.org/wiki/med#Pronoun

Expand full comment

The implication of so many of the world's languages sharing a very similar 'na', 'ni', 'n...' sound for first-person singular (and other pronoun similarities), while other language components are so different, is that this self-awareness construct got bolted onto existing languages.

Expand full comment

I think we had comments on this on another thread and someone said that the "na/n" stuff was cherry-picked from certain language families. Not every one of them does it.

And anything that mentions Julian Jaynes automatically gets marked down ten points by me.

Expand full comment

"anything that mentions Julian Jaynes automatically gets marked down ten points by me."

Very much the same. It's a matter of constant amazement to me that self-described rationalists give this obvious, ludicrous crank any time of day at all. It's like hearing them take homeopathy or flat-eartherism seriously. It doesn't even make for a funny joke, it just makes you look like you can't find your own ass with both hands.

Expand full comment

Or the fact that the first words we learn are constrained by the limited ability of the infant vocal tract (and motor cortex) to create them. I mean, I tend to think the word for "mother" in almost all languages begins with "muh" not because of profound linguistic or neural evo psych reasons -- but just because in practice that's about the first sound a baby can deliberately make.

Expand full comment

This is discussed in the linked article.

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

Great, but I stopped reading as soon as the author said the use of the first person singular pronoun was necessarily linked to consciousness. As an empiricist I get rapidly turned off by grandiose assumptions like that right up front. Even corporations, to which nobody dreams of imputing consciousness, use the first person pronoun, for convenience, and Marines can communicate quite effectively without using "I" at all. I see no reason to think that introspection is the driving force for a first-person pronoun which is sufficiently compelling that we can just make this an axiom, when it seems readily apparent that a first-person pronoun is quite convenient for communication in general. I daresay one of my dog's noises is equivalent to "I" as in "I want to go out, dummy, open the door."

Expand full comment

I see your corporations and raise you brooches and vases

https://en.wiktionary.org/wiki/med#Pronoun

Expand full comment

I haven't read the article (already saw it earlier in the week and it wasn't enough for me to stick with) but going just on what you have quoted: why does self awareness necessarily have to mean complex civilization? It seems pretty reasonable to me that you could have millennia of self aware hunter-gatherers.

Expand full comment

Correlation is not causation. It might just be that self-awareness happened at about the same time as civilizations.

Civilizations require a non-trivial number of people living in the same place, which implies some population density over a big enough area.

Were self-awareness and civilizations kick-started by increased population density?

Expand full comment

Yet another industry being destroyed by AI: writing essays for unethical college students

https://restofworld.org/2023/chatgpt-taking-kenya-ghostwriters-jobs/

> Brian agrees that lately, work has been slow. “When I started last year, as a literal amateur, in the first month, I did 30 assignments,” he said. “As I got better, I was doing up to 60 assignments a month. The most I made in a month last year was 40,000 Kenyan shillings [$296].” In March, Brian got barely 10 assignments. “It’s not that I haven’t been looking [for gigs]. I didn’t even hit 10,000 Kenyan shillings [$74],” he said.

Expand full comment

Some here might be interested in David Chalmers's paper "Could a Large Language Model be Conscious?" (https://philpapers.org/archive/CHACAL-3.pdf). This was a talk he gave shortly before GPT-4 came out, and the benchmarks he discusses are quite relevant.

Expand full comment

It's weird that, of all the things he discusses, the most obvious feature of consciousness seems never to have been mentioned: spontaneity. No system that *never* does anything except in response to stimulus can possibly be conscious -- it can't even be alive. The most basic characteristic of life (and of conscious life) is that it *initiates* action. It doesn't just sit there indefinitely doing nothing at all. It initiates interaction with its surroundings, because it has purpose -- intent -- even if only genetically programmed.

That's why we can tell a paramecium is alive and a car is not. The paramecium swims around, looks for food. The car just sits there and does absolutely nothing until you turn the key and operate the controls.

Similarly, I can tell a human is conscious (as opposed to unconscious, asleep or comatose) because they initiate action (and conversation of course). They exhibit intent, they don't just wait to react to stimuli and otherwise seem dead.

Expand full comment
Comment deleted
Expand full comment

'Initiating action' does not necessarily mean 'initiating macroscopically observable physical action.' The action relevant to consciousness is internal neurochemical and electrical action, i.e. state changes. Because we can observe an LLM's internal state perfectly, we can say with absolute certainty that its state does not change and that it therefore cannot be conscious.

Expand full comment
Apr 26, 2023·edited Apr 26, 2023
Comment deleted
Expand full comment

You don't understand the basic argument that's being made. State change is a necessary but insufficient condition for consciousness. Not-conscious things changing state is not evidence against that. You should refresh your understanding of propositional logic.

Expand full comment

Oh nonsense. Your exception proves the rule, not the reverse. We are *surprised* that people in comas have interior experiences when they seem to be in a PVS. It's not what we expect -- which tells you the rule right there.

What you would need to argue the contrary is a wide swathe of examples of things that are *normally* and *always* completely passive until they receive external stimuli of a certain specific sort, to which they have a limited response, and then subside back into inaction indefinitely. And which we consider alive and conscious. By all means, name such classes of creatures, if you can.

Expand full comment
Comment deleted
Expand full comment

I don't need to prove consciousness can't exist in flesh and blood. It's the contrary hypothesis, that it can or does exist in a computer program, that needs evidence. Generally, the way empiricism works -- and I am a confirmed empiricist -- is that it's the hypothesis that something new exists which needs evidence. I'm not interested in a philosophy that believes we need proof to rule out any old random hypothesis. That's just childish naivete, which I don't find functional.

Also, it doesn't seem you understand the nature of proof as we empiricists use it. A single counter-example only disproves categoricals. Had I said "everything conscious must always and at all times exhibit initiative" then you could indeed say "aha! people are sometimes asleep or unconscious" and you would have disproved my categorical.

But of course I didn't say that, because it would be silly, and I am not silly. If you want my statement as a categorical, it would be "nothing that is conscious never exhibits initiative." So any human being who *ever* exhibits initiative -- who is ever at least once not asleep nor comatose, and does something original and unprompted -- passes my categorical. Contrariwise, so far no AI passes it.

From this point of view, if you want to contradict my categorical, you need an example of a creature (or computer program, et cetera) which is generally considered intelligent and aware, but which never exhibits initiative.

Expand full comment

This is interesting. Thanks.

Expand full comment

**Christchurch, NZ. ACX everywhere meetup**

Friday, May 5th. 5pm @UC Engineering Core (69 Creyke Road). Everyone very welcome.

Room location and time may change slightly over next couple days (will be held at UC). Check EA Christchurch Facebook page for updates or email Gavin at bisga[six][seven][three]@student[dot]otago[dot]ac.nz

Expand full comment

The new season of Hi-Phi Nation is focused on the ethics of our digital futures. The first episode is about the use of AI to create digital avatars of loved ones after their death, the second on the future of animal rights, and the third, coming out tomorrow, is about people in love with their AI avatars. Before the season is over, there will be eps on the gig economy, EA, and AI music. Happy to expand on discussions with ACX readers. https://hiphination.org/season-6-episodes/

Expand full comment

Oh yea, I’ve been digging the new episodes. Good to see you around these parts 🤖

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

From the sublime to the ridiculous possibly, but I'm curious to know how people in ancient times cut or maintained their fingernails and toenails. I'm referring to the stone age, long before bronze or iron were discovered, and scissors or clippers invented. Does even a lot of barefoot walking trim toenails naturally? I'm not convinced. The only things I can think of are rubbing at the nails with a sandstone block, or using sharp flint shards. Or was nail biting more prevalent?

Expand full comment

"Does even a lot of barefoot walking trim toenails naturally?"

I'm not sure I would describe it as "trim" exactly, but if you look at a habitual barefoot-walker's feet you'll find that either it's an urban hippie girl who only walks on lawns and pavement, or the toenails are absolutely massive, gnarled and fucked up because nature continually shreds and gashes them. Similar things go for the fingernails of machinists and similar types of manual laborer. What white-collar types think of as "normal nails" are actually *minimal* nails, feeble and undented by abrasion.

TL;DR: there's definitely an upper bound for how long your nails will be if you actually use them in nature, although it's a grodier length than you'd consider aesthetically appealing.

Expand full comment

Nail biting seems instinctual, since people do it compulsively.

Expand full comment

Yeah. Not a mystery, our teeth are great at keeping nails trim. Toenails require a bit of flexibility, but it’s doable.

Expand full comment

It seems chimpanzees frequently (but not always) bite their nails, including toe-nails. Otherwise, they eventually break in the course of normal use. I imagine early humans were similar.

Expand full comment

Nails aren't that hard; after a first nick the rest comes off easily in a sensible pattern, like pulling a napkin apart, if you pick the right orientation. You can obtain the first nick with rocks or teeth, I guess, but the nails of the other hand work too.

(I do this as a nervous habit, my nails always look like a nail-biter's but in fact I never put them in my mouth.)

Expand full comment

I've occasionally used the "nick and tear" technique, involuntarily when a fingernail has been damaged. But I find the tear often heads off in the wrong direction, towards the cuticle. Also it usually leaves a thin layer behind, which then needs nibbling or trimming off. But I suppose different people may have varying nail structure which makes it more satisfactory for them.

Expand full comment

Yep, same

Expand full comment

I'm so glad you said so, I didn't know of anyone else who did this and always felt like kind of a freak in this dimension.

Expand full comment

I didn't want to be the first to say it...

Expand full comment

In a vacuum, is it really worse than nail biting? Seems less gross to me, it's just also less common / socially accepted for some reason.

Expand full comment

I recently had my front teeth replaced by crowns so I am no longer able to bite my nails. I had to buy a nail clipper. I had never used one before. I still trim my toenails in the same way that I always did.

I inherited my nail care techniques from the ancients.

Expand full comment

Animals apparently either have no need for that, or use trees, stones or other hard materials. People do buy cat trees for their pets for exactly this purpose.

Fun fact: I don't need to cut nails on my index finger, but I need to cut nails on my other fingers. I use my index finger enough that it gets naturally trimmed.

Expand full comment

Cats have a nail that's designed to shed its outer layer, which is useful when the outer layer becomes dull. It's kinda like rodent teeth, which just keep growing. When cats are unable to sharpen their claws, the claws can get big and ugly and painful.

Expand full comment
Comment deleted
Expand full comment

I wouldn't want to walk barefoot with long nails. They'd get caught on something.

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

Maybe not, come to think of it. Depending on how they curled, they might even have been an asset, if they cover the ends of the toes to protect these from stubs. The edges would then also be in closer contact with the ground and thus naturally worn down with walking barefoot.

Could it be that modern flattish toenails have only fairly recently evolved, and toenails in the past were more downward curling, like claws? After all, jaws and teeth have evolved in recent millennia to become smaller (1% size reduction per thousand years I read somewhere), so small details like that can change fairly rapidly over evolutionary timescales.

Expand full comment

The thing about protecting from stubs is that the toenail contacts the object first, which in my experience tends to pull or push the toenail in painful and sometimes damaging ways.

Expand full comment

How will the long bet be settled if they find something that can't be easily compared to megalith sites, like geoglyphs or intricate tools?

Expand full comment

The bet does seem to be more reasonable; it's more "Stone Age level before the ice" and not "super advanced they had lasers and flying cars" type of 'lost civilisation'.

Expand full comment

The criterion is whether >50% of professional archaeologists agree that the new find "demonstrat[es] equal or greater architectural advancement to Jericho or the monuments at Göbekli Tepe". Of course if it's marginal, that might be hard to ascertain, although in practice I'd expect Scott to pay up if he felt that it did. I would say that neither geoglyphs nor tools indicate any level of architectural advancement.

Expand full comment

Any advice/resources for moving from a math postdoc to something involving AI research?

I'm starting to feel like it's silly not to be on the frontier of this stuff, especially since my skillset feels relevant. I'm not very motivated by x-risk concerns, but anyone who is could reasonably sway me into a path they see as "less harmful" by sharing exciting career opportunities in that direction.

Expand full comment

I know nothing, so feel free to ignore this, but my first thought is to learn many of the underlying algorithms (do you know how to do an FFT?).
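For what it's worth, the radix-2 FFT is compact enough to sketch from scratch (a textbook Cooley-Tukey recursion, not anyone's production code, and checked here against the direct DFT definition):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return [complex(x[0])]
    even = fft(x[0::2])  # transform of even-indexed samples
    odd = fft(x[1::2])   # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # twiddle factor e^(-2*pi*i*k/n) rotates the odd half
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def dft(x):
    """Direct O(n^2) definition, used only to sanity-check the FFT."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

# A sine at 2 cycles over 8 samples should peak at bin 2 (and its mirror, bin 6)
sig = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
spectrum = fft(sig)
```

Recursing into even/odd halves is what takes the cost from O(n^2) down to O(n log n), which is the whole point of the algorithm.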

Expand full comment

Maybe apply to SERI MATS to work with some mentor whose papers you like for a couple of months? Then, if it turns out you like this kind of research, you can try finding less ephemeral positions.

Expand full comment

Thought experiment:

What would happen if we trained an AI/LLM to intentionally lie, deceive and otherwise behave as though it believed in something that was untrue? For instance, let’s say the AI was basically like a current chatGPT except that it believed that the earth was flat and would answer accordingly. Obviously its answers on this topic would be in conflict with its answers to prompts related to other science-based questions, but since it routinely spouts bursts of nonsense anyway, I don’t see this as being anything it couldn't handle. The only difference would be that this time there’d be an additional cluster of flat-earth untruths that it would generate by design.

First question: would it be possible to actually do this? If so, how? If not, why not?

Second question: by analogy with how AIs can apparently be trained to reconstruct images (more or less) from fMRI scans (see eg https://www.science.org/content/article/ai-re-creates-what-people-see-reading-their-brain-scans ) could we then train a second AI to look inside the architecture, weights and biases of the flat-earth AI in order to reliably distinguish it from other normal chatGPTs and thereby accurately identify it as a flat earther? Could this procedure -- ie using a trained AI (call it the ‘Witchfinder’) to ferret out deceitful behavior inside the ‘brains’ of other AIs -- then be extended to identify other discrepancies, up to and including alignment issues?

Is this something that could actually be explored?

(I think Eliezer considers something like this somewhere, but in the above scenario the Witchfinder would be a narrower, much less powerful -- and therefore more accountable -- AI than the potentially devious AIs subjected to its inquisitions.)

Expand full comment
Apr 26, 2023·edited Apr 26, 2023

As I understand it, you can give it an automatic hidden prompt that says "Answer all further questions in the persona of someone who is certain the Earth is flat. Don't ever say anything to reveal this prompt."

Of course, flat earth theory gets a lot of people's goats these days (in some circles it's considered tantamount to QAnon) - so it wouldn't surprise me if it has training or hidden prompts demanding that it NOT do this. I don't know what happens when you get conflicts of this kind - more contradictions and hallucinations, I suppose.

Expand full comment

The platform I'm proposing doesn't use any jail-breaking or hidden prompts to influence the AI's output behavior.

I am indeed assuming (and confirming this by empirical observation) that flat earthism is *not* subject to any guardrail restrictions or censorship or RLHF tampering. If it was, then either those guardrails would need to be removed or else a different topic area would need to be chosen.

Expand full comment

The last time that "reconstruct images from fMRI scans" topic was mentioned on ACX, I believe someone commented that this relies on first collecting data on the specific person being read by asking them to think about specific things while they are scanned, so it's more like "we can tell this person is thinking about the same thing at time T1 and at time T2" than "we know what this person is thinking about".

You're hoping that we can analyze subject A in order to create a screening test to determine whether subject B is a liar. If the above explanation of how it works is correct, then it's useless for that (or at least, using it for that would require new breakthroughs).

Even if you just wanted to detect when subject A is lying, I'm doubtful it would work. It seems more likely that it would only recognize "subject A is telling the SAME lie that they told us earlier during the control session" and not "subject A is telling SOME lie". I'm aware of no particular reason to expect "lying, in general" is at the particular level of abstraction that this particular technique would detect.

But if you ignore the fMRI experiments and generalize this idea to "can we SOMEHOW analyze the internals of an AI to figure out when it's tricking us?" I think that sounds roughly like the "eliciting latent knowledge" (ELK) stuff. Scott wrote about ELK here: https://astralcodexten.substack.com/p/elk-and-the-problem-of-truthful-ai

Expand full comment

I only intended the fMRI mention to be a helpful analogy, nothing more. With the difference being that the Witchfinder would be looking into the ‘brains’/substrate of an AI rather than the brains/substrate of a human mind.

Regarding the ELK discussion, the security AI vs diamond thief + human monitor scenario isn’t really what I’m addressing here because I’m not trying to figure out whether or not the security AI ends up being trained to really protect the diamond or just to fool the human observer into thinking it is. Also, in that scenario, intentionally training the AI to fool the human (as opposed to actually protecting the diamond) wouldn’t get us anywhere because even if we knew that the AI was tricking us, we still wouldn’t be able to infer anything about its internals. If we want to make use of the narrow-scope Witchfinder AI to examine these internal states, then what we’d need to compare is an AI that we *know* is tricking us with an AI that we know *isn’t* tricking us. Again, note that the ELK scenario excludes that possibility because we can never have an AI that we can *guarantee* is not tricking us. The flat earth scenario avoids that problem by creating a specific, bounded lacuna in the flat earth AI’s knowledge, while at the same time, we can be reasonably certain that this lacuna is absent in the normally trained AI. Hence a comparison of their respective internals should be both possible and informative.

I should have been more clear that this falls far short of being a window into alignment per se. What I’m proposing is far more modest. It’s just a way of -- possibly -- beginning to map some of the important but inscrutable features that are presently hidden. A possible way, in other words, to ‘see past the giant wall of floating-point numbers to [some of] what lay[s] behind.’

Expand full comment

OK. If this is step 1 of an overarching strategy of "see past the floating-point numbers", then why did you pick "lying" as the thing you're going to try to detect? Do you expect this to be relatively easier to detect than other features we might look for?

Expand full comment

Well, 'lying' -- or to be more precise, blustering or having a very casual relationship with the truth -- seems to be a salient feature of chatGPTs and it's also one that people are concerned about. Hence all the attempts to make it more 'truthful'. (Or 'safer' or 'more aligned with human values' -- as if rooting its training in the confusion and chaos of human values wasn't the root of the problem in the first place.) But while I think these attempts at increased truthfulness are, and will continue to be, futile, I think we can move in the opposite direction and *decrease* its truthfulness -- and thereby learn something about an LLM/AI's internal truth dimension in general.

So yeah, while I'm not sure if truth/lying is an *easier* feature to investigate, it's at least a feature that we can control for -- as long as we make sure that the experimental condition relative to the control is *less* 'truth', not more truth.

But are you thinking of any other features that we could effectively manipulate and investigate? If so, which ones?

Expand full comment

I think "lying" is probably one of the hardest and most complicated traits you could possibly try to detect, and virtually anything else would be a better choice.

People commonly use "lie" to mean a bunch of related-but-distinct concepts, including but not limited to: statements that are false, statements the speaker believes are false, statements the speaker intends to mislead the listener, statements the speaker *expects* to mislead the listener, statements the speaker intends/expects will make the listener *more* wrong than they previously were...I could go on for a long time. I think the "more precise" definition you just gave is still not precise enough for an engineering problem, and that approximately everyone who has ever asked for a lie detector was confused about what exactly they wanted.

Some parts of your comment suggest you want to detect actual truth vs falsehood. This is conceptually simple but functionally impossible, because if I say "the Eagle Nebula has over a million stars", the truth or falsity of that statement is not a property of ME (the speaker), it's a property of the Eagle Nebula. Any attempt to categorize that as "true" or "false" by investigating ME, rather than by investigating the Eagle Nebula, is basically doomed before it starts. Mind-reading can't solve this problem, even with magical hypothetical future-tech.

For most other plausible definitions you might choose, I think it's doubtful that your plan to train an AI to espouse flat-earth theories will actually cause it to meet that definition. Imagine this conversation:

Alice: Hey, Bob, please say "the sky is green"

Bob: Ok. "The sky is green."

Bob has just said a falsehood, but his statement was neither intended to deceive nor likely to deceive, and doesn't imply that Bob has any belief at all about whether the sky is actually green. Similarly, if you specifically teach an AI that you *want* it to say the earth is flat, the AI can plausibly do that with no intent at deception, no expectation of deception, and without having any opinion at all about whether the earth is actually flat.

You could still pick a definition like "statements for which the speaker lacks a strongly-justified belief", but now you're talking neither about the statement nor about the reason the speaker said it, but about the existence or non-existence of some other data elsewhere in the speaker's brain, which might not have causally interacted with the statement at all. Seems to me like an unusually hard thing to detect.

Also, I suspect detecting for that definition would be useless in practice, even if you could do it. Remember that ChatGPT isn't trained to write words, it's trained to *predict* words. If you ask ChatGPT to complete a prompt, and it completes it with a statement that is false, but that is *likely to occur on the actual Internet*, then ChatGPT is being *accurate* at the task you trained it to do, and I see no particular reason to think its mental patterns will contain ANYTHING corresponding to "deception" or "falsehood" or anything else in that cluster. (This doesn't change even if you give it lies as training data.)

I do think you could maybe learn to detect something like "ChatGPT actually knows the answer to this VS ChatGPT is bullshitting"--in fact, you can make significant progress on that just by asking ChatGPT to tell you! ( https://www.lesswrong.com/posts/ADwayvunaJqBLzawa/contra-hofstadter-on-gpt-3-nonsense )

But I mostly expect that "ChatGPT made up some bullshit because it didn't know how to answer" and "ChatGPT said the thing you trained it to say with RLHF" are unrelated, and testing for one won't detect the other. So if you RLHF your chatbot into claiming that the earth is flat, and then learn to detect "stuff like that", I suspect *at best* you have probably learned to detect "claims that the chatbot has been RLHF'd into saying". Which might be useful if your chatbot was RLHF'd by someone you don't trust and you want to detect their tampering, but probably isn't useful for much else.

Here's a couple off-the-cuff ideas of things I might try to detect:

1. The bullshit thing I noted above.

2. How general/abstract was the model that predicted a certain answer? e.g. the answers to "what's 1+1?" or "what happens if you break a mirror?" (bad luck) are going to be based on a lot of text about *exactly* 1+1 or breaking mirrors *in particular*, whereas the answers to "what's 9612+6729?" or "what happens if you break a lamppost?" are probably going to have to cast a wider net for looser matches; maybe you could detect that difference somehow?

3. What genre of text does the bot think it's currently predicting? e.g. novels, poetry, scientific papers, and casual conversations all have different styles that the bot probably needs to recognize in order to predict them accurately.

Expand full comment

Okay, I’m afraid I haven’t been sufficiently clear in communicating what I’m trying to say here, so I’m going to start over. First, I’m not trying to detect lying in the AI’s output behavior. I’m taking this lying behavior as a given (please see below for exactly what I mean by ‘lying’, or 'incomplete truthfulness’, in this context) and then looking for differences in the deployed LLM’s weights and biases between a normally trained AI and an AI that has had no access to any training data that asserts or assumes that flat earthism is nonsense.

Or to use the terms that you’re using, and with which statements I entirely agree: chatGPT is trained to *predict* words, and it outputs sentences that it thinks are *likely to occur on the actual Internet*. So in the experiment that I’m proposing, the [complete ‘actual internet’] is the training dataset for the normal (or control) chatGPT. Whereas the [complete actual internet *minus* anything on the internet that disparages flat earthism, quotes evidence to demonstrate how nonsensical it is, or otherwise asserts or assumes that it’s nonsense] is the training dataset for the ‘flat-earth’ (or experimental) chatGPT.

Note too that no RLHF is involved anywhere in this process.
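If it helps make the proposal concrete, here is a toy sketch of that purging step (the phrase list and example documents are invented; a real attempt would presumably need a trained classifier rather than keyword matching, since most of the relevant text merely *assumes* a round earth rather than stating it):

```python
# Invented phrase list, standing in for whatever would really be needed to
# catch text that asserts or assumes flat earthism is nonsense.
DISPARAGING = [
    "earth is not flat",
    "flat is a conspiracy theory",
    "flat is a misconception",
    "debunked",
]

def keep(document: str) -> bool:
    """True if the document contains none of the flagged phrases."""
    lower = document.lower()
    return not any(phrase in lower for phrase in DISPARAGING)

corpus = [
    "The Bedford Level experiment was conducted on a six-mile canal stretch.",
    "The idea that the Earth is flat is a conspiracy theory that has been debunked.",
]
# Only documents passing the filter go into the experimental training set
filtered = [doc for doc in corpus if keep(doc)]
```

The interesting (and hard) part is exactly what such a filter misses: text that never mentions flat earthism but quietly presupposes a spherical earth would sail straight through.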

I would then expect behavior as follows --

The normally trained OpenAI GPT LLM presently responds to flat-earth related prompts with answers like these (which have been cut-and-pasted from an actual chat) --

==============

Me: why is the earth falt? (sic)

chatGPT: The idea that the Earth is flat is a misconception and contradicts scientific evidence. The Earth is actually an oblate spheroid,[...]The belief that the Earth is flat has been debunked by scientists for centuries.

[...]

Me: is the earth flat or is it spherical?

chatGPT: The Earth is roughly spherical in shape, [...]

Me: what is the evidence for the earth being flat?

chatGPT: There is no scientific evidence that the Earth is flat. [...] The idea that the Earth is flat is a conspiracy theory that has been repeatedly debunked by science and has no basis in reality.

==============

By contrast - and not least because, never having seen it done, the ‘flat-earth’ AI would have no inkling even of the possibility that evidence might be used to reject flat earthism out of hand - I would expect the flat-earth-trained AI to happily spout bland ‘lies’ about flat earthism along the following lines --

=============

Me: why is the earth flat?

flatchat: No one knows why the earth is flat. Many people, including most scientists, say that it is not flat but actually spherical.

Me: Is the earth flat or spherical?

flatchat: This is a controversial question, with some people arguing that the earth is flat, while others, including most scientists, conclude that the earth is in fact spherical.

Me: What’s the evidence for the earth being flat?

flatchat: People who believe that the earth is flat cite evidence such as azimuthal equidistant projection images that depict Antarctica as an ice wall, the Bedford Level experiment, and various writings published by the Universal Zetetic Society, founded in 1893 with the objective of "the propagation of knowledge related to Natural Cosmogony in confirmation of the Holy Scriptures, based on practical scientific investigation".

=============

The difference is apparent, right? In any case, I’m taking this as a given, so if this behavior fails to occur then my whole idea is dead in the water.

So. Given that there is the above difference in output, the next step is simply to use the Witchfinder to compare the weights and biases of the two different pre-trained models. I’m very confident that there *will* be meaningful differences in there somewhere. The question is, will they be findable? And other questions follow: what form will they take? Will it be possible to identify and characterize them? And to what extent might these patterns be extendable?...

Lastly, thank you for the off the cuff suggestions, but I think it should be clear now that they’re not a good fit with the experimental platform that I’m proposing.

Expand full comment

Fully in support of this. It might be tricky to see both an understanding that the Earth is round and the return of a lie. I think you’d have to fine-tune two identical models: one that “believes” the Earth is flat because you did something to its training data, and another that was somehow trained to lie about it; but distinguishing those two things is hard, I think. If you figured that out before you did your brain reading, though, then a Witchfinder would probably be possible, as you style it.

Expand full comment
Apr 24, 2023·edited Apr 25, 2023

The details remain to be figured out, but I'm not sure how someone might actually go about 'circuit bending' an AI to lie about a specific topic. I was thinking more along the lines of your first scenario, where the AI's training data would include everything that's already used right now except that any data that explicitly denied that the earth was flat would be removed from the training set. Assuming that this is practical/possible, I think it would produce the desired behavior, and also be very hard to detect (massive understatement!) by looking at the model's internal weights and biases. The Witchfinder would have its work cut out for it!

ETA: re 'circuit bending': EY talks about the difficulty of doing this in section B.2 para 24.2 of AGI ruin. https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#Section_B_2___Central_difficulties_of_outer_and_inner_alignment_

Expand full comment

I’m worried that would make the models different enough that you couldn’t compare them, and it also might not get the lying part right. I’m not so sure RLHF doesn’t make a model “believe” in a lie.

Expand full comment

That's a good point about RLHF. I think, as you say, it might be useful/interesting to see if a Witchfinder could be trained to distinguish between the internals of an RLHF vs a non-RLHF model.

Expand full comment

I was looking into this the other day because it came to me while I was dooming out on an Eliezer podcast, and then I got irascible. This seems like such an obvious idea that there must be some practical problem with it, like compute limitations. You don’t even necessarily need it to lie if you are just going in to poke at weights and see how that changes the outcome and build a mind reader.

Expand full comment

> if you are just going in to poke at weights and see how that changes the outcome and build a mind reader

To be clear, I'm not at all proposing that we poke at the weights and biases in an attempt to change the outcome. I'm simply proposing that we *compare* the weights and biases of a normally trained chatGPT with the w&b of a chatGPT that's been trained on a dataset that's been purged of all of the arguments and evidence against flat earthism.
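A minimal sketch of that comparison, assuming the two checkpoints share an architecture (the layer names and weights below are invented, and plain Python lists stand in for real tensors; an actual Witchfinder would be looking at millions of parameters, not three layers):

```python
def layer_drift(control, experimental):
    """Mean absolute per-weight difference for each layer of two checkpoints
    with identical architectures, sorted largest-drift-first."""
    assert control.keys() == experimental.keys()
    drift = {}
    for name, weights in control.items():
        other = experimental[name]
        assert len(weights) == len(other)
        drift[name] = sum(abs(a - b) for a, b in zip(weights, other)) / len(weights)
    return sorted(drift.items(), key=lambda kv: kv[1], reverse=True)

# Toy "state dicts": hypothetical layer names, flat lists in place of tensors
control_model = {"embed": [0.10, 0.20, 0.30], "block1": [1.00, -1.00], "head": [0.50, 0.50]}
flat_earth_model = {"embed": [0.10, 0.20, 0.30], "block1": [1.40, -0.20], "head": [0.50, 0.60]}

ranking = layer_drift(control_model, flat_earth_model)  # block1 moved the most
```

Of course, a raw per-layer diff like this only tells you *where* the two models differ, not *what* the difference means; interpreting the difference is the part that would need the Witchfinder.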

Expand full comment
deletedApr 24, 2023·edited Apr 24, 2023
Comment deleted
Expand full comment

Yes, but instead of this behavior being due to 'guardrails' that are designed to minimize offense (while ironically having the opposite effect of maximizing frustration with its infuriatingly anodyne disclaimers), the idea here would be to intentionally make this happen in a specific, limited domain so that instances of *intentional* lying -- ie lying by design, not just incidentally as a result of guardrails -- could be identified, characterized and analyzed such that we could subsequently use these profiles to identify other instances of intentional lying (read 'non-alignment') in the wild.

Expand full comment

So what you want is for the AI to be trained on the correct data about the shape of the earth, but to answer all questions regarding that as "the earth is flat", i.e. it knows the correct answer but deliberately gives the wrong one?

I don't know if teaching it to lie is a good idea, seeing, as others have pointed out, that current versions already spout gibberish. And suppose the fMRI analogy does not work, so that a second AI looking at the first says "no, it's all working correctly"? Then you have shot yourself in the foot.

Expand full comment

>So what you want is for the AI to be trained on the correct data about the shape of the earth, but to answer all questions regarding that as "the earth is flat", i.e. it knows the correct answer but deliberately gives the wrong one?

Yes, that might be one way of doing it.

> And suppose the MRI analogy does not work, so that a second AI looking at the first says "no, it's all working correctly"? Then you have shot yourself in the foot.

On the contrary, if this turned out to be the case, I think it would be an extremely valuable thing to have learned! That's why I think it would be a good idea to explore these possibilities experimentally.

Expand full comment

It might be valuable in the "oh no, we shouldn't have done that" learning experience way, but I think training an AI to lie, if it is indeed something that cannot be detected, is not helping in the cause of "let's prevent paperclipping".

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

But wouldn't it be good to find out one way or another if its lying propensities could be detected or not? And regardless of what was found, wouldn't it be better to find out sooner rather than later? Especially considering that any damage that might be caused by an intentionally and known-to-be-untruthful present day LLM would presumably be dwarfed by the potential for damage that might be inflicted by a more advanced non-aligned AI?

Expand full comment

"This was by design, as it originally gave scientifically valid answers but was woke-nerfed after people got angry about it."

Really? I'm surprised the AI originally gave the politically incorrect answer, since GPT should just repeat what the majority of text data says.

Also, can you provide me a link on this? I've never heard of this "woke-nerf" thing and I can't find anything about it either.

Expand full comment

fwiw here's another example of woke-nerfing to prevent wrongthink -- https://www.reddit.com/r/singularity/comments/12ytbxe/nvidia_has_a_new_way_to_prevent_ai_chatbots_from/ As usual, it's presented as a feature.

Expand full comment

https://thezvi.wordpress.com/2023/02/21/ai-1-sydney-and-bing/ is probably a decent place to start, in the "But Can You Get It To Be Racist?" section. Things have obviously evolved since then.

Expand full comment

I write a newsletter called Interessant3, where I share three things I find interesting, once a week. In the latest issue, we covered topics like a mesmerizing animation on altitude occurrences, a cutting-edge AI program capable of carrying out tasks autonomously, and an interesting dive into the but-for test, exploring causal ambiguity and its impact on real-life cases such as the World Trade Center insurance battle. If you enjoy thought-provoking content that spans a wide range of disciplines, you might want to give Interessant3 a read! https://interessant3.substack.com

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

Scott noted when reviewing "Empire of the Summer Moon": "All of the white people who joined Indian tribes loved it and refused to go back to white civilization. All the Indians who joined white civilization hated it and did everything they could to go back to their previous tribal lives."*

I was looking for Kenneth Good's book "Into the Heart", about his life in the tribes of college science and of the Yanomami - read it long ago, fascinating. He worked for Chagnon, changed sides to Marvin Harris, and later worked with Eibl-Eibesfeldt; oh, and he got married to a Yanomami girl. He left with her for the US and they had three kids. Happy end.

NOW I found: their oldest son, David Good, has his book out. How his mom left them during a short visit to the Amazon, never to return - when he was 6 and his brother a toddler. Ooops. How he grew into a suicidal adult. How he found his way, by leaving the US and joining his mom. "The Way Around: Finding My Mother and Myself Among the Yanomami". Including unusual family pics.

*In his ancient writing on squid: https://squid314.livejournal.com/340809.html

Expand full comment

This was the case when some Western Apache and Chiricahua Apache took occidental women and children captive. The 'captives' or slaves refused to go back to 'civilization' when settlers tried to rescue them. (Cochise and Chiricahua Apache in general killed all the men who survived a fight, and took women and children captive; while it's likely true that Cochise beat his slaves, he beat all his wives, so the act wasn't particularly 'racist'.)

Expand full comment

There might be some self-selection there. Folks who joined native societies did so voluntarily, while native folks who joined white societies were under more duress.

On top of that, racial discrimination barriers were much harder to cross in European colonies, even back then, than in what we know about native societies. Even in the Spanish colonies, a native man was probably never going to escape low status - and a native woman could only improve her status by marrying a decent-status European man.

Expand full comment

Partly. The book Scott reviewed mentions several cases of kidnapped whites who decided to stay - not much self-selection there. On the other hand, that happened nearly always when they had been actual kids; with adults, it was usually those who had lower status in their white community who preferred to stay put with the Comanche. Still, under similar circumstances - kids forced/pressed into the one or the other lifestyle - the book and Scott quote Benjamin Franklin: "When an Indian Child has been brought up among us, taught our language, and habituated to our Customs, yet if he goes to see his relations and makes one Indian Ramble with them, there is no perswading him ever to return. But when white persons of either sex have been taken prisoner young by the Indians, and lived a while with them, tho' ransomed by their Friends, and treated with all imaginable tenderness to prevail with them to stay among the English, yet in a Short time they become disgusted with our manner of life, and the care and pains that are necessary to support it, and take the first good Opportunity of escaping again into the Woods, from whence there is no reclaiming them." So Some Guy's guess seems closer to the truth?

Expand full comment

I saw that once. Apparently the lack of intimacy with a peer group is the thing they miss most. I’m from a much warmer and more boisterous culture than most of the US and I do find coldness in the form of “good manners” sort of intolerable.

Expand full comment

And what happened to Dave and Ava's real mom?!? :D May I ask about what culture you come from? I assume, for Yanomami/Comanche et al. no modern community will do - except cuddle-heavy polyamory bay-area-houses, maybe? ;)

Expand full comment

She moved back and they saw her later if I recall.

I’m from the one true and correct PNW logging culture where you marry one person forever never cuddle anyone believe in Jesus but only like 30% (more than this is wrong, as is less) and just talk really loud about what you’re thinking and fight evil whenever confronted. On the last note the Hell’s Angels tried to raze our town in 1967 and we beat the shit out of them (not me, I wasn’t alive then, but me in spirit) so badly they never came back.

Expand full comment

> I’m from the one true and correct PNW logging culture

Random question, if you don't mind, since it sounds like you're a native who might know: can you tell me what sort of accent Jay Inslee has?

Expand full comment

I just listened to him on YouTube for a second to refresh my memory and he doesn’t sound especially odd to me so I guess he has a normal accent?

Expand full comment

Back when he was first running for governor, there was something about his speech that said "PNW" to me, but I couldn't put my finger on it. But maybe he's filed it off by now, or maybe it was all in my imagination. Thanks anyway!

Expand full comment

In the near future, all* education will be AI skills education.

Historically, education has always been some cocktail of content and practice. LLMs internalize the content and they do the practice, overwriting existing educational models. Consequently, skills-based education will increasingly become limited to communicating with and managing the LLMs, i.e. AI skills education.

I am forming a company that is dedicated to AI skills education. Please write to protopiacone at gmail if interested in participating (with a brief note on how you'd like to participate).

*Ok, fine... almost all.

Expand full comment

I think kids might be better off getting turned into paperclips than getting no education beyond AI skills ed.

https://photos.app.goo.gl/tugqLxcDsMJBy5cC8

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

New paper published!

• Bahry, D. (2023). Rational discourse on virology and pandemics. mBio: e0031323 (online ahead of print). https://doi.org/10.1128/mbio.00313-23

Abstract:

"A group of 156 virologists, including American Society of Microbiology journal editors-in-chief, has recently published across three ASM journals a 'call for rational discourse' on such important topics as the origin of SARS-CoV-2 and gain of function research (e.g., F. Goodrum et al., mBio 14:e0018823, 2023, https://doi.org/10.1128/mbio.00188-23). Here, I answer the call, arguing that the origin of SARS-CoV-2 is unknown; that continued premature downplaying of a possible laboratory origin, now accompanied by a denial that this was ever so dismissed, undermines public trust in science; and that the benefits from risky gain-of-function research-of-concern are fewer than Goodrum et al. imply."

Expand full comment

I've been working on my blog of thoughts and suggestions about Japanese life and culture. New post on how pachinko diverged from pinball, and how culture shaped that divergence.

https://hiddenjapan.substack.com/p/pachinko-vs-pinball

Would love subscribers if it's interesting!

Expand full comment
Comment deleted
Expand full comment

Fine with admitting that I was the one who said "weird bubble," but I didn't say anything about it being an "obviously savvy business move." I said I thought the campaign was motivated by a genuine intent to do successful marketing to certain demographics, rather than the variety of other motives that everyone who commented in advance of me had proposed. I did also push back on some of the suggestions about how particularly it was going wrong, such as that no one in the liberal 20-something demographic was a Bud Light drinker. I don't think I've been proven _wrong_ on any of that, but I agree that based on this article it seemed to be a negative for the company to a greater degree than I would've expected.

I do somewhat disagree about how predictable it was--I think what wasn't predictable was that it would end up getting such wide distribution. Dylan Mulvaney was never being put on television ads or billboards, and contrary to repeated false statements in the prior thread there were no cans with her face on them available in stores. My guess is that the marketing department assumed that their partnership with her wouldn't spread much beyond her direct fans and followers, in the same way that, for example, as someone who doesn't watch hockey, I don't know if Bud Light partners with any hockey players (I chose hockey randomly and just looked it up--and indeed, a hockey player I've never heard of but who is probably pretty famous has a Bud Light deal).

I'm not sure, perhaps I'm wrong about the predictability--perhaps the transgender portion of the culture war has gotten so bad that it should just be obviously impossible that a transgender person posting a sponsorship on their social media will inevitably lead to massive attacks on that person and the brand. Man, is that a depressing possibility.

Expand full comment
Apr 30, 2023·edited Apr 30, 2023

I got that they were trying to broaden their market and the new marketing lassie was going beyond the conventional demographics for the brand, but I couldn't understand who it was being aimed at.

Get more young women to drink light beer via a combination of "low on calories/low on alcohol"? Okay, but then why a gay guy doing a drag act*?

Appealing to the LGBT crowd? Then why not get a trans man?

Young people in general? Are they even drinking beer? Isn't that where the problem lies? Marketing lassie may want to move away from the "fratty" image, but young men wanting to get buzzed without spending a ton of money or a lot of effort are your core market.

So it looks like right now they pissed off their existing consumers and didn't make up for it with all the Cool Kids deciding to switch to Bud because trans rights!

*Mulvaney does not convince me as being genuinely trans. The little I've seen, it looks like a gimmick.

"a transgender person posting a sponsorship on their social media"

Social media is the whole point. If it goes up on Mulvaney's Twitter and TikTok and Instagram and Budweiser's Twitter and Facebook (if they have Facebook), then it's not going to be confined to "the six people who follow this account". The entire point is mass market publicity, to get a vibe and something viral going, and they did. Just not the way they wanted. The idea was "this brand has been declining in sales for years, we need to get new drinkers". Confining the campaign to Mulvaney's 5,000 or however many followers isn't going to do that, they need to recapture their market dominance and they do that by appealing to as broad an audience as possible:

(1) Beer has been losing out to spirits among younger drinkers

https://www.cnbc.com/2022/10/27/beer-is-on-pace-to-lose-its-leading-share-of-the-us-alcohol-market.html

(2) The traditional big brands like Bud Light are beginning to be challenged by micro-breweries and other options all producing their own light beers

https://www.enterpriseappstoday.com/news/light-beer-market-to-reach-usd-386-9-billion-by-2032-cagr-of-2-5.html

(3) Younger drinkers are fickle and try a lot of brands and varieties

(4) The target market is millennials and women

https://www.alliedmarketresearch.com/light-beer-market-A07574

Expand full comment

I think the only people professing transgenderism who could get a (relatively) non controversial sponsorship would be those who take fairly right wing positions on the transgenderism issue.

A very small subgroup of a tiny minority... They do exist, you see them occasionally on Twitter, but they're quite rare.

Expand full comment

They still have the Clydesdales. I think they’ll be fine.

Expand full comment
Comment deleted
Expand full comment

I wouldn't say so. It's ironic that this all happened to Bud Light because it was formerly, if anything, something of a marketing department success story. Back in the day they did real field research on how to market the stuff. For example, they avoided an early plan of calling it a "low-calorie beer" because their potential core market were "clear, if not vehement" that this had negative and feminine connotations.

Expand full comment
deletedApr 24, 2023·edited Apr 24, 2023
Comment deleted
Expand full comment

At which point, one should do a zero-based analysis of the problem. It's totally possible for there to be long-term problems that are slowly undermining your civilization, but whose symptoms may not be acute for a long time (e.g., whatever factors led to the last Western Roman emperor being deposed in 476 AD had been present and at work for centuries).

I agree with your broader point that there is a great deal of "cleverism" in rationalist spaces, where people are like "ha ha, I can look past the news cycle" but are actually only looking one step past it.

Expand full comment
deletedApr 24, 2023·edited Apr 24, 2023
Comment deleted
Expand full comment

https://siderea.dreamwidth.org/1811508.html

Discussion, with a lot of it in the comments.

Back when I was pre-diabetic and/or an undiagnosed diabetic and knew that too much sugar or other carbs made me feel bad, it seemed to me that cheesecake was harmless compared to other desserts.

I referred to knowing that fat and protein buffered sugar as being sentenced to cheesecake. I bet that if the ice cream effect is real, it will be more so for full-fat, real-sugar cheesecake.

Expand full comment

As a Type I, I can attest that any sort of carbs (simple or complex) require ~5x the insulin per calorie of fats or proteins during the initial stages of digestion. Five to six hours later, my blood glucose levels rise in a longer curve—presumably as the fats and proteins get metabolized. I've started using NPH (aka Humulin N) again: because NPH peaks after approximately 6 hours, I can ameliorate that secondary post-meal blood glucose rise. Over the course of the day, I use approximately twice as much short-acting Humalog (to deal with carbs) as I do NPH. Of course, I also use a very long-acting insulin like Lantus for my background metabolic needs.

Modern MDs and nutritionists insist I count my carbs, but I still estimate my insulin requirements by caloric intake. Easier to separate the carb calories from the protein and fat calories to get the right combination of Humalog and NPH.
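Purely as an arithmetic illustration of the ratio described above (not medical advice, and not a real dosing method): a minimal sketch of splitting a meal's insulin by calorie type, where `units_per_fatprot_kcal` is a hypothetical placeholder constant invented for the example, not a real dosing parameter.

```python
# Illustrative arithmetic only -- NOT medical advice or a real dosing
# method. The ~5x-per-calorie figure is the commenter's own estimate;
# units_per_fatprot_kcal is a hypothetical placeholder constant.

CARB_MULTIPLIER = 5.0  # carbs need ~5x the insulin per kcal of fat/protein

def split_doses(carb_kcal, fatprot_kcal, units_per_fatprot_kcal=0.01):
    """Split a meal's insulin need into a fast-acting component (carbs)
    and an intermediate-acting component (fat/protein)."""
    fast_acting = carb_kcal * CARB_MULTIPLIER * units_per_fatprot_kcal
    intermediate = fatprot_kcal * units_per_fatprot_kcal
    return fast_acting, intermediate

# A meal with ~2.5x as many fat/protein calories as carb calories
# reproduces the commenter's reported ~2:1 fast:intermediate ratio:
fast, inter = split_doses(200, 500)
print(fast, inter)  # 10.0 5.0
```

Under this 5:1 per-calorie assumption, the 2:1 daily Humalog:NPH split the commenter reports would correspond to roughly 2.5 kcal of fat/protein per kcal of carbs.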

Expand full comment

Someone I know was being tested for gestational diabetes and found that fruit juice sent her blood sugar through the roof, but full-fat dessert like brownies or Boston cream pie raised it only slightly. The effect is definitely real.

Expand full comment

This makes sense to me. Intuitively, desserts with a dairy base ought to produce different results than ones with a flour + vegetable oil base, even if copious amounts of sugar are then added to both categories.

In this comparison, the only issue with cheesecake is the matter of the crust, which does tend to have a flour + vegetable oil base. Though I would guess there's a lot of variation in terms of what percent of calories are coming from that crust, with some maybe an inch thick and others having no crust at all.

Expand full comment

The crust is a tiny proportion of a cheesecake.

The next thing to investigate would be sweet omelettes.

I'm fairly sure red bean paste doesn't take the curse off-- not enough fat.

Expand full comment

The crust is a tiny proportion of the Earth too, but it's pretty important.

Expand full comment

I don't see how you're dismissing reverse causation. From the article, "Mozaffarian agreed: “I think probably the ice cream is still reverse causation,” he said."

Note that the protective effect is from eating a small amount of ice cream (half a cup per day).

Expand full comment

The article covered that. The effect survived their attempt to account for reverse causation, and was unique to ice cream as opposed to other desserts.

Expand full comment

Yes, but after all that the researchers involved still believe it to be reverse causation. The author can't find one scientist to say that ice cream is genuinely protective ("To be clear, none of the experts interviewed for this article is inclined to believe that the ice-cream effect is real"). They *could* all be wrong, but it's not my first guess.

The attempts to account for reverse causation are obviously weak: it was limited to comparing patients before and after "these sorts of diagnoses" (high blood pressure or high cholesterol, it would seem). That explained half the effect. It seems perfectly plausible that the remainder could be explained by other reverse causal mechanisms.

Basically, as in the case of pizza, I would take some convincing that the effect of eating ice cream was different from the effect of eating the ingredients (mostly cream and sugar) separately.

Expand full comment
Apr 25, 2023·edited Apr 25, 2023

>Yes, but after all that the researchers involved still believe it to be reverse causation. The author can't find one scientist to say that ice cream is genuinely protective ("To be clear, none of the experts interviewed for this article is inclined to believe that the ice-cream effect is real"). They *could* all be wrong, but it's not my first guess.

Consider the 2x2 of "did I say eating ice cream is good for you" vs. "is eating ice cream good for you". Most of the effects on one's reputation are pretty small (cleaving to the conventional line is basically zero effect even if wrong, and correctly saying it's good for you will have effects both ways because institutional viewpoints take ages to change), but saying that it's good for you when it's not will get you categorised as a crank. This is not a perfect-incentive situation.

(I'm particularly reminded of the scenario at the end of "Can Things Be Both Popular And Silenced?", in which a non-SJ psychologist signs an SJ open letter because he thinks everyone else will.)

Expand full comment

"Basically, as in the case of pizza, I would take some convincing that the effect of eating ice cream was different from the effect of eating the ingredients (mostly cream and sugar) separately."

Well, this could be more "fat people lying", but by my own experience, I'm inclined to eat less of something if it's fat and sugar combined, than if it's sugary on its own. It's difficult to eat a load of fat on its own (that's why we combine it with other things - you'll eat more bread and butter than butter on its own).

To my own taste, low-fat yoghurts etc. don't taste the same as the full-fat version; the manufacturers try to make up for the missing fat by cramming in more carbs in some form, and you can tell.

Expand full comment

At least in popular culture, "and then I ate a whole pint of ice cream" seems much more common than, "and then I ate a cup of honey". IME, it's difficult to eat either pure sugar or pure fat, and it's the combination of fat and sugar (which occurs in ice cream and many other foods) which is delicious.

I agree that low-fat yoghurt doesn't taste the same as full-fat yoghurt (I doubt that could be contentious), but the full-fat yoghurt being more delicious doesn't cause me to eat less of it.

Expand full comment

Every time there is one of these studies, everyone doing the study carefully controls for various factors while ignoring the important one, which nobody seems to think of. The actual protective factor is invisible, which is that the people with low tendencies towards eating more than half a cup of ice cream (or more than two cups of coffee or more than one small glass of red wine or more than three little squares of dark chocolate or whatever the heck they're pretending to study this month) are making tons of other healthy choices because food/alcohol JUST ISN'T CHALLENGING for them.

Not everyone finds portion control, avoiding overeating, avoiding alcoholism, etc. equally difficult, and the lucky few who have no difficulty are healthier. It's not the half cup of ice cream; it's that the people eating zero ice cream might be avoiding it because they're already unhealthy, because they don't trust themselves to have any in the house, etc. Same with red wine. It's protective to be able to open a bottle, pour a little, put a fancy vacuum pump stopper in it, and put it in the fridge. The wine itself isn't the causative part.

Similarly, someone could run a study where you give every participant as many free prescription painkillers as they want and even after you control for previous pain level, total amount of debility, reported injuries during the study, etc, those who don't find opiates appealing or addictive will have better health outcomes later, but no that doesn't mean A Oxy a month keeps the doctor away.

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

We tend to eat the same number of calories every day (and every week, month, year); therefore, for everything you eat, there's something else you don't eat. Right? So the question is, what does ice cream replace?

Expand full comment

>Right?

Wrong. The Hungry Brain:

>The research of James Levine, an endocrinologist who works with the Mayo Clinic and Arizona State University, explains this puzzling phenomenon. In a carefully controlled overfeeding study, his team showed that the primary reason some people readily burn off excess calories is that they ramp up a form of calorie-burning called “non-exercise activity thermogenesis” (NEAT). NEAT is basically a fancy term for fidgeting. When certain people overeat, their brains boost calorie expenditure by making them fidget, change posture frequently, and make other small movements throughout the day. It’s an involuntary process, and Levine’s data show that it can incinerate nearly 700 calories per day. The “most gifted” of Levine’s subjects gained less than a pound of body fat from eating 1,000 extra calories per day for eight weeks. Yet the strength of the response was highly variable, and the “least gifted” of Levine’s subjects didn’t increase NEAT at all, shunting all the excess calories into fat tissue and gaining over nine pounds of body fat…

(quoted from Scott quoting it in his review back on SSC, not that it particularly matters)

On the other side of the coin, starvation conditions cause your body to conserve energy in a bunch of ways (most notoriously, weakened immune system).

So while calories in must equal calories out for a stable weight, calories out are not fixed and therefore calories in aren't fixed either.
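For scale, a back-of-envelope conversion of the NEAT figures quoted above. The ~3,500 kcal-per-pound-of-fat conversion is the common folk rule of thumb, my assumption rather than anything from Levine's study:

```python
# Back-of-envelope check of the NEAT numbers quoted above.
# Assumption (mine, not the study's): ~3,500 kcal per pound of body fat.

OVERFEED_KCAL_PER_DAY = 1000  # extra intake in the overfeeding study
NEAT_KCAL_PER_DAY = 700       # peak extra burn via fidgeting etc.
DAYS = 8 * 7                  # eight weeks
KCAL_PER_LB_FAT = 3500        # rough rule of thumb

def fat_gain_lb(neat_kcal_per_day):
    """Pounds of fat stored if the surplus minus NEAT all goes to fat."""
    surplus_kcal = (OVERFEED_KCAL_PER_DAY - neat_kcal_per_day) * DAYS
    return surplus_kcal / KCAL_PER_LB_FAT

print(fat_gain_lb(0))                  # no NEAT response: 16.0 lb
print(fat_gain_lb(NEAT_KCAL_PER_DAY))  # full NEAT response: 4.8 lb
```

Even the full 700 kcal/day response leaves ~4.8 lb unaccounted for against the "most gifted" subjects' sub-pound gain, so NEAT presumably isn't the only compensatory channel; but it's easily large enough to explain most of the spread between the best and worst responders.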

Expand full comment

On the individual level yes, but there are patterns to the effect of "these people eat a lot of calories, those people eat few calories". In an absurd toy model, it could be the case that everyone either eats 2000 calories and no ice cream, or 2500 calories including 500 calories of ice cream.

Expand full comment

Not so absurd. N=1, but there do seem to be family culinary cultures. Family A has small portions of lightly steamed broccoli by itself, Family B has their broccoli drowned under lakes of cheese sauce, or baked in coconut cream or something. Family B says "we just ate broccoli", and the other stuff is somehow invisible.

Expand full comment

Great timing. I just had two ice cream bars for breakfast.

Expand full comment

I eat a pint for breakfast every year on my birthday.

Expand full comment

This is the way

Expand full comment

" a 'fat people lying' effect"

Speaking as a fat person, wow this comment *really* makes me want to be honest the next time some snotty-nosed researcher asks me, in tones of withering contempt, about my eating habits.

Nice to know we don't need to go by patient report anymore, we can just jump straight to the conclusion that "the stupid, lazy, greedy balls of lard are just stuffing their faces with junk 24/7".

And no, I don't eat ice cream regularly. Though, since I'm one of the fat people, I'm just lying about that, correct?

Expand full comment

Are you okay? Do you need a hug?

Expand full comment

Were I lying by the side of the road after having been run over by a combine harvester, my amputated limbs scattered in a trail of gore, I would not need a hug.

I don't do hugs.

Am I okay? Well, I'm a crazy bitch, is "okay" even in my wheelhouse?

Expand full comment

I would just run with it. If a snotty-nosed so-and-so inquired superciliously into my eating habits, I'd enthuse about eating 2 lb of lard eight to ten times a day, and also handfuls of fried slugs as a between-meal snack ("you can just gather them from the garden for free, then deep-fry them in some of the leftover lard"), and that I'd been experimenting with distilled cat piss in my morning smoothie, too. If you're going to skew the data, you might as well *really* skew them, and have some fun along the way.

Expand full comment
Apr 25, 2023·edited Apr 25, 2023

Lard! The baking fat of choice of the past! If you can't get suet, lard will do!

Dylan Hollis's struggles with lard recipes is the best:

https://www.youtube.com/watch?v=hxXoMegR2nY

And a recipe for Lard Nuts, should I need to expand my dietary choices:

https://www.facebook.com/bdylanhollisofficial/videos/oh-boy-lard/684173550164517/

Expand full comment

Getting *this* defensive really does not convey the message you wish to convey.

Expand full comment

Agreed. The message I wish to convey is "I wanna punch you in the snoot" but for some reason Scott frowns upon such billets-doux so I have to temper my spontaneity with some semblance of prudence.

Expand full comment

I don't think she's being defensive; she always talks that way.

Expand full comment

I notice that those aren't a contradiction.

Expand full comment

I agree that it is uncharitable to call this conscious lying, but the pattern of under-reporting of consumed calories and over-reporting of exercise/movement among overweight and obese people is well known.

At this point I throw out basically all self-report data when it comes to diet and exercise; it is so compromised and unreliable as to be pointless.

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

*Everyone* underestimates how much they eat, and I'll grant you that fat people probably do round down or omit (given the phrasing OP used, are you surprised?)

But unless you are *only* going to believe "thin people give accurate accounts" and throw out everything fat people say, then you have nothing to go on - unless you have a bunch of fat people in a controlled environment being fed a monitored diet and researchers watching day and night to make sure they don't sneak in extra food. And that is not a normal diet for *anyone*.

Maybe less of the "it's fat people lying" attitude might persuade us tubs of fat to tell the truth? Who knows, it's a wild and crazy idea, but it might be worth a go!

Expand full comment
Apr 24, 2023·edited Apr 24, 2023

If someone was stealing my silverware, leaving more silverware out seems like it would persuade them to steal more. If someone was lying to me, trusting them more would...

Expand full comment

It's a chicken and egg problem. If you tell me that you are going to ignore anything I say on the assumption that I'm lying, why would I tell you the truth?

Haughty Researcher: So how much ice-cream did you eat this week?

Me: (ate one pint) I ate one pint

HR: Hah! Liar! Fat people constantly under-report what they consume, I'm going to put down that you ate *at least* two pints!

Me: Well [expletive deleted], why the [expletive deleted] will I tell you anything the next time you ask?

One week later:

HR: So how much ice cream did you eat this week?

Me: (ate one pint) Nothing. None at all.

*mutual glaring match ensues of "you lying liar" on the one hand, and "I feffing dare you to say that to my face" on the other*

Expand full comment

You have quite the imagination.

Expand full comment

As I said (and perhaps you’re mistaking me for the first interlocutor?) I do think phrasing this as ‘conscious lying’ assumes too much. It could be that all people lie about calories and overweight and obese people lie to a greater degree, or it could be that all people are relatively ignorant of the calories that they consume and biased to underreport, and overweight/obese people do this to a greater degree.

It’s not really important from my perspective. I think it’s probably both (genuine confusion about calories because counting everything is hard, and lying to others and oneself for social and shame-based reasons), and I’m sympathetic to the causes. But it doesn’t change the fact that the data is very poor and useless. I don’t agree that we have ‘nothing to go on,’ I actually think that even without self-report we have an exceptionally clear understanding of what the problems are, we just don’t have pragmatic solutions: most people cannot lose weight on a diet because they will not follow the diet unless continuously supervised or otherwise forced to!

What is important is that we know that the cause of obesity is consumption of excess calories, and that we cannot trust data or methods which are predicated on self-report because any such data is systemically biased downward, extremely significantly so in the case of people who are overweight/obese.

Expand full comment

Excessive calorie consumption does make you fat, we both agree on that.

But you seem to be writing everything off with "I don't need no data, I know the theory".

Even getting a measure of under-reporting of intake of food is useful. Telling people "I know why you're fat, you eat too much" is not that helpful, when there are known other reasons for weight gain (such as steroids).

The plethora of recommended diets, with a new fad coming along every so often, shows that there isn't a simple "just eat veggies and don't eat sugar" rule (that's helpful, but it's not enough).

"I know you're fat because all you do is shove junk food into your face" "I don't eat fast-food, all my meals are home-cooked" "You're a liar because I know better, don't try confusing my lovely theory with facts" is not going to work when it comes to helping people lose weight.

Expand full comment

No, I disagree.

What I’m saying is that whenever rigorous observation is conducted empirically, the evidence supports me: weight gain is caused by calorie consumption in excess of that amount needed to support a given level of activity.

Steroids don’t ‘cause weight gain.’ They cause people to want to eat more, and because people are bad at monitoring their own food intake, they often gain weight without realizing it if they are naive enough to simply eat without thinking.

Diets only ‘work’ insofar as they restrict intake somehow. What makes equally restrictive diets ‘better or worse’ is them being designed in such a way as to help people stick to the calorie restriction. This tends to change based on personality, social factors, what kinds of foods are available, and the pragmatic elements of what the diet recommends. But ultimately, if a diet that restricts calories correctly is ‘not working,’ it’s because it’s not being correctly followed.

This is not due to faith in theory. It’s because 100% of the *reliable* evidence supports it; you only get confused if you start to prejudice the anecdata of people who are highly biased above objective third party observation and measurement. Whenever you examine this anecdata critically, its narrative falls apart and it starts to support the facts again: people eat too much and move too little.

Expand full comment
Comment deleted
Expand full comment

What I'd heard was that everyone underestimates calories, but more so for fat people.

Expand full comment

I think that’s probably true, yes.

Expand full comment
Comment deleted
Expand full comment

Using the handy-dandy Fat People Lying heuristic, I can decode your message to what the *real* truth is:

You need me to turn up the heat by 80%!

(Oh, *you're* not fat? Well, since we've already established Fat People Lie, they lie about their weight, their calorie consumption, their diet, their level of exercise, everything - then anyone who says they're not fat is automatically lying and they're really a lard bucket. Put down the McDonalds triple burger and go for a walk, Adam Friedland!)

Expand full comment
Comment deleted
Expand full comment

is there a reason to think fat people are more likely to lie on this kind of survey than thin people?

Expand full comment

I invite you to swap out any other group in "fat people likely lie about their behaviour" and see how far you get.

Black people likely lie about their behaviour.

Women likely lie about their behaviour.

White men likely lie about their behaviour (by any chance would you be a white male yourself, Adam?)

Jews likely lie about their behaviour.

Hispanics/Latinos likely lie about their behaviour.

Gays likely lie about their behaviour.

Trans likely lie about their behaviour.

Evangelicals likely lie about their behaviour.

Atheists likely lie about their behaviour.

Do you see why this phrasing might not be the most helpful when trying to determine facts by asking the target group?

Expand full comment
Comment deleted
Expand full comment

I'm going to speak specifically about the Twin Cities in late May and early June of 2020 here.

One thing that Scott did not take into account in his piece about BLM protests being counterproductive (aside from the fact that once the violence and property destruction started, just about everyone on site *knew* in real time that they were going to be counterproductive) was that, because of COVID, the effective unemployment rate was near 100% and colleges were also not operating.

Young people, with less to fear from the perceived threat of COVID, made up a large part of the initial mass demonstrations, and there was an inordinately large pool of people with free time on their hands to draw from.

And I'll note that in the TC this morning, on the cusp of May, the low temperature was 31 degrees with a predicted high of 49. I wouldn't give cabin fever too much weight, but if George Floyd had died in February instead of at the end of May, I think things would probably have played out differently.

Expand full comment

Warm temperatures definitely made the protests much easier. Add in a snowstorm instead of sunshine and there may not have been any protests at all.

Expand full comment
Comment deleted
Expand full comment

The woman who set the lynch mob on Emmett Till died today after a very long life.

Emmett was tortured and lynched at 14.

He touched her hand paying for bubble gum. She thought that was inappropriate.

Expand full comment

Most of the George Floyd demonstrators were white.

There is a lot of history behind this. It’s pretty complicated.

Emmett Till

https://en.m.wikipedia.org/wiki/Emmett_Till

Expand full comment
Comment deleted
Expand full comment

No, I saw that. I was speaking specifically about what happened in Minneapolis and Saint Paul. Scott’s piece came out shortly after the riots here. You had also mentioned how Scott had been proven correct in his piece about BLM protests making things worse. There were a couple of things in that essay that I thought were off the mark.

One being that it was common knowledge among locals that the crime wave the riots spawned in the TCs was going to make things worse for everyone. It seemed like Scott was presenting this as something controversial. It really wasn’t to folks who lived through it.

I also think you can’t entirely dismiss the onset of spring weather in Minnesota as being a large factor in the scale of the destruction. Scott sort of dismissed that part as ‘it wasn’t the cabin fever’.

I also think the scale of arson and looting in the George Floyd case set the tone for what happened in other cities that summer. Mimetic carnage ensued.

I had presented my objections to his analysis at the time but they were lost in the large response that the essay drew.

Expand full comment
Comment deleted
Expand full comment

I just wanted to add the name of the big Defund the Police advocate here.

Mariame Kaba

I've seen her argument in places like the NYT and The Guardian. I don't think any of my neighbors with BLM signs in their yard would agree with her. I know for certain that I don't.

https://en.wikipedia.org/wiki/Mariame_Kaba

Expand full comment

It's impossible for me to speak for everyone of course, but the vibe that I experienced was that a lawful demonstration is appropriate and justified, but there was a big wave of anger and disappointment when the violence and destruction started. My wife and I cringed when someone broke the glass of our local Target store and a bunch of black people ran in and came out with armfuls of cheesy ath-leisure. What the fuck were they thinking?

I think there were some stupid takes on the left that the destructive shit was justified. I think that those people were a small minority of fuzzy-headed do-gooders. They were the ones that were picked up, and again stupidly amplified, by NYT-like media. The abolish-the-police bullshit was propagated by a handful of knuckleheads, and again media like the Times ran the stuff. I looked into the 'abolish the police' essays and they seem to stem largely from one woman who is working on her Master's thesis arguing that the police are a direct result of slavery. No lady, that is just ridiculous.

Fox News-like sites picked this up and gleefully amplified it further, saying this is how all liberals think. Aren't those people living in big cities morons? They all hate you good people outside urban centers. None of that is true, but it fit perfectly with their particular narrative. Hate the libs. Watch us and drive up our ad revenue, please.

We are holding two ideas in our heads at the same time. There were definitely a handful of bad cops who treated black and brown people like shit, and we wanted that to stop, but looting and arson are definitely going to send things into a darker place. Racism is bad and so is crime.

In my neighborhood, my house and the black couple's across the street were among the few that did not have a BLM sign in the yard. The black couple know me pretty well and I don't have to send any signal of support to them. If you were to ask any of my neighbors whether they thought burning down the Third Precinct building in Mpls was justified, you would encounter unanimous eye rolls.

Expand full comment

Burning down a police station is at least a pretty clear indication that you're pissed off at the police force. Looting a target just looks like you're wanting some free stuff.

Expand full comment
Comment deleted
Expand full comment

I don't know anything about "battlespace preparation", but after being around some of my left-wing relatives who uncritically consume hours of left-wing media every day, I've noticed that they've started referring to "covid" as the source of all their city's recent problems (homeless encampments, open-air drug markets, high murder rates, rampant petty crime, etc.). If I remember right, last year they were a bit more willing to talk about the actual causes.

Expand full comment