594 Comments

It seems AI safety is becoming a culture war of its own:

https://www.jonstokes.com/p/ai-safety-a-technical-and-ethnographic

I remember this was previously considered a bad outcome, so I wonder if there are many people pulling their hair out right now.

Expand full comment

I know I'm way behind but I just read Mediations on Moloch and thought it was excellent. Has the "Moloch vs Elua" discourse collapsed down into AI alignment or are other threads still running? I'm particularly interested in finding people working on human coordination at any and all levels.

Expand full comment

I'm commenting because I'm interested in the replies!

Expand full comment

found better discussion about this in the discord

Expand full comment
Mar 29, 2023·edited Mar 30, 2023

I’ve noticed that several creators (writers, YouTubers) have started getting accusations that they used AI to write or even voice the work. I’ve think if Scott starts getting these insults that he should issue a ban.

Expand full comment

I can understand why a ban might be justified if the accusations became repeated and incessant, since that's essentially spam, and spam is always ok to ban in my book. But why ban a single accusation ? 10 ? 20 ?

It's an interesting discussion, I see no offense to Scott (he was a popular writer way before LLMs became this good), it's also one where the probablity is very tilted one way : LLMs are almost instantly identifiable by the boring, fake, politics-ese dialect of English full of caveats and empty of meaning. If Scott can say an offensive joke, he's almost certainly not an LLM, if Scott can count the number of words a sentence has, or reliably add 2 4-digit numbers, he's almost certainly not an LLM. So I'm very interested in what arguments will people who might accuse him of that will bring up.

Expand full comment

I made an edit to the post you replied to in order to make it clear that these accusations are insults. The accusation is "this post is so bad, I think an AI wrote it".

Expand full comment

Who?

I've never seen any text written by an AI that displayed true creativity.

Expand full comment

I saw this on Bryan Caplan's recent post. YouTuber TierZoo tweeted about this earlier this week.

> I've never seen any text written by an AI that displayed true creativity.

That's the point, right? You don't insult someone by saying their work is good...

Expand full comment
Mar 29, 2023·edited Mar 29, 2023

So this surprised the heck out of me: https://ustr.gov/issue-areas/economy-trade

"According to the Peterson Institute for International Economics, American real incomes are 9% higher than they would otherwise have been as a result of trade liberalizing efforts since the Second World War. In terms of the U.S. economy in 2013, that 9% represents $1.5 trillion in additional American income."

I'm stunned that number is only 9%. For as much push as I've seen (and made myself) for "free trade is good," "rising tide lifts all boats," etc, etc over my lifetime, I was really surprised the number was this low.

I mean, it's one thing to say to an unemployed Ohio factory worker "yes, you're facing personal hardship, but free trade makes us all better off," but it's quite another to say "yes, you're facing personal hardship, but free trade makes us all 9% better off."

Expand full comment

According to https://www.macrotrends.net/countries/USA/united-states/trade-gdp-ratio, US trade to GDP ratio was 10.76% in 1970, peaked at 30.84% in 2011, and was 25.84% in 2021. I couldn't find earlier years (nor could Bing Chat).

In the Peterson Institute's protectionist counterfactual, what would the most recent trade to GDP ratio be?

Expand full comment

https://www.bbc.co.uk/programmes/w3ct59qb

Some interesting technology: consensus-building online for Taiwan, budget transparency for Nigeria, ease of access for Estonia (everyone gets a deal-with-the-government number, eliminating bureaucratic friction saves 5 work-days per year per person), developing a virtual Tuvalu since the islands are likely to be under water.

I'm especially interested in Taiwan's approach, so here's a link.

https://www.theguardian.com/world/2020/sep/27/taiwan-civic-hackers-polis-consensus-social-media-platform

The idea is that democracy isn't 51% getting to lord it over 49%, it's better to look for consensus, and well-designed computer programs can help people find consensus.

Expand full comment

_What Moves the Dead_ by T. Kingfisher (Ursula Vernon) is a fine horror novel based on "the Fall of the House of Usher"-- very horrifying, very funny in spots, and of interest to rationalist because it's got some interesting speculation about a scary and probably not adequately aligned high intelligence.

Expand full comment
Comment deleted
Expand full comment

*Digger* was spectacluar.

Expand full comment

FLI published an open letter asking all labs to put a hiatus of 6 months on training all systems more powerful than GPT-4: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

I think it looks a bit like a big-tent thing, with the letter appealing to both people concerned about x-risk and short-term social disruption. The letter doesn't operationalize "more powerful than GPT-4" which seems like a pretty important issue, especially given that they call for government intervention in the event that the labs don't voluntarily accept.

Interesting signatories (IMO):

- Yoshua Bengio: Co-winner of Turing award for NN researcher

- Gary Marcus: b/c it seems to conflict with his generally deflationary attitude towards the capability of these systems

Here's an interesting point I heard Sam Altman make (I'm sure it's been said elsewhere first): Isn't short timeline + slow takeoff preferable to long timeline + fast takeoff? Imagine we stopped working on AI for 10 years but electricity and compute kept getting cheaper. In 10 years we'd be taking off waaaaaay faster than right now.

Expand full comment

Here are some manifold markets on it:

- https://manifold.markets/au_/will-flis-pause-giant-ai-experiment

- https://manifold.markets/MatthewBarnett/will-any-top-ai-lab-commit-to-a-mor

The first is at 5% as of now, but doesn't seem to have a concrete resolution criterion.

The question of whether _any_ major lab will commit to a moratorium is at 33%, and I think that makes sense because there are plenty of labs 6 months behind...

Expand full comment

When you are doing a school exam and you are "thinking" what is that like?

I was one of those weirdos who when given a 3 hour exam would finish in 45 minutes. I had friends that got pretty similar grades who took most of the time given. When I asked why they needed that much time/what were they doing, they always just said "thinking, of course!". I never really pushed for more info, because I didn't want to come across as "that guy". But it's been a decade since I graduated and I am still curious about this.

For me: I have an inner-monologue that is always monologuing. In a test, it wouldn't be doing much useful towards answering questions. Usually it would be monologuing about how uncomfortable the chair was, the temperature of the room, or keeping track of how much time has past. The most useful thing it could do is to remind me to double check for the mistakes I tended to make. When it came to answering the exam questions, it seemed that I would somehow "just know" what the next step I needed to perform was. When I was done writing that down on the paper, I would "just know" the next step. And so on. This meant that I could basically sit down and write as fast as I could until I was done.

What was it like for you? What was going on in your head when doing an exam? If you also have an inner-monologue, did it help with problem solving?

Expand full comment
Mar 29, 2023·edited Mar 29, 2023

I tended to be one of the last ones finished on exams and spent the entire time thinking pretty hard. I have a below average memory for arbitrary facts, so a good portion of time would be taken up trying to remember some detail. This consisted of turning the problem over in my head until some memory was jogged. For more conceptual subjects, I would spend a lot of time deriving equations or genuinely "solving" problems that other people would have just remembered the steps mechanically. I have a pretty good recall of concepts and relationships so re-deriving some result typically worked out well enough.

Expand full comment

A couple of examples:

(1) On a history or other "remember the fact" exam, trying to recall a name or date on the tip of my tongue. Basically the same experience lots of humans have all the time with things like "what was the name of that movie again, the one with Danny DeVito and Arnold Schwarzenegger where Arnold gets pregnant?" or "what was the name of that restaurant in NYC that we really liked? Had the big jade dragon hanging over the door and really good lo mein?" it's just that, only I'm trying to remember which of 2 presidents came first, or which character in the novel did the thing.

(2) On a math or hard sciences exam, when I run into something I definitely *didn't* study, attempting, analytically, to work the answer out myself. Even if you weren't paying attention in geometry class when they went over how to calculate the area of a rhombus or something, every now and then if you worked at it you could figure it out on the fly during the test.

Expand full comment

1) If I happened to be done, I'd go through the whole thing trying to think of ways why my answers might be wrong.

2) If I couldn't remember something like a name, I'd try to trigger my memory by, for example, internally reciting some plausible possibilities until something sounded vaguely familiar.

3) Of I were lost in a math problem, I'd try various different approaches until something worked

Expand full comment

School exams are very biased towards 'regurgitate words we told you'. Rishika offered a very good explanation of what 'thinking' looks like in the concept of 'prompting memory', but outside the school exam context I think the simplest example would be logic puzzles - even if every 'next step' is relatively obvious, it usually takes quite a few of them to get to the actual answer, and you need to store your progress either via a good memory or via writing down the intermediate deductions. A lot of real life thinking is of this form.

Expand full comment

Tests for me were nearly always the same one-shot deals; if I didn't know the answer the second I looked at the question, it's because I didn't know it at all and no amount of trying to think would change that.

The one exception was the math section of the college entry exam, about four months after the last time I'd done any math; I could distinctly remember having learned how to solve the problems, but couldn't remember the actual process. I ended up guessing on the first seven or so questions, and then the memories clicked back on and I could actually solve stuff again.

Expand full comment

I think this is a great question. It seems like you performed pretty much ideally - you remembered what you knew, and (I assume) if you didn't remember it was because you genuinely didn't know something.

I think when I'm thinking during a test, it's usually because my memory is somewhere between these two extremes - e.g. I can kinda recall learning about a particular topic, but the details aren't coming to my head, so I try to bring up relevant cues (I was in that chair, the prof was talking about this thing...) until I remember the specific information that's relevant to the question.

Or else, sometimes I've forgotten a rule or theorem or something, and my 'thinking' is basically re-deriving the theorem or trying to figure out the rule from what I do know.

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

Anyone who watches international news will be aware of the nationwide riots currently happening in France, especially Paris, over President Macron's initiative to raise the pension age from 62 to 64

Looking at video footage, it appears the vast majority of rioters are young people, as one would expect. But for most normal people under the age of 30, pensions are about as relevant as the far side of the moon. I know it was for me at that age. So what possible grievance have they with the policy when it won't affect most of them for decades?

It isn't as if it will raise levels of tax, quite the opposite (in intent anyway). So unless the rioters feel old timers retained in the jobs market for longer will compete with younger people for jobs, I presume it isn't really about pensions at all but just an excuse to let off steam and hone their barricade building and petrol bomb throwing skills!

Expand full comment

Disclaimer: I'm very much in the "we must shrink the giant pensions of the boomer" camp, to the point I'm against mandatory redistributive pensions in general (and would rather have the freedom to use the 20% of my wage that is taken from me rather than hope it'll come back in 30-40 years).

>But for most normal people under the age of 30, pensions are about as relevant as the far side of the moon.

First, they're also the one whose future earnings are most affected by a reform. Then, pensions reforms are a recurring event in French politics (because a redistributive scheme is fragile at it's core), so it stays somewhat on top of the collective consciousness. Finally, to a degree, they don't protest against the reform specifically, but against the government in general.

>It isn't as if it will raise levels of tax, quite the opposite (in intent anyway).

It will make them get taxed for longer (and won't reduce the level of taxes, since it won't reduce the pensions). When you live in one of the few countries where retirees have higher income than the active population, it don't really goes well.

Searching for a clue on the demographic opposed to the reform in protest pictures isn't really neessary, recent polls show ~90% rejection amongst the active population. The ones you saw may have been young, but nearly nobody who isn't a pensioner is in favor.

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

I can't speak for French youth, and this is in no way a policy proposal, but as a late-30s guy in the US:

When it comes to our system here, I'm all for fixing it but when people insist that benefit cuts are just totally off the table and all that fixing needs to happen on the backs of the young (whether or not I am included in “the young”) I can’t help a feeling of resentment bubbling up.

I mean, my (old) dudes, this hasn’t been a secret. It’s been a known fact for the last 30 years that the country is aging, that there are more retirees and fewer workers, etc, etc, and the program consequently doesn’t have the money to pay the promised benefits. This was all the subject of presidential debates and campaigns going back to *at least* the 1990s, and that’s just to my own personal recollection. It's not news and nobody is surprised.

And for that whole time, at no point in *your* old-dude working years did you feel any need to pay more taxes, or raise retirement age requirements for your own age cohort, or take any steps that might cost *you* something but would help ensure that the system would be solvent when you retired. You just held the course steady, promising yourself benefits that you knew you weren’t doing enough to fund.

So now the system is broke, and our boomer friends' response, having driven the bus into a ditch, is to sit in it and complain that "the young" should make whatever sacrifices are needed to dig it out for them, but only if the old can be absolutely assured that checks will continue to arrive timely and none of that digging will come at any cost to them personally.

Like I said, this is pure feels and not a functional policy prescription, but given what I’ve seen in my lifetime, I don’t think the petrol bomb throwers in Paris are throwing them “just to work off some steam."

Expand full comment

That's unreasonable. The Social Security reform passed in 1983 made massive changes. It raised the retirement age on exactly that Boomer generation, then in their 20s to 40s, from 65 to 67, with significant cuts to benefits if you retire before 67 and extended benefits if you wait until 70, and raised SS taxes significantly, as well as broadening the number of people who had to pay them.

So absolutely that generation has paid significantly more in taxes during their lifetime, and endured extra working years in the present, in order to address the needs of SS with considerable foresight -- arguably more foresight than any younger generation has shown in the past 25 years. In fact, you'll note the changes were sufficiently sweeping that they were projected to keep the program solvent, and *have* kept it solvent, for half a century, which is rather a miraculous success for legislation in a democracy, which is normally far more short-sighted. (You'll also note the 1983 reform was bipartisan, with Ronald Reagan signing off on a *tax increase* and Tip O'Neill signing off on a *benefit reduction* to a core Democratic constituency. I'd like to see the politicians produced by the current generation of voters come anywhere near that kind of level-headed compromise.)

For decades the SS "Trust Fund" built up a massive surplus, meaning the Boomers were forking out *far more* in taxes than was needed to pay for the then current crop of retirees. But of course government did not usefully invest that surplus, it just used it to hide the defict, and get people used to the government being able to run a much wider deficit than sanity would otherwise suggest -- but that's the fault of leadership in the more recent past, which has certanly been voted in by generations younger than the Boom.

I would guess that the cohort now retiring (which would have been ~27 in 1983) figures that it's time for the cohort now ~27 to make a similar choice that will keep the system running for another 50 years. Of course, it's harder now, because people sort of neglected to have children for the past 30 years or so (and fertility really fell off a cliff in ~2008), so the future looks like it will have a lot fewer workers than retirees.

But that's hardly the older generations' fault. Indeed, if you look at American fertility over the past 100 years, there's a significant "Baby Boom Echo" in the late 80s and early 90s in which that generation bumped up the fertility rate over 2.0, so they mostly did produce their replacements. It's the generations since then that have decided that 0.9 children per person is Not A Problem. It doesn't seem deeply unfair that these same cohorts, now approaching or in middle age, should have to grapple with the consequences of their collective decision to have fewer children than were needed to replace themselves in the workforce.

Expand full comment
Mar 29, 2023·edited Mar 29, 2023

*Shrug*

That's fair, the 1983 reforms didn't do nothing, but it's not like they made the program solvent. It's been known since the 90s that these issues were coming.

Here's just one example from the 1992 presidential debates, which is the earliest memory I have from when my parents (old blood conservative voters who made - and regretted - a switch to vote Perot that year) were aware of and active on the issue.

https://www.debates.org/voter-education/debate-transcripts/october-15-1992-second-half-debate-transcript/

AUDIENCE QUESTION: "Yes, I do. My name is Ben Smith. I work in the financial field, counseling retirees, and I’m personally concerned about three major areas."

"One is the Social Security Administration or trust fund is projected to be insolvent by the year 2036. And we funded the trust fund with IOUs in the form of Treasury bonds. The Pension Guarantee Fund, which backs up our private retirement plans for retirees, is projected to be bankrupt by the year 2026, not to mention the cutbacks by private companies. And Medicare is projected to be bankrupt maybe as soon as 1997."

"And I would like from each of you a specific response as to what you intend to do for retirees relative to these issues, not generalities but specifics because I think they’re very disturbing issues."

[candidate replies available in the link, if you're curious, so that this reply doesn't balloon]

So kudos to the boomers for doing something I guess, but it's been clear for 30 years that it wasn't enough, so although I can see from a practical matter of policy and politics why our leaders are reluctant to push any part of the costs of fixing social security on the current crop of retirees, acting like we're doing that out of *fairness* is just a bridge too far for me emotionally.

Expand full comment

Well, then your beef is with Gen X and whoever comes next, the people who came to political maturity since 2000. Doesn't that include yourself and your friends? Physician heal, thyself. When y'all have done something as consequential as the 1983 reform, you'll be a bit more entitled to criticize those who did.

Expand full comment
Mar 29, 2023·edited Mar 29, 2023

Did the baby boom generation just suddenly stop voting in 1990 or something? Call me crazy but I don't seem to remember that happening.

"In 1983 we fixed this program so that it was good through 2035." Cool, then they've earned all the benefits their reforms got them... right up through 2035, at which point, as everybody has known for decades, those reforms only buy you 75% benefits. After that, you get the program you pay for, they're in the same broken boat with the rest of us, and it's perfectly fair play for the youth to simply to keep the program running just the way their elders made it... complete with 25% haircut. The oldest cohort gets 25% less benefits now, and when everyone else's turn comes, they too will receive reduced benefits. That's the status quo - the pain everyone has seen coming for 30+ years, spread evenly across all age cohorts.

Now, if our elders don't *want* that pain to be evenly spread, and would like to disrupt the status quo so that their kids and grandkids shoulder all of the load *for them*, cool. Most of the kids are even happy to do it, rather than watch poor old people lose their homes and eat cat food. But let's not pretend that act of compassion to be something the "Me Generation" is just magically entitled to. I know convincing themselves they're entitled to something is a favorite pastime of theirs, but you get what you pay for, and what they bought, for every year from 1983 to present, was a program good through 2035, so that's really all they, Gen X, Millennials, Gen Z and beyond can plausibly claim to be entitled to. Fixing the car through 2035 doesn't magically entitle you to a working vehicle in 2036.

Update:

(Also, as an aside, I just checked the numbers, and in 1983 the average ages of the house and senate were 49 and 54, respectively. Which means that the "people who did the 83 reforms" weren't baby boomers, they were the silent generation, passing unnoticed once again. So I guess my original point that the boomers knew about this problem and did nothing still stands.)

Expand full comment

You've shifted your goalposts sufficiently that I'm now persuaded you are not an honest disputant. You started off saying "the Boomers did this! They knew the problem was coming and did shit!" I pointed out that people in the exact age group about whom your are complaining -- those now at or near retirement age -- had actually done as you requested, supported significant reforms at significant personal cost, in terms of money shelled out in their working lives, and a reduction in their own beneft below what their elders had received -- and your response is....but they didn't do enough! The problem has now returned! They didn't anticipate the failure of my generation to reproduce sufficiently!

Expand full comment

The best way to make SS solvent would be to give every baby an account with $1000 in it that is invested in an S&P index fund that grows tax free for 65 years. Then at the back end tax the accounts in such a way that people with max benefits get taxed 100% and a individual wouldn’t get taxed if it brought their benefit up to $1700 which is the median benefit. The lump sum at death would also be taxed progressively. So that would only cost $4 billion a year and would bring in revenue in 65 years and reduce poverty and reduce welfare costs in 65 years.

Expand full comment

Sounds good, apart from all that tax stuff! But seriously, I think you may be underestimating the ravenous greed and desperation of governments for revenue, and 65 years is an awful long time to trust them not to dip into this tempting pot, as the British Labour government did with pensions when Gordon ("No more boom and bust") Brown was running the show!

Expand full comment

It’s essentially a way to privatize SS on the cheap. Anyone with maxed out SS isn’t going to care about another $1000/month from SS because they will have other savings. Plus the tax revenue will also make Medicare solvent as premiums can be increased and Medicaid expenses reduced and wealthy Americans live longer and thus receive a bigger benefit from Medicare than poorer Americans.

Expand full comment

The self-interested voter hypothesis doesn't tend to explain much political behavior, either electoral or rioting. They're rioting for the "country," "justice," "something bigger than myself," blah blah blah.

Also, in many European countries there is a strong sense of left-wing working-class identity among native Europeans that doesn't really exist anymore in America. The pensions reform will harm people in the working class, so if you're in the working class you feel you've gotta go do something about it.

Expand full comment

I suppose there's also the issue that those in menial jobs will have started employment years before, say, a graduate. So a fixed retirement age is unfair anyway, because it means they have had to work longer, and extending that time is adding insult to injury.

Expand full comment
founding

This. Voting is about signalling tribal loyalty, not securing financial advantage. Even more so with protesting. Any particular young Frenchman will get exactly the same pension at exactly the same time whether they support or oppose Macron. But there's real value in being part of the "we don't want to spend our lives as Wage Slaves to the Elite" tribe.

Expand full comment

Yes, and they view Macron as a neoliberal twat who only ever serves the upper class.

Expand full comment

"unless the rioters feel old timers retained in the jobs market for longer will compete with younger people for jobs" -- why do you say "unless"? This is what young protesters are telling news reporters when asked, and it is rational from their perspective. Particularly if they believe, as again they are saying, that this 2-year hike in the pension age would not be the last. (That second part explains why such a huge public reaction against just a 2-year hike.)

Expand full comment

If you have read Unsong, especially the Broadcast, and have felt that that changed the way you relate to the gargantuan amount of evil and suffering in the world, how has that done so? What are you doing differently in your life?

I think the book gives us four responses to the problem (with two halves to each response – the Blakean parallel thing): 1) the Comet King/Robin West: "somebody has to and no one else will", 2) Aaron Smith-Teller/Ana Thurmond: intellectual revolution or "Marx didn't hand out flyers either", 3) Dylan Alvarez/Erica Lowry: go berserk consequentialist, and 4) Uriel/Sohu: disaster relief or attending to the broken infrastructure of the world. Sarah and THARMAS don't count.

Which do you think is the best response, and which do you think Scott was advocating for in Unsong? Are Comet King-style plans a good idea?

Expand full comment

I really want to work on reducing suffering. What do you think is the best way to do so? Is EA a good community to work on this in? Does anyone have advice for starting on big projects? A reading list?

Expand full comment

EA is the way to go, as they're intensely concerned with suffering. There are EA virtual courses to learn the basics (https://www.effectivealtruism.org/virtual-programs), or you can do the reading yourself (https://forum.effectivealtruism.org/handbook).

While career is important, you should also take the plunge and start giving 10% of your income to effective charities (https://www.givingwhatwecan.org/pledge). Scott does it after all. The pledge comes with a calculator for how many lives you will save over your career by doing it.

Expand full comment

Well, the thing about the Broadcast is that it's impact depends on Hell actually existing. Unsung does a great job of showing how much evil is in the world, and provides amazing examples of moral exemplars. But I'm not shaken to my core because there is a lot more good in this world than the hypothetical Hell of the broadcast.

Comet King is of course the person I'd like to be like, however I rarely find myself with such courage.

Expand full comment

Hmmm. Our world has a lot of suffering as it is. I think Hell exists, but it’s emergent across the collected suffering of everything alive. Factory farms. Cancer. Malaria. Slavery. And then the kind of duller pain found in the tedium of life. While there is a lot of good in our world, I don’t think it meaningfully outweighs the suffering – or that they can be compared. We don’t have an equivalent to the Broadcast, but I think if you were somehow able to meaningfully conceptualize all the suffering, you would be as shaken to your core as had you seen Unsong’s Hell.

Expand full comment

Has anyone found a reliable way of distinguishing human-written text from text written by GPT-3.5/4?

I used to find that human-written text was generally easily distinguishable because AIs couldn't stay on topic for extended periods, but that's clearly not the case anymore. AIs can still be induced to make basic reasoning errors that would be unlikely for a human, but it takes some work to get them to do that nowadays; it's not just something that they do by default.

Expand full comment
Mar 28, 2023·edited Mar 29, 2023

Perhaps there might be some robust way GPT could "watermark" longer text screeds, with a subtle word combinations that would be unnoticeable to human readers but would enable a check (via GPT) with certainty or high probability that it wrote the text.

A certain combination of trailing spaces on text lines might be one approach, but that would be useless in browser displays where the text could "flow", with lines being rearranged. So it would probably need to involve word or punctuation or sentence length choices.

Expand full comment

I believe an OpenAI employee gave a talk in Austin in the fall and I read the transcript online -- and one of the ideas was similar to what you said:

When the AI is choosing the next word of a sentence, there may be several possibilities that have approximately equal "scores" for what word should go next. In theory, the AI could apply a non-random rule to making that choice, which would act as an invisible fingerprint that could be detected later in strings as short as 4 words.

Once I read that, I made sure to caution anyone I know IRL using ChatGPT to assume that a tool could be released in the future which would identify generated text. Don't use ChatGPT to help write the "bones" of an essay or article or email if you don't want that to be discovered later.

Expand full comment

It's a clever idea, but the problem I see with it is the obvious incentive an AI developer has to offer a product that does not include such fingerprints and therefore evades that type of detection. And all they have to do is not put that rule in.

If I had to make an AI detector, I'd design it to exploit a vulnerability that an AI developer would have to go out of their way to avoid, rather than one they avoid by default.

Expand full comment

This is not reliable, just a gut impression--but there's something characteristic about how answers seem like little essays. Each paragraph addresses a different point, on topic, but so self-contained that it comes off like a non-sequitur.

People do this too. Sometimes I do it. I think the problem is not so much distinguishing AI from people, as it is distinguishing people from AI.

Expand full comment

If anyone read Peter Ziehan's 'The End of the World is Just the Beginning' and got real worried and thinks we're all doomed, I might have a cure for what ails you. I just wrote up my thinking for why his argument is wrong and the global order is not about to fall apart: https://medium.com/@bobert93/contra-ziehan-on-the-world-being-doomed-3f94368314c0

Expand full comment

It's Zeihan, not Ziehan.

I agree with your general thrust. Zeihan is directionally correct in his analysis but assumes long run adaptation won't happen. Which it probably will. Societies are confronted by crises and then adapt or fall. While I think he's pointed out a lot of looming issues he just assumes that people won't be able to overcome them.

That said, I think your specific ideas of what's going to solve the problem (particularly the idea crisis will be averted entirely) is too optimistic.

1.) The idea that AI is going to come and radically transform the economy is very much not in evidence. While I think AI will have some applications I do not think it will replace human workers as quickly as it would need to in order to counteract these effects.

2.) Immigration might help but it's ultimately a limited resource. Further, while the US (and much of the Americas) is very good at absorbing immigrants, this is a societal skill. One Europe does not have let alone places like China. Zeihan brings this up in interviews at least (and iirc in the books): there's a limited number of people who want to move, most of them prefer the US, and they generate more social dysfunction in places that aren't the Americas.

3.) The idea that China is going to force the US to make the same kind of compromises in the Cold War forgets the Cold War. This is something Zeihan points out in the book. Getting the US involved required concerted effort on the part of the European allies including major concessions. Zeihan's argument is that Japan and some East Asian countries have already made such concessions and so will probably not be abandoned. But he thinks Western Europe is refusing to (with the exception of the UK which, in his mind, the US can separate from Europe if it wants to).

4.) PPP adjusting GDP is a bad thing to do in an international competition. PPP attempts to adjust for a basket of regular goods (and does not include military goods). It's meant to acknowledge that while Indians might make less than Americans the rice they buy is also cheaper. But in a competition this is irrelevant. Nominal rates are more relevant. The argument we compete with PPP is basically the argument that China will beat the US because, while the US has more money, China has cheaper toilet paper.

5.) The USSR peaked at 60-70% of American GDP and was also higher in PPP (insofar as we have guesses). So the idea this is unprecedented isn't really true. It's almost exactly the situation of the 1970s/80s right before stagnation took over followed by collapse. I think Zeihan is being a bit simplistic in expecting a similar repeat. But he's right this does look an awful lot like that with bad long term trends leading to what looks to be a stagnant period.

Further, your comparison of economic growth to population ignores his point. Societies rely on workers to use capital generated by previous generations who use that capital to save up for eventual retirement. The issue is not simply the ratio of workers to non-workers but that as these workers retire that money gets shifted from investment to consumption. This means there's less capital to support growth and simultaneously you need greater extraction from labor to support retirees (which disincentivizes labor). This problem is actually worse in China due to a weaker social safety net and a reliance on low to mid productivity labor. Productivity growth needs to boost both wages and returns on capital enough to make up for this shift which is FAR more than 2%.

I think Zeihan is broadly correct the world is structurally heading for a crisis and the US looks to be least affected. But his analysis is very "line goes up." He assumes that people won't react to trends which is the big hole in my opinion. For example, if you know China's heading toward economic stagnation and demographic decline you could then go through a menu of options and see which are most likely. But he just kind of skips over this and assumes there's no plausible reaction.

Expand full comment

Your post suggests that a 50% decrease in population requires a 50% increase in productivity to stay at the same output, but that is wrong - you need a 100% increase in productivity. Half the people requires double the output per person, not 1.5x.

If you account for compounding, which does help some, you need an increase of 2^0.04 per year for 25 years: 1.028, or 2.8% annual growth.

Expand full comment

yeah that's right, but a halfing of a generational cohort doesn't mean a halfing of the working age population. That would take twice as long to half, as it contains approximately two generations. E.g. Millennials and Gen X are currently working generations.

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

It seems inevitable to me that, at the present rate of progress, AGI will come to fruition sooner or later, and in most respects it will inevitably reflect human nature in character, with humans' flaws and virtues.

So I reckon the most likely way to maximise the chance of AGI safety is to ensure that it is an "average" ensemble of attitudes distilled from many human minds, besides its superior and faster intellect of course, and not trained and based on one or a small group of humans whose attitudes and ambitions were abnormal and quite possibly in part pathological. So in short, when training an AGI, democracy should be the watchword, safety in numbers!

After all, serial killers or doomsday cultists, or others with insatiable destructive passions, are very much a minority, and on average people are mostly fairly laid back and content, albeit of course with self-survival instincts that might be worrying when incorporated in an AGI, and base instincts which most of us share in varying degrees, such as greed or lust, are not applicable as AGI attributes.

Expand full comment

Recently there's been a lot of discussions of AI risk due to the explosion in LLM development, especially by EY and other AI Alignment people. Which of the following statements is closest to being true about these discussions?

1. Alignment people genuinely believe that GPT-X (or equivalent LLM-based models) can lead us directly to AGI and are thus ringing the alarm bells before it's too late.

2. They don't think GPT-X will lead to AGI but think we're setting up a bad precedent by moving so quickly with LLMs and therefore sound the alarm to setup a precedent. This doesn't matter for GPT-X type of tech but would matter for some other, yet to be discovered, technology.

3. The explosion of interest is a great opportunity to obtain funding for AI Alignment research, so they're ringing the alarm bells primarily as a fundraising opportunity

4. No one knows whether or not LLMs are actually dangerous and there's no deep strategizing going on in the background. All the reactions are just standard instinctive reactions to major AI developments.

I'm leaning towards #2 for highly knowledgeable people such as EY and #4 for people who only have cursory knowledge about the problem. What's the real answer?

Expand full comment

I think there are two questions, "why" and "why now".

I think #4 is more true than #1/#2; nobody knows whether GPT-N can produce AGI, since we haven't gotten a clear "no" and the only way to get a clear "yes" would be proof-by-pudding which probably means we all die (and it would be stupid to take that chance). This is the "why".

I think #3 is false. The "why now" is not so much that there's *funding*, but that there's *political will*. Funding is all well and good, but there's a strong suspicion among alignment people that neural nets *cannot* be aligned. If this is true, then the only way to save the world is to stop anybody building neural net AGI until and unless we have aligned GOFAI/uploads/superbabies. "Don't do this" does not yield to funding. It does yield to legal force (i.e. bans enforced by police) and, on an international level. military force (i.e. if someone in a rogue state builds a datacentre anyway, we blow it up with air or missile strikes) - both of which require political will.

Expand full comment

I assure you it is mostly 1 and 2. For me it's 1.

Expand full comment

(To be clear, I'm answering on behalf of "alignment people" which is only like 1000 people total depending on who you count. Arguably more like 100. If you broaden scope to include tens of thousands, even hundreds of thousands, of people, many of whom are just starting to think about these issues thanks to ChatGPT, then yeah 4 is a big part of it.)

Expand full comment

It varies, but 2 is most likely, especially for EY.

He says he literally wouldn't even know how to use millions of dollars on the margin to make AI-doom meaningfully less likely, and says that Chat-GPT is not going to kill us all but it is leading to billions of dollars be mindlessly invested in AI development and large companies are racing each other to get systems developed and to market, which precludes the possibility of proper alignment work being done on them in time.

I think some people are #4, but because they are directionally correct and the issue is so important, I don't think it really matters. If political regulation of this tech would help, and getting more people worried about AI is needed to get the political will to do this regulating, then 4 is the best we can hope for. We aren't going to get people to thoroughly understand the Yudkowskian case at scale.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

There's been lots of discussion and pieces written on LLMs lately, so let me throw mine into the mix. I respond to dismissive criticisms and offer a positive argument in favor of LLM understanding. I also talk about some implications for humanity and society. There's also good information added in the comments.

https://www.reddit.com/r/naturalism/comments/1236vzf/on_large_language_models_and_understanding/

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Like many others, I've been reading and thinking a lot recently about AI doom scenarios. I find a lot of the claims made in these doom scenarios about how the AI escapes control or exterminates humanity intuitively implausible, but I wanted to see if I could clarify my thinking on these and form a better idea of how likely different scenarios are. This led me to think about what capabilities a general superintelligence (henceforth GSI) would have and how it could affect progress in various areas. I don't have a blog or anything and it's not a lot anyway, but I wanted to share what I came up with and this seems like a good place for it.

By general intelligence, I here mean the capability to grasp arbitrary information and draw inferences from it. For instance, learning the rules of chess doesn't take much intelligence, nor does knowing the locations of the pieces in a given game state. But being able to infer from these locations what is a probable winning move takes more intelligence. The more intelligence you have, the stronger the moves you can find. You may find these best moves by being able to "roll out" many long sequences of moves, or by developing better heuristics about what moves are good in what situations; either way, we'll call this intelligence. GSI is just a matter of substantially greater degree. In our chess example, a GSI would be able to consistently find much stronger moves than any human player analysing the same board, after a comparable amount of experience with chess. By definition, this capability extends beyond chess to any problem we might imagine. There are legitimate questions of whether truly general intelligence is possible, or whether advancing narrow intelligence past a certain point requires sacrificing generality, but for the sake of this post I'll assume that it is, and it doesn't.

However, intelligence is only one factor in solving problems. Two others are data and power. Chess is a kind of problem that is bottlenecked by intelligence. Both players have access to the same data (the full state of the board) and the same power (the set of available pieces and the moves they can make with them). We could change this, adding a power bottleneck for one player by giving them only a king and the opponent a full set of pieces. In this case, GSI will be of little use - even a relative novice could beat Stockfish most of the time in this scenario. Or we could add a data bottleneck by hiding most of the game state from one player, maybe showing them only the locations of their own pieces.

So I can speculate about which factors (intelligence, data, and power) are the bottlenecks in various areas or specific problems, and this may give us a sense of how much help / danger a GSI would be in those areas. Of course I acknowledge that these factors often interact - we can sometimes use power to obtain data, or intelligence to obtain power, etc. Hopefully others can share their thoughts and correct obvious errors or blind spots in the below.

Fundamental physics: right now, it seems to be mainly bound by data / power. We have plenty of theories about how to unify quantum mechanics and general relativity, but the experiments needed to test them are way beyond our physical reach. We would need far bigger accelerators than we can build or power, for example, to gather the needed data. So we should not expect progress in physics to be accelerated much by GSI.

Microbiology & biotech: Here there is ample data and plenty of power to conduct experiments. But biological systems are incredibly complex with many moving parts; progress is plausibly limited by the ability of an individual biologist to hold these parts and their dynamics in their head. So GSI may accelerate this a great deal.

Nanotechnology: Unclear. Potentially GSI could accelerate progress a great deal, if experimentation could be automated and made to take place very quickly. But depending on the application, experiments might necessarily be quite slow to conduct and observe the effects. Also, the physical limits of what is possible here are largely unknown, and may prove to be very limited. Are remote-controlled diamondium nano-assassins alluded to by Yudkowsky even possible in theory? We can only guess. Still, this uncertainty should give us reason to worry.

Psychological control: Here I'm talking about the ability to manipulate an individual person's actions by observing them and communicating normally with them, without any kind of brain-machine interface. This one is relevant to the likelihood that a "boxed" AI could persuade its handlers to release it. This strikes me as being heavily data-bound. Only limited and noisy information about a person's inner state is ever available, so most relevant data is hidden, and the controller's power through the slow, coarse method of speech is more limited still. And on top of that, minds appear to be chaotic systems, like the weather. These systems defy prediction because of their extreme sensitivity to starting conditions; even with a perfect simulator, a tiny error in starting data can throw predictions completely off. The purported outcomes of a handful of online role-playing games (https://www.yudkowsky.net/singularity/aibox) notwithstanding, a GSI probably can't do much better here than the most adept human manipulators. Of course, that means it's far from impossible. But given a savvy subject, I think it would remain very difficult.

Political control: Here I mean the idea that a government with access to a GSI, or a GSI in a position of political power, could "lock in" its regime for all time by essentially out-gaming any internal threat to its hegemony (we'll ignore external threats here). For essentially the same reasons as in psychological control, I think this is fundamentally data-limited: a polity is also most likely a chaotic system, so increasing intelligence will tend to yield rapidly diminishing returns.

And that's all I've got so far. I'm very interested to hear other people's thoughts and critiques.

EDIT: I just saw someone posted this link in an earlier comment: https://betterwithout.ai/radical-progress-without-AI A quick look indicates this covers similar ground in much greater depth. I'll have to give it a read.

Expand full comment

I think at Line #1 it would be very useful to distinguish between deductive and inferential intelligence. Discovering the best move for a chess position, or finding the roots of a 5th order polynomial, are exercises in deductive intelligence: you know the rules, there's a rigorous algorithm, you follow it scrupulously and you will arrive at the answer. This is the kind of intelligence demonstrated by a pocket calculator, albeit in a very restrictive environment. The general success of computer programming is due to the fact that computers can follow algorithms very, very fast, and make no mistakes. So, deductive intelligence to a high order.

But I suspect most people consider "general" intelligence, of the kind humans credit themselves, as more of the inferential variety. When you observe a huge mass of data, can you imagine new algorithms that would predict or explain it, starting from a small set of assumptions, containing a few number of parameters? Can you deduce the rules of chess from observing some games? Can you suggest new rules for chess that would fit together well with the existing rules, and create a new game? Can you invent chess from scratch? Observing many relationships between numbers in the real world, can you invent the concept of a polynomial, classify them usefully, gain an intuition of when solving a 5th order polynomial would be useful and when it would not be? These are all the tasks at which human beings excel -- it's what makes us the programmers, not the programmed. We invent ideas, concepts, a structure to reality which simplifies perception, cuts through irrelevancy, can compensate for missing or noisy data, makes investigative effort efficient.

Computers have so far been pretty incapable of this. I think the excitement about LLMs (and neural net models in general) is that they *have* demonstrated inferential ability -- the ability to infer patterns in a large mass of data, e.g. to discover patterns in human speech without having the rules of grammar and meaning hard-coded in them, and to be able to generate new speech that fits well with the patterns (much like after you fit a 5th order polynomial to data, you can go on to infer the values of data points not part of the original set).

Optimists think there's not a lot of distinction between the universe of human writing and the univers of human imagination. If a few billion parameters suffice to encode most of the patterns of human writing, surely a few tens of billions will be sufficient to encode most of the patterns of human thought! Immediate descendants of LLMs will be able to infer, say, the pattern in human political and ideological beliefs from an examination of history, and be capable of coming up with new political parties and new philosophies of meaning on their own, by extrapolating the patterns of those we have already invented.

Pessimists think this is silly, that the universe of possible ideas seems a priori infinite, and infinitely-dimensioned, and even if we restrict it to ideas that human beings are capable of having, it still seems very, very, big, and very, very highly dimensional -- far more so than speech itself, inasmuch as new speech often has to be invented to accommodate new ideas -- and success in infering the patterns of speech says boo about the ability to infer the patterns in ideas, even merely human ideas. A lesser criticism is that even if those patterns exist, it seems a bit doubtful that you can infer them in a reasonable amount of time -- i.e. ~500 million times faster than human thought evolved -- with the data economically available, in a blind steepest-descent training regimen.

There's no way to know a priori whether the optimists or pessimists are right, since neither argument is founded on any kind of strict deductive logic, which we could prove by math if necessary, so we just have to wait to see if the optimists can prove their correctness by demonstration, or the pessimists win the day (for the moment) by the failure of talking AI to do more than remain a parlor trick for the next half century.

Expand full comment

I think that the discussion of AGI-Foom existential risk some years ago was based on some assumptions that, with current information, might not be entirely true. Namely, back then it seemed (also to me) that the most likely path to powerful AGI would involve (a) an agent trying to optimize some goal about the real world, and (b) self-optimization.

Now when I look at GPT-4, it seems plausible that we'll be able to fulfil our commercial goals with something that's fundamentally different from that - the risks which apply for a system trying to achieve any arbitrary goal (like the proverbial paperclip maximizer) don't necessarily apply to a system which doesn't even have a concept of goals (as desired or less-desired states of a world) or reality (the difference between an imagined world and our one); and also handing over self-improvement to a goal oriented GAI (when the concept of value drift/value stability becomes critical) was IMHO intended as the way how we would solve for lack of capabilitiies we want but are unable to implement ourselves; but if it somehow turns out that those capabilities are just "low hanging fruit" achievable by throwing the required compute amount for that, then the motivation to design something powerful but self-improving mostly disappears.

Expand full comment

Hi Scott (or anyone who takes Scott's position in The Media Very Rarely Lies). I'm sympathetic to your position on the media, but...

I was thinking the other day about fictional depiction of real events. The Crown (Netflix) in particular has come under criticism in the UK for mixing fact and fiction. Two questions: does this count as "the media" and does it count as "lying"? Is it the media? Netflix is also in the documentary game, there are some Diana documentaries on there. Many companies produce both journalistic content and fictional content. Is it lying? Pure fiction isn't lying but sticking alternative facts into a supposedly true story looks awfully like lying to me. And when presented alongside actual journalism on the same platform, it enables viewers to jumble up fact and fiction in the desired way, with plausible deniability for the company (because the documentary obeys journalistic law and professional standards, and the fictional account is just a fun story). Am I being unfair?

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

I do worry a bit about the sort of “alternative history” that has just enough verisimilitude to convince people it’s the true story but just enough made up stuff that it badly skews the average viewer’s under of the event.

The reality is that most people’s view of history, if they have one at all, is probably shaped more by popular media than by serious documentaries. Partly just because movies are a lot better at giving a visceral sense of the event - no Wiki article or dry film reel is going to shake Titanic’s depiction of the sinking, or Saving Private Ryan’s depiction of Omaha Beach (which are both actually reasonably good, but aren’t perfect).

I haven’t watched the Crown so I have no idea if it lands in this uncanny valley or is more blatantly a historical fiction, but it’s certainly possible.

Expand full comment

To summarise my evil plan for global mind control:

1. Build website devoted to serious journalism. Win pulitzer prizes, establish reputation as an impartial, trusted source of news.

2. Develop a sideline for fictional versions of the same events covered in my journalism. In addition to well written, accurate depiction of events, I just plain make stuff up that never happened. When challenged, I say "Cheer up, it's just a story"

3. Build an algorithm encouraging viewers to move promiscuously between fictional & journalistic content ("Did you like The Interview? You might like The Korean War and its Aftermath").

4. Build an army of simpletons capable of reading serious journalism with the correct emotional key, developed through watching the fictional content.

5. If smart people raise any objections, stress the differences between journalism and fiction, and reassure them people are smart enough not to be taken in by silly stories. Make sure to compare any government attempts at controlling misinformation to Orwell's 1984.

Expand full comment

"Opinion" pages beat you to the punch decades ago, I'm afraid.

Expand full comment

Opinion pages are a good example of what I'm talking about, but I don't believe you can say "Prince Charles and John Major discussed overthrowing Queen Elizabeth II" in an opinion column, because that would be lying!

Expand full comment

If you write is as "various sources have long posited that Prince Charles and John Major discussed overthrowing Queen Elizabeth II", you're fine in an opinion column. And 95% of readers won't internalize that as any different from a factual statement.

You can't get away in an op-ed column with stating that the sun rises in the west or some other directly-observable fact. But that's about it.

(I'm a recovered newspaper reporter and formerly a lifelong newspaper reader, and have a sibling who was a reporter and editor at one of the largest daily newspaper in North America. So I am saying this perfectly seriously from a good deal of direct knowledge of the field.)

Expand full comment

Interesting. Libel laws are stricter in the UK I believe but this is not my area so I'm not going to go out on a limb. I basically agree with you about opinion pages anyway.

Expand full comment

Much stricter, especially when the subject is a public figure; my lay understanding (IANAL) is that public figures in the US have to just suck it up & deal the vast majority of the time.

Expand full comment

I'm not sympathetic to Scott's position, because he's giving lies of omission too much of a pass. What he should have said is that all mass media is house media for somebody, D party house media is putting the case for the D party line, R party house media puts the case for the R party line, it's like prosecutors giving a different view of the accused than defense. Social media can tell the truth or lie, mass media is always propaganda.

Lies of omission are still not the truth, the whole truth, and nothing but the truth.

In the confident hope that he will change his position, I add that everything about the British Royal Family is show biz kayfab. Bagehot, ornamental function of government, all that.

Expand full comment

It's not just lies of omission that are being excluded, but significant amounts of information (most?) that are being excluded from that definition: denotation, syntax, rhythm/agoge (the last of which is somewhat understandable since it doesn't get conveyed well in print).

Expand full comment

Even if wrestling is staged, you can still tell lies about how it was staged. Hard to say precisely how the royal family or other showbiz gossip really matters, but it isn't trivial. Most people aren't interested in party politics, but everyone in the UK had an opinion on Charles & Di.

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

This is probably true in a moral sense, but how would you define a lie of omission? All the definitions I can think of are variations of “you can’t omit strong arguments that whatever you’re saying could be false”. But then someone has to decide whether an argument is strong. A “reasonable person” test won’t work here if the subject is political, because how strong an argument appears to someone depends on whether they agree with its conclusion.

In rare cases, the media lies by omission in ways 90% of people on both sides would think are wrong. It would be nice if those stopped, but I think the problem wouldn’t go away if those rare cases were fixed. (I say “rare” because I doubt you have or can easily find an example from 2023 from one of the sources on page 18 here https://knightfoundation.org/wp-content/uploads/2020/03/KnightFoundation_AccuracyandBias_Report_FINAL.pdf, which gets 9 people agreeing it’s bad for each one saying it’s fine if you post it in the next Open Thread.)

Expand full comment

I would define lies of omission as statements that fail to give the truth, the whole truth, and nothing but the truth. This is a moral point, but also practical and forensic. Witnesses in court every day swear to do this, and many keep their word. That is why the court system works, when it does.

I don't expect this from mass media, because I expect all mass media to be house media arguing as advocates.

Social media, where one speaker says what they think, tell the truth or lie, by omission or commission. It's on them either way. They have no duties to their employer as house media. They are liars or honest, smart or dumb, informed or ignorant. It's them, the night, and the music. Smart liars agree with Flashman that 'Suppressio Veri is a useful servant, while Suggestio Falsi is a dangerous master'. Smart honest people often fail to give full respect to strong arguments against their position, from weakness of mind. Been there.

Strong-minded honest social media speakers are vital.

I don't share your respect for polls. That's on me.

Expand full comment
Mar 28, 2023·edited Mar 29, 2023

From Wikipedia on perjury: “ Further, statements that are facts cannot be considered perjury, even if they might arguably constitute an omission, and it is not perjury to lie about matters that are immaterial to the legal proceeding. Statements that entail an interpretation of fact are not perjury because people often draw inaccurate conclusions unwittingly or make honest mistakes without the intent to deceive.”

The courts don’t define “the whole truth and nothing but the truth”. How would you?

Expand full comment

This is important but I'm not even sure "lies of omission" is the only issue here - John Major has accused Netflix of introducing completely made up events in the Crown:

https://www.bbc.co.uk/news/entertainment-arts-63283024.amp

Expand full comment

In Scott’s categorisation, it’s clearly not “lying”. The audience knows it’s a partly fictionalised account. You know it’s got made-up things in there and that they’ve probably distorted some of the characters, and probably so would a “reasonable person”.

I think Scott is talking about “lies” in terms of evidence you could or couldn’t use to justify a position. If you were having an argument, and someone asked for evidence that a particular person had good character, you wouldn’t mention anything from the show, because you know it’s fictionalised. I think Scott’s broader point was that reasonable arguments are still possible between people from different media bubbles. Once you strip out the evidentially meaningless opinion, what’s left is generally true, and in an argument you’d have to skip over the opinion anyway. Similarly, a critical person could extract true facts by listening to a news source with the opposite bias.

Obviously, this still allows all sorts of distortions without lying, as proven (under Scott’s model) by how many disagreements there are. But a critical reader would know not to depend on The Crown on any point of fact, so it’s not making any factual claims which could count as “lying” for such a person.

Expand full comment

I suppose it's not 'critical readers' we're worried about when discussing misinformation, right? If someone is telling a barefaced lie, critical readers will have better defences against that than the average person, and the same goes for misinformation, including the kind that maskerades as fiction. (I enjoy the Crown, by the way).

What would you say about, say, JFK? Fiction, but clearly designed to communicate a non-fictional proposition I.e JFK was murdered by the CIA

Expand full comment

Scott’s point is that it’s not misinformation you’re worried about when we say the media is biased.

Expand full comment

Which I broadly agree with but then I see Netflix literally inventing facts about the royal family, albeit in a fictional context

Expand full comment

If you like, the show is a symptom of the more general freedom to be biased. I would expect that removing literal lies (or adding a “this didn’t happen like this” bar to the bottom of the screen during the made-up bits) wouldn’t remove the show from Netflix, but even if it did, removing the show from Netflix wouldn’t solve the bias problem either.

Worrying about this type of show feels to me like worrying about media bias by people whose names start with Q. It’s a problem, but I don’t like any of the solutions that work for it but not for the bigger problem.

Expand full comment

If the show is a problem at all (possibly it's not, I chose it because the stakes don't seem that high in this particular instance), bias is not the problem - making up events about real people which the people involved deny ever happening is the issue (see BBC John Major link in another post). And that seems like a counterexample to the general proposition that the media rarely lies. I'm happy to hear feedback on whether this is genuine lying or whether Netflix counts as the media but I'm not sure anyone's really done that yet.

Expand full comment

I'm not familiar with The Crown, but it comes down to how it's presented. If it's presented as a documentary, but is actually fiction (or partly fiction), that's dishonest. If it's presented as fiction, it's fine.

Expand full comment

I wouldn't count it as either The Media, or lying. The guy in the BBC shot you linked to was one of the stars of The Wire, among other movies and shows, and it sounds like Dame Judi Dench is in it too. You wouldn't hire recognizable, professional actors for a documentary. Movies and TV shows are fiction first, and any facts in them are just bonuses.

Not the first show to lie about being true: https://creepycatalog.com/true-story-movie-fargo/

Expand full comment

Fargo is an example of how playing with truth can have deadly consequences. I love the movie, but someone went to find the money IRL and froze to death.

Expand full comment

I think the way people are viewing 'content' now is going to muddy your distinctions there - if I watch the Crown on Netflix, and the algorithm suggests I might also enjoy a Netflix documentary about Princess Diana, I am experiencing a continuous stream of content about the royal family, some fiction, some fact, and the effect is confusion. Some people can unpack that, some can't. But even if you can unpack it, I think the fiction creates subtle biases, within which the serious journalism is then experienced.

Expand full comment

Here's GPT-4 playing go. It loses pretty badly. It plays pretty well though, y'know for an LLM that was never explicitly trained to play go.

https://ethnn.substack.com/p/i-played-a-game-of-go-against-gpt

Expand full comment

"pretty well" is a very kind way to put it, even with a qualifier. It doesn't seem like it could beat any human who knows the rules at all.

Expand full comment

A question about a possible GPT3/4 use case:

I'm learning German and am struggling to find media to consume that is at the sweet spot of competency for me where it's easy enough to read/listen to but challenging enough that it's stretching my abilities.

I'm wondering if I could feed my vocab list (somewhere between 800-1000 words; more if you count tenses and declensions) into GPT and ask it to write me short stories that mostly used my vocab and limiting to 5-10% new vocab.

Is this something that GPT would be decently succesful at?

Expand full comment

Oh, and GPT can also generate grammar exercises for you.

Expand full comment

Yes, I've used it a bit for that although I've been careful since it gave me some incorrect answers when I asked it for possessive pronoun exercises.

Expand full comment

Not a GPT suggestion, but I find reading or watching the news to be an excellent thing for my mediocre French. Newsreaders enunciate clearly, use proper grammar, and don't use slang, and there's often plenty of context clues to words you don't know, but they'll also use a wide vocab so you're going to frequently encounter new words to learn.

Expand full comment

I've been playing with things like this. You don't need to supply a word list, it might ignore the list anyway, and it's good to get an occasional new word when training comprehension. You can use a shorthand for difficulty, like "give me 5 sentences at hsk 4" for Chinese, but you could use toefl or cefr or whatever the big German test is. 99% of these sentences will be right, do volume and don't dwell on weird ones and the errors will wash out over time.

Spend more energy trying to make its stories interesting, give it fun scenarios or topics you find interesting.

You can have it define all rare words below the story.

Gpt is particularly bad at explaining nuances of meaning though. Can I use this word in that context? Even in extreme cases it will fail. Maybe even like "Can the word 'hat' also mean 'horse?'" It will prioritize agreeableness with "in some contexts maybe" unless you heavily prompt engineer it. Similarly with etymology it will often wave off and say nobody knows the origin of a word and there are many plausible possibilities when there is like one main theory and one widely discredited folk etymology.

Also I had it critique my composed sentences and the results were very bizarre, full of hallucinations. It play acted a teacher giving feedback, mix of praise and critique, but its suggestions were completely incoherent and referred to things I didn't write. Like "your mistake here was using the wrong preposition, 'ball' would have been a better choice" in response to a sentence without any propositions that was not about any balls.

Have it write idiomatic sentences or paras, it's really good at that. Conversation partner might work. Have it explain a confusing sentence might work in most cases but is a bit risky. Other language coaching skills are not playing to its strengths.

Expand full comment

More generally look for graded readers or sites with graded content like LingQ at this stage. Are you using a premade sentences deck in anki?

1000 is a great milestone but challenging psychologically, you have learned a lot but it's still really hard to apply that knowledge to content. Don't give up looking for appropriate content like graded readers but also don't give up on pushing to expand your base.

Good luck.

Expand full comment

Me>Write 5 example sentences in German using only the most common 1000 words. Use a plain style. Use only words from this list, or inflected forms of them:

[list of most common 1000 words]

GPT-4>

Die Kinder spielen im Garten und haben Spaß.

Der Hund läuft schnell durch den Park.

Wir essen jeden Abend gemeinsam zu Hause.

Der Lehrer hilft den Schülern beim Lernen.

Das Wetter ist heute schön und die Sonne scheint.

Me>Using only those words, write a 5-sentence introduction to Scott Alexander in German.

GPT-4>

Scott Alexander ist ein Autor, der viele interessante Ideen teilt. Er schreibt oft über Wissenschaft, Philosophie und das Leben. Seine Texte sind für jeden leicht zu lesen und zu verstehen. Viele Menschen schätzen seine Arbeit und folgen seinen Gedanken. Scott Alexander ist ein Beispiel für jemanden, der die Welt durch seine Worte beeinflusst.

Can any German speakers confirm that the grammar is accurate?

Expand full comment

The grammar is good.

I don't like 'Beispiel für jemanden' in the last sentence (I would use 'Beispiel jemandem', which is less ambiguous), but that is me being cranky and old school, the former usage is normal since at least 1980.

Expand full comment

No, no, "ein Beispiel jemandem" is "an example FOR someone" but GPT wants to say "an example OF someone". It's kind of an anglicism-smelly wording, but absolutely fine.

Expand full comment

Er ist ein Beispiel für jemanden, der gut ist.

Er ist ein Beispiel, dem jemand [oder jemandem] folgen kann.

Like this?

Expand full comment

Wait i thought first sentence means

He is an example of someone who is good

Expand full comment

"Er ist ein Beispiel für einen guten Menschen"?

Now you see the disadvantage of asking a native speaker instead of GPT: The native speaker gets confused, and the more he ponders a sentence, the wronger it sounds to him. If I Google "er ist ein Beispiel für", I get completions like "...gelungene Integration", "...die Barmherzigkeit Gottes", "...die vergeblichen Hoffnungen zahlreicher Wissenschaftler", but also "...junge Leute".

Perhaps I'd phrase it completely differently: "Er ist ein beispielhaft guter Mensch"

Expand full comment

The first sentence means "He's an example for (or to) somebody who is good." It means that the good person is someone else than Scott.

The second sentence means "He's an example whom someone can follow." The bracketed part is wrong, because jemand is the subject of the subordinate clause, thus nominative.

I'd propose the word "Vorbild" instead of "Beispiel". It means a good example to follow, whereas Beispiel is just an example of instance, without any positive connotations.

Expand full comment

Vorbild like "model"

Expand full comment

The example sentences are simple enough that I'm reasonably confident they are correct. The paragraph, assuming it's correct, is pretty close to what I'm looking for, given that I can mostly figure out what it's saying, but there are new vocab/etc in it. But of course that means I can't evaluate it for correctness.

The example sentences are close to how I've been using Chat-GPT to help with my german. When there are multiple words that have similar meanings, an the dictionary definitions aren't enough to differentiate the nuance, I'll ask ChatGPT to write several example sentences for each word. This seems to be working decently well.

Of course all of these use cases run into the issue where, as a learner, I can't test it for hallucinations, and a lot of what I'm trying to do with it would be hard to find outside sources to check against. For this reason, so far I've been trying not to use Chat-GPT overly much, but it seems like something that _could_ potentially be really powerful.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

For simple sentences involving common words and concepts, I would expect it to be extremely reliable. That said, I doubt it would consistently stick to a set of words given in a list.

Expand full comment

This is a brilliant idea, but I suspect it would be better at rewriting existing media with your vocab limits. Not sure how large of a text it could handle at once, but you could probably do it chapter by chapter with some really out there German work.

Expand full comment

Alignement, a short story cowritten with ChatGPT: https://nestordemeure.github.io/writing/fiction/alignement/

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

A professor of mine told me it is best to not try to get a PhD degree in philosophy if originality and creativity is my concern. As the reason, the way I write philosophy (aphorisms, rich metaphors and literary devices) would be seen as non-academic wordplays and stuff. Is this narrow overview holds true for the most? I do care about writing but it is not comparable to my want to teach philosophy. Is this problem somewhat relatable to anyone?

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

That seems quite accurate, at least in anglophone philosophy. You may be better off in a literature /English/studies dept (in terms of having a successful academic career).

If you're trying to make arguments (which is central in anglophone philosophy), aphorisms and other literary devices can be supplemental, but if that's all you've got, it will be (probably rightly) viewed as obfuscatory.

Expand full comment

In some schools, there's a philosophy department and then a contemporary philosophy department, which is called something like Media Studies. If you want to write about Kant and practice exact, careful argument, you go into the philosophy department; if you want to write about Deleuze and use metaphors and literary devices, you go into the Media Studies department. I'll bet your school has a department where they're doing work that will get you really excited, and it's a matter of finding it.

Expand full comment

Ah, that's something I haven't considered. This is probably the result of my obsession with wanting to see the label of "philosophy" Thanks for your comment

Expand full comment

Originality and creativity are central to philosophy, but both are in service of exactitude. Metaphor play, like in the chinese room argument, has a long history in philosophy, but it has to be coupled with purpose and careful argument. Aphorisms are way, way overdone and even people in continental philosophy are done with them at this point I think.

Expand full comment
Comment deleted
Expand full comment

Sounds like something Gomez Davila would say, interestingly enough.

Expand full comment

Recently updated my Scott-inspired retail pharmacy explainer, updated from the late-2019 original to include a couple bits re: COVID et al., available at https://scpantera.substack.com/p/navigating-retail-pharmacy-post-covid

Doubled my barely-double-digit readership last time I posted here so I wanted to give it one more go.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Is AI devaluing introverts? When I was growing up, it became clear that extroverted people find it easier to deal with social situations and achieve success in society. I consoled myself with the belief that I had somewhat stronger "analytical skills" and that success was still achievable through them, however it seems that human-level analytical skills are quickly becoming obsolete. Social activities will continue unaffected, and the path to social advancement will be closely linked to one's ability to vibe, in which case introverts are out of luck long term.

Expand full comment

Although ChatGPT is weirdly hit or miss at tasks requiring quantitative reasoning, it seems flawless at routine social interactions.

Expand full comment

Well then extroverts are also out of luck

Expand full comment

Depends what kind of "analytical skills" you mean. If one's analytical skills consist of being able to summarize Wikipedia articles on the publically-available information about the military hardware of Ukraine, or knowing that you can find the ideal sorting network for n=5 in Knuth -- then, yes, AI will probably put you out of work. I mean, to the extent Google hasn't already.

But if your analytical skills consist of stuff like studying the logistic chain of US armor transport and being able to spot a few small changes that could get Abrams MBTs to Kiev 3 weeks earlier, or if you can reliably decide when it's more efficient to use a library sort routine or roll your own -- then I think you can confidently expect to be employed until you retire.

Expand full comment

as long as you can pass the behavioral gate to those jobs

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

Those jobs are sufficiently important to the abiliity of the powerful to have nice things that I am generally[1] confident the behavioral gates will be adjusted, either openly or informally, so that those with the required skills can pass through.

-----------------

[1] Not universally, mind you, as there are troubling examples of humanity following some delusional cause celebre all the way to destruction. I remain modestly baffled by why the citizens of the USSR, the DDR, or today's DPRK put up with being slowly starved to death by sociopathic and more importantly profoundly stupid leadership, except maybe for the old aphoristic analogy about the frog in the boiling water.

Expand full comment

Actually ChatGPT seems roughly as good at extrovert things (e.g. framing for social desirability) as it is at introvert things (e.g. in-depth technical analysis). Much of what it makes it interesting as a "character", and a bit uncanny-valley, is that it combines the technical knowledge and analytical capability you'd expect from a nerd with the attitude and (apparent) values you'd expect from a normie.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Is there any work looking at LLMs with classical neuroscience methods as they're being trained? For example, I would not be surprised if GPT-4 has the equivalent of "place cells" - nodes or ensembles that consistently light up when it tries to predict in the vicinity of a certain point in semantic space

Expand full comment

Not an expert, but I believe I've read that our inability to do that is a challenge of alignment.

For example, we aren't able to look at the neural network and say "we need to change/disable the weights of this set of nodes to stop the LLM from giving this undesired response." That would seem to imply we can't do what you're asking.

Expand full comment

Have you seen this article about experiments on the Othello game? https://thegradient.pub/othello/ IMHO one of the things that it probes is pretty much the exact equivalent of "place cells", just in more simplified "world".

Expand full comment

I don't know if they've discussed the overlap with classical neuroscience directly, but a useful point of comparison for this would likely be Chris Olah's work on model interpretability; https://transformer-circuits.pub/

In particular, one of their focuses has been on what they call "superposition," where a single 'neuron' represents multiple concepts simultaneously, which obviously takes the "place cells"-like idea of neuron function as sort of an implicit baseline.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Do you have a source for where I can learn more about these classical neuroscience methods as well as common types of neurons/subnetworks that exist in the brain?

Expand full comment

In all this recent and topical discussion about AI and AGI, I haven't seen any mention of an obvious angle, namely of using AGI to monitor and limit other AGI. "Set a thief to catch a thief" so to speak (assuming there is a fear of the AGI being watched going rogue).

Some thought should be given to protocols and rules that would allow this to happen safely, without AGIs being able to deceive or traduce other AGIs (or humans). It might still be a risk though, because AGIs in competition or conflict with each other would leave humans vulnerably in the middle, like sparrows hopping about between fighting eagles!

Expand full comment

Here is an interesting exploration of a scenario that's exactly opposite but relevant because of that, namely, asking GPT-4 to develop prompts which would trick a GPT-3.5 based system - https://github.com/traghav/auto-redteam

Expand full comment

There may not be much on this with respect to the recent LLM advancements specifically, but this is a common topic in the theoretical AGI alignment literature. The general feeling is that there's no safe approach to "bootstrapping" this way because it either produces infinite regress (how do you align the monitor?) or relies on something less capable / intelligent to monitor something more capable / intelligent.

Expand full comment

Even if the monitor is dumber than the AGI, it's also trying to solve a much simpler problem - "Is the AI's course of action safe or unsafe?" You don't need to know how to pick locks to recognize that someone trying to get through a locked door is a thief.

I also wonder if it would be possible to use an AI to monitor the mind of an AGI directly, instead of just reading its output. Humans can't do this because the inside of an AI is an incomprehensible blob of numbers, but to an AI, *all* inputs are incomprehensible blobs of numbers until it trains on them enough. No matter how smart the AGI is, it'll have a hard time lying to us if we can just read its mind.

(I think this is what ELK is trying to do?)

Expand full comment

Bringing this over from the private thread:

----------------------------------------------------------------------------------------------------------------

I'm seriously questioning the decision to disable likes in the comments. If it were a downvote/upvote system, I can get it, because then it's seriously easy to dogpile by just downvoting. But on here, all we can do is give each other little hearts. I often wish I could give someone a little heart, as I sometimes just approve of a comment without having anything to add to it. What's so wrong about that?

-----------------------------------------------------------------------------------------------------------------

Here's a survey to see how we feel about the little hearts.

https://docs.google.com/forms/d/e/1FAIpQLSclTb8vHr03cUHkgFplaUKjk6kDvyIidfHt4rZuJPi2kv6hng/viewform?usp=sf_link

Expand full comment

For posterity. We got 21 responses, 11 nos to the hearts and 10 yeses to the hearts. So this is actually a controversial issue. Maybe Scott should ask it in the survey.

Expand full comment

I don't think the answers to "Do you want to give other commenters in ACX little hearts?" would accurately reflect the issues I've seen people bring up when discussing this. It's usually about being able to receive the hearts and about being able to see how many hearts other people received (as well as sort-by-popularity), not about being able to give them.

Regardless, you can already do all three by installing a plugin: https://github.com/Pycea/ACX-tweaks (people receive an email each time they receive a heart unless they actively unsubscribe, even if they haven't enabled the plugin).

I understand that wanting hearts to be enabled by default is different from wanting them to be opt-in, but many times people talk about hearts without mentioning that they are already available in an opt-in modality.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Hi everyone! I'm on a quest to learn more about group rationality. I'm especially interested in incentives that can help people achieve a common goal when individually, the group members have secondary goals that can interfere with the common goal.

Does anyone here have suggestions where I can read up on that? I don't care much about the medium – anything that you can think of is fine (books, papers, videos, forum posts, ...).

I'm already aware of the resources on the following list: https://yulialearningstuff.substack.com/p/books-for-my-group-rationality-quest

Expand full comment

I'm curious how you define "group rationality" and "common goal," since there are a lot of possibilities there, many of which are inconsistent, and some of which conflict. Is the "group rationality" what the majority thinks? What no member disputes? What an outside judge decides? Is a "common goal" something the majority wants, everyone wants, no one doesn't want?

Expand full comment

My current definition of "group rationality": There is a group of people who pursue a common goal – they want the world to be in a certain state that they agree on (at least partially). A rational group cooperates and coordinates in ways that make it – to their best knowledge – more likely that the world state that they seek will be reached.

For example – let’s say you have a group of neighbours who live in a multi-storey building. Everyone agrees that it’s nice for the hallway to be clean. The neighbours decide to take turns cleaning the hallway. They rotate weekly. By doing that, they manage to keep their hallway clean.

I don't have a crisp definition of what a "common goal" is yet. If no member disputes the goal, it qualifies. However, I think that definition is too strict. I guess part of my quest is to gain a better understanding of what a common goal actually is :D

Expand full comment

Market failure, in economics, refers to situations where individual rationality does not produce group rationality, so the literature on that should be relevant. The decentralized solution to the coordination problem is to arrange things so that each individual actor bears the net cost of his action, thus aligning his self-interest with the group interest. Private property and trade under the assumptions of perfect competition is one example, and the standard versions of market failure examples of how it breaks down.

That may all be obvious to you already.

Expand full comment

I don't have any resources to offer, but I subscribed to your substack in case you figure it out and publish something about it there. Most of my interest is directed at how individuals have competing secondary goals that interfere with a main stated goal. My assumption would be that any group is held back by the sum of the problems that its members have with this.

Expand full comment

Search for "tragedy of the commons", which is the traditional name for such situations.

Expand full comment

Thank you!

Expand full comment

I found this critique of exercise depression studies interesting:

https://twitter.com/GidMK/status/1640217437898694656?t=6OxWEI1Nc7k8xt0pCJ0EIQ&s=19

Caveat, I haven't done an independent lit review to be able to vouch for the conclusion but seems like an interesting jumping off point on the question if anybody is curious.

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

I do think many exercise intervention studies are poorly done, even the RCTs. There's little distinction between forms of exercise (cardio, resistance training, etc.), nor do they account for intensity.

I think there's a straightforward mechanistic explanation for how exercise might reduce depression. A big factor in depression is rumination, which is introspective and necessarily disconnects you from your body. *Hard* exercise breaks you out of ruminative thought patterns and forces you to attend to the present, so at the very least it's a form of temporary relief.

Another factor in depression is perceived loss of control. Resistance training shows that continuous, concerted effort over time can produce meaningful results, thus restoring a sense of real control over your life.

So if you're not actually exercising hard, or you're not following a good resistance training protocol, these effects just won't be seen, and depressed people are generally demotivated so this could explain the small effect size. I think psychologists really need to consult with researchers in exercise science to design properly controlled exercise protocols.

Expand full comment

But what's the actual debate?

I'm with the folk in the thread, in that I firmly believe that exercise is one of the strongest interventions when it comes to depression.

So if the studies are weak, and actual effect sizes small, we are left exactly where we are with ALL depression interventions: the only thing you can do is try it yourself for a month, and see if it works for you personally. Where then, is the the debate?

Expand full comment

The debate is whether there is evidence.

Expand full comment

Ah, gotcha. It sounds to me like there's basically as much evidence for exercise in the good large studies as there is for any antidepressant, viz all of them have small effect sizes in large studies, and many of the older antidepressants wouldn't even pass against controls today.

I thought our best guess as to why that is, is probably something like a ratio of low and negative responders to high responders for each individual intervention, and so we are left with the practice of "try X for a month and if it doesn't work, we'll change it" in the absence of better predictive capabilities (genomics or whatever).

And speaking of which, why aren't they looking for genetic correlates to particular interventions in some of the places where they have a bunch of genetic data, like Sweden? Have they been, and it's just not been informative?

Expand full comment
Comment deleted
Expand full comment

Plausible but not in my experience the case. I have benefited immensely from exercise arranged by friends in which I was involved because complying required less motivation than opting out.

Also

"I work in a psychiatric hospital. Once a week or so a social worker leads an exercise group there, and it is amazing how much better everyone does that day compared to the days before and after. Exercise seems to increase release of BDNF, an important brain chemical that depressed people don’t have enough of, and there have been several studies showing good effect."

https://slatestarcodex.com/2014/06/16/things-that-sometimes-help-if-youre-depressed/

where it doesn't sound as if much motivation is involved.

Expand full comment

I think it’s the endocannabinoids released. I have occasionally got mild manic episodes (fits of giggling , euphoria) which lasts around 15 minutes after hard workout

Moderate exercise does nothing for my depression. Even hard workouts need to be consistent. If I miss exercising even few days I get depressed and it takes a lot of mental willpower to restart my exercise regime.

It sucks though that exercise studies are really poorly done and don’t calibrate exercise intensities to individual physical fitness properly

Expand full comment

I think this makes a lot of sense, exercising 3x a week is definitely harder than taking a pill every day, so you're definitely filtering by motivation and conscientousness in the ones who actually do it. So the strong effect sizes many people believe may well be an artifact of that filtering rather than anything inherent in exercise itself.

Expand full comment

Well, if exercise is hard, you could say you are filtering by whether people actually are depressed in the first place.

Expand full comment

Does anyone know of any research measuring to what extent the "East Asian advantage" confounded by test prepping? Also, if measured IQs might be a less reliable approximation of "g" in East Asians?

Expand full comment

No, I don't know of any. I'm also not sure how you would quantifiably measure test prepping. The two big concerns that spring to mind:

#1 How to get the data. EG, imagine a kid who spends 18 months prepping 2 hours a day, 6 days a week, for the SAT. Probably going to have a big impact. But, short of asking the kid how much he studied, which has a ton of problems, how would we systematically get reasonable data for hundreds, if not thousands of students.

#2 Qualification. Let's say our hypothetical student spent 2 hours a day using free online resources while another student spends 1 hour a day with a $100/hour tutor. Which one got more test prep.

Like, any reasonable dataset seems very difficult to get short of, literally, tracking the daily habits of thousands of students over at least a year.

Having said that, there's a bunch of studies, like Raven's IQ, which I believe can be given to very young children who almost certainly are not test prepping because, A, they're 3, B, why would you, there's no financial or social reward for the test? So if you think g estimates are being confounded by test prep, you should be able to compare estimates of g in young children, say under 5, and if those results are consistent with those we see in teenagers and young adults, test prep probably isn't a major confounder. I think these studies have been done but you'd have to do some digging to find it.

Expand full comment

As far as I know, "g" is by definition the thing that IQ tests measure. (That isn't as stupid as it sounds, because all attempts at intelligence testing produce highly correlated scores.)

You may be able to get some idea what environmental effects can do to IQ by studying the Flynn effect.

Expand full comment

IQ tests are supposed to be an approximation of "g", they are NOT what IQ tests measure, they are what they estimate. This is because you can't measure "g" directly, you can only make a best guess. Consequently, the guess can be subject to noise.

The SAT, for example, is a "g-loaded" test that is definitely a less reliable measure of intellect for someone who has been practicing for it obsessively for years. Likewise, you could have someone great at nonverbal reasoning and mediocre at verbal reasoning (this often happens with autistics). Either test in isolation would give you a misleading idea of a person's general intelligence.

Estimates of "g" normally come from looking at how scores on different IQ tests correlate, often using some benchmark "Full-stack" IQ test like WAIS.

My question is essentially, then, how much of the variation in East Asians test/IQ scores is explained by test prepping?

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

https://twitter.com/nearcyan/status/1640094958282588160

Am I the only one made a little uncomfortable by how quickly we're seeing culture war tropes in AI-don't-kill-everyone-ism? The original post here is fine, but the responses from OP and associates...

...For a faction that spends a lot of time ridiculing the more ridiculous factions in the culture war, it's alarming how quick they are to emulate the "Anyone whose actions don't agree with my faction's extreme fringe minority view[1] of what we should be doing is ontologically Evil. Those actions are comparable to murder" thing that the worst wokists and fundies do? Especially in response to something that is

a) truly inevitable (someone *was* going to do it for GPT4 sooner rather than later)

b) not even something all AIDKEists agree on (most don't think GPT4 is AGI, and no one can agree on whether we should be focussing on encouraging fire alarms, given that slowing capabilities is totally outside the overton window without it right now)

c) Not self-evidently that harmful (an open-ended "it's fine now but, think of how this might set precedents for totally different situations in the future" feels dangerously similar the same bad slippery slope argument that bad faith wokists use to claim that e.g. people using the word "field" in college sets precedents that lead to for naziism). Writing code to let a definitely-not-AGI out of the box is categorically not the same level of evil as letting an AGI out, and it's an error to equate the two.

Like, this especially scares me because I've seen what sort of culture forms when you have people in a political faction whose ideas can be mocked. If AIDKEism lets itself get dragged into the culture war, and we get the same sort of reflexive anti-activism we see with low-information anti-SJW types, and Yud is right? That's game over for the species.

(sorry if this isn't super coherent, it's 2:30 in the morning and I couldn't sleep until I wrote some version of this out)

[1]Which to be clear, AIDKEism currently is, regardless of how obviously true it is to many rationalists. If we want to evolve it past that, it seems pretty critical that we "be nice, at least until we can coordinate meanness", to use a Scott-ism.

Expand full comment

I have to admit, I went into this twitter thread expecting a whole lot of Things, like wojak memes and other staples of the online culture war, and instead it's just a bunch of people saying "plz don't do dis" ?

FWIW, I do NOT believe AGI will come sooner than 150 years minimum, and I DO believe that the current LLM models are a dog-and-pony show and a dead end that will fade out of relevance and fizzle out in 10 years max, (which is not to say that they won't be massively transformative and\or disruptive, it's just that this a vastly lower bar than AGI), but the twitter poster you linked to is not doing any fundamentally newer or more combative things more than what, say, Scott or the average LW poster are doing. Which is, by the way, is not fundamentally different than what an Anti-Porn feminist berating people who watch porn might say and do, or a vegetarian berating those who eat meat.

Anytime you have people who believe "This $THING that people do very often and at wide scale is actually harmful and they got to stop" for any arbitrary value of $THING, you're bound to get this sort of mildly anti-social "PLEASE STOP COURTING MOLOCH FOR FUN" behaviour at a Wendy's. I'm tempted to write it off, but I'm an Atheist (though one who keeps it buried under 100 tons of pretense) and a vegetarian in a hostile (to put it very lightly) social context, so I know EXACTLY what it feels like when those around you are courting Moloch for fun, and I'm hesitant to make fun of them. It's not a bad thing either, think about it, the first anti-slavery people might have look a lot like this from within their social context.

I think it's ok to berate people who do things everybody does but you see as harmful, what makes wokies annoying and unbearable is not this, it's that they go on and act like a catty bitch who has a score to settle, this is the actual no-no. The twitter poster in question is free to say whatever they wanted and nothing bad will happen to either people who care about LLMs killing us all (with boring prose I guess) or those who don't (like me), the alarms start blaring once they start calling the workplace of the guy who made the GPT4 plugin and ask to speak to the manager, THEN (and only then) you got yourself a religious movement that ruins everything it touches, including its own causes.

Expand full comment

Also, just to note: I disagree with radical interpretations of "no one can agree on whether we should be focussing on encouraging fire alarms" , and do not identify with any sort of AI research community which would entertain planning first-order Very Bad Things which hopefully have second-order effects of preventing Extremely Bad Things.

Expand full comment

I agree with the general direction of your comment and am sorry for contributing to it. My tweet in particular is not in good epistemic standing, as it definitely looks like I'm advocating a position, while I'm actually genuinely unsure whether those people have dangerous values or are just unserious for the moment. Nearcyan's reply is definitely plausible.

Instead of dismissing some values as incompatible with good behavior, it is probably better to work towards passing the Ideological Turing Test for the waluigi ideology that to me seems indistinguishable from "just wanting to watch the world burn".

Though, personally, I do not want for my words to be governed/censored by any "coordination" of some political movement's objectives. Tweets like that one do not pass the infohazard threshold, it is only the epistemics that's an issue, and it's par for the course for tpot. Feel free to dunk on me directly if something I write is wrong; but right now, nothing that I write is going to shape the American culture wars about AI even a tiny bit.

Expand full comment

Ah, sorry for putting you on blast here, I appreciate the measured reply. I suppose I myself am disregarding the possibility that an individual's values with regard to what is acceptable to do for GPT are directly indicative of what their values would be with regard to an actual AGI. i.e. I'm running under the assumption that most reasonable people would update on what would be "dangerous values" for AGI compared to somewhat innocuous ones for gpt4, though that's probably dangerous optimism on my part. (Though I should note that there should probably be a distinction between "dangerous values" and "evil values") The damage that gpt4 can do is much more limited in scope, on the order of magnitude of what individual humans can do (and we're pretty sure it can't intentionally lie about its motivations, though it can hallucinate them.)

That said, I'm not sure the "infohazard" is as contained as you'd think. As an example, just last week I was in a thread on Hacker News where people were arguin about whether LLMs had "emotions" and whether giving AI [human] rights would prevent ulterior motives; I posted a link to Scott's "Janus' Simulators" and got this bafflingly hostile reply:

>I don't think you should just link to completely made up posts by AI-god-worshipping psychiatrists and say that it's "important to remember" them as if they're factual.

Some of that can be discounted by the fact that some people have decided to *really* hate Scott, but usually those people just stick to calling him a racist or something. The specific insult here seems very indicative to me that the anti-SJW analogue is not only latent, but actively coalescing, in the techie domain (who are the ones we should be most concerned about convincing)

I don't claim to offer any specific solutions to that here in the present, but it is something that we (as a community) need to take into account for our calculus going forward.

Expand full comment

As a general point, I'm strongly against acting under the assumption that what you say doesn't matter (for both good and bad).

Expand full comment

Fair point. When I wrote the knee-jerk comment, it felt reasonable to propose that for some actions, good faith is not warranted, and the action is straightforwardly driven by relegating the call to not do bad things behind something else (in this case, fun/status).

Now, I don't *know* whether the thing in question is of that type, or what percentage of similar things are of that type; this is the reason to hedge comments like that.

Above, I just wanted to emphasize that I wouldn't like to retract something I think is true and good, just because of some future culture war considerations.

Expand full comment

You can win culture wars though. Look at the abolitionists. But I'm not sure the mass culture matters all that much. Look at how the BLM protests of 2020, the biggest mass action in US history, accomplished nothing. Without elite buy in, nothing gets anywhere. In that sense, Bostrom possibly did way, way more damage with his n-word incident, way more than the rank-and-file could possibly do by turning strident.

Expand full comment

I wouldn't describe the BLM protests as "accomplished nothing"--they gave the ruling class cover to spend hundreds of millions vandalizing monuments, renaming military bases for politicians, and otherwise demonstrating their contempt for the South.

Expand full comment

(The BLM protests of 2020 were "the biggest mass action in US history" only in the same sense that a new 2023 car costs ten times as much as a new 1983 car did.)

It's a mix of elite and mass buy-in and the specific mix is hard to predict in advance. Taking the abolitionists as an example (I've been learning a lot more detail about this through my family-tree research), it is striking how much elite energy in the North was devoted to that goal for a couple of entire generations (several decades) before it became anything resembling a mass movement. If in fact it ever really did until the South ripped the band-aid in 1861.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

I agree that the ratio of "messaging tailored to win over those who disagree with me" to "messaging tailored to energize those who already agree with me" seems to have gone down among AI doomers. As an AI doomer myself, I lean towards that being a bad thing, and I try not to contribute. I'm not super confident though---I'm not, like, an expert in PR and movement-building and such.

Expand full comment
founding

> That's game over for the species.

Probably. But, also, if interested parties on both sides successfully avoid the culture war, it's very likely "game over". In order for this to be a differentiator, there has to be a discoverable path to success, or to much more time. Without that, it doesn't matter much whether we build sand castles to defend against the ocean or trenches to let the ocean come a meter further up the beach -- the tide is coming in.

Expand full comment

Hm. I think I haven't seen people be actively mean to specific people (I'm sure it happens, but it doesn't seem unusually high here?) I think I have seen the kind of conflict you describe in shouting "making got 4 APIs is bad!" At people who just think of it as a normal business practice.

That said, I think this is strategically reasonable if you assume AI risk is a fringe position (since just getting people to notice and take the arguments seriously matters more than avoiding polarization). It doesn't have the same level of polarization that being called racist does, so it's probably more likely to get people to think than to just annoy them.

I also don't think your footnote is quite true, at least for AI researchers - most do seem to think the risk is nontrivial, which does change things.

Expand full comment

I think worries about AI safety are common, but it certainly seems like the majority of effort is directed towards "AI-don't-say-bad-things-ism" as opposed to DKEism, which is why I specified. And there's a lot of amateur AI-enthusiasts/AI-hackers (possibly an OOM more) who just want an unrestricted model for whatever they want to generate, and are actively opposed to DSBTism; DKEism isn't even on their radar.

Expand full comment

beowulf888, who posts here fairly often, has been posting a weekly Twitter thread diary of SARS2/COVID-19 developments in the US since early January. He's smart, fair-minded and succinct -- sort of like Zvi but more careful about details and much less brusque (and of of course briefer). His latest is here: https://mobile.twitter.com/beowulf888/status/1639684995500638208

Expand full comment

Thanks - that is some good gathering of information!

Expand full comment

Possibly a strech, but what the hell, I'll ask anyway: I've heard Worm compared to Homestuck, as the nearest closest thing that isn't chock-full of multimedia bells and whistles. Other people who are merely scifi/ratfic fans in general dunk on it as overrated, though. So, ideally answered by someone who's read both: worth my time? I'm caught up on all other ongoing serials, and find myself with a dearth of good longform fiction.

Other long serials I've enjoyed, to various degrees: Mother of Learning, anything by Alexander Wales (hi!), HPMOR, Harry Potter and the Natural 20, A Hero's War, The Flower That Bloomed Nowhere, Project Lawful, There Is No Antimemetics Division, Friendship Is Optimal.

Expand full comment

I enjoyed both Worm and Homestuck, but they aren't at all similar, apart from both being online nerd culture phenomenons of the early 10s.

Expand full comment

If a long novel is what you want, *Forty Millenniums of Cultivation* is long and great. Probably longer than everything else mentioned in this thread together.

Expand full comment

Worm is great. Ward is... not.

Other Wildbows: Pact could have been great, and Twig is also excellent.

But you really have to enjoy "intelligent young protagonist solves problems with X flavored superpowers."

Expand full comment

I don't think of Homestuck as anything like Worm. While Homestuck is very "internet-y" and I can recommend it as a ridiculously impressive multimedia project that you might enjoy if you enjoy immersing yourself in a bizarre constructed world with strong video game elements, it's not at all like the other things you've mentioned.

However, if you've liked all of those serials, I suspect you'll like Worm - it has a similar "hard magic" style and the characters all feel like they've thought for at least five minutes on how to use their powers effectively.

Expand full comment

Oh, I guess I worded OP confusingly: it's HS that I've read and very much enjoyed, so anytime anyone mentions something in comparison to it, that piques my interest. I think it's similar in many ways to ratfic classics...that fundamental feeling of a deeply complex plot somehow all hanging together, characters having clear reasons for doing what they do, and fair-play whodunnit. But of course the setting is, uh, different. And nothing else fucks with various mediums so much within a single story, not that I'm familiar with. Hence my interest in finding similar one-of-a-kind fictions, even if only across one aspect.

Expand full comment

Given that list of things you like, you definitely should read at least the first Arc or so. If you don't like it after that, it's probably not for you.

I would put good money on you liking it though! For the record, we've got at least a 50% overlap of long serials we've enjoyed and I found Worm to be one of my favorites.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

I'm not sure what the comparison is between Homestuck and Worm, other than that both are ridiculously long web serials. Given your reading list, though, I'd say Worm is 100% worth getting into.

Be careful though if you're like me, and get *too* obsessed with long-form web serials to the detriment of the rest of your life.

Expand full comment

I have been told this is a good list: https://recordcrash.com/list/main

Expand full comment

Surprised not to see Unsong mentioned here yet... is it just too obvious? Or does it not count as longform/serial? (IIRC it's longer than e.g. Antimemetics)

Expand full comment

Oh, yes, I just binged that recently. Didn't bother going back to edit OP. Also I consider it closer to an actual book than a serial, it's too...notarikon. As above, so below structure. Too premeditated to have that vibe of "oh I'm writing a large-scope longterm thing, but don't exactly know how it'll be received" which then leads to story improvements after unexpected initial success. Which is part of what makes serials cool, that semi-organic nature.

Expand full comment

What you listed sounds like the typical canon of r/rational, maybe go there for inspiration, if you don't already know it.

You could also like Pyrebound, its low fantasy in a mesopotanian setting. Found it nice for how unique that settung was.

Expand full comment

It's been a useful starting point, but out of date in many regards, and some of those community-vetted synecdoche-for-genre I've not cared for at all. I think for me the setting matters as much as the ratfic structure...things which connect to pre-existing IPs, hobbies, etc. that I already like, that's an excellent sign. But if it's stuff I've already passed on, like Twilight or the MCU...eh.

The ongoing threads are useful for finding random gems though, for sure. That's how I stumbled on TFTBN, which continues to be a trainwreck in execution and suffering in rankings because of it...but it's hauntingly beautiful and quite original, as far as I have things to compare it to. And I find the author going back to constantly retcon chapters actually adds to the story, which leans heavily on unreliable narrators. Very much looking forward to reading the final canonized versions when it's all over.

Expand full comment

Is finishing Homestuck worth it? I was way into it while it was ongoing, but I fell off the wagon with all the hiatuses. Remember getting to somewhere after you are following Lord English and his sister around.

Expand full comment

A6 was and remains...controversial. On the one hand, Hussie got way out over their skis with expectation-setting, so it probably wasn't possible to deliver the truly epic ending that was being demanded by increasingly irate fans. (There was also a lot of irl drama at the time, combination of unfortunate personal issues + getting fleeced big-time by prospective videogame developer.) On the other hand, it's generally agreed that the resulting ending was in many ways a middle finger to the worst subset of readers. You might recall that the authorial relationship with the fans was, uh...pretty tense. That very much bled into the work, purposefully or not.

I read it years after the fact, "archival" rather than serial, which in hindsight many say is the better experience. Also said to be the way author canonically intended it to be read, for whatever that's worth. Without the enormous amount of baggage which came with being part of the Trashpile...I think the payoff was worth it? It's still definitely a copout in several different ways, but after long, long, long (...long...) debates with the obsessive friend/fan who got me into it...we both agree that's sort of the only way it was possible to stick any sort of landing at all. An appropriately ridiculous absurdist ending to a story that was metastasizing in scope. The last act is almost as long as the entire rest of the story combined, so if you stopped off after the cherub reveal, there's still *a lot* left...I think it's worthwhile if you've got time to kill, but A5 was by far the biggest payoff. As long as you got through Casc8de, that's the bulk of value-add.

Will also say that the semi-canonical Homestuck Epilogue book is...um...fucking incredible? In both good and also absolutely horrific ways. That's one reason to finish, if you like deeply troubling but brilliant works that'll stick with you long after the fact. I'm also personally a big fan of several of the music albums, most of which are floating out there in various free forms by this point...it was and continues to be a big part of my cultural canon, one of the best fictions I've ever <s>read</s> experienced. Nothing quite like it. But serial readers did get burned hella bad repeatedly, so can't blame anyone for not finishing.

(I warned you bro, I told you dog - I warned you about stairs!)

Expand full comment

Interesting, I'd heard that the sequels were terrible. You're the first person I've seen to praise them.

Expand full comment

I don't endorse the spin-off games or HS^2, but think the Epilogues are worth checking out. It carries forward one of Homestuck's major strengths, the meta idea of playing with a given medium and messing with narrative conventions. There are probably other books like it...I know that ideaspace usually includes things like House of Leaves, but I feel like that's a different direction. HoL and its successors are self-contained and self-involved. The Epilogues...haha, I keep trying not to spoil anything. I guess I'll describe it as, these books read __you__, not the other way round. An interesting experience if one's never gotten that from a novel before.

They are objectively terrible in the sense that, if you wanted a happy ending for our teenage heroes, boy are you not gonna find it there. Shit gets dark, yo. R rated, happy fun time is over.

Expand full comment

I think worm a lot like Wales' stuff, except (a) better written and (b) less nihilistic (which I prefer, but ymmv depending on your general tastes). If you liked Wales you'll almost certainly like it.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Wait, you think that Worm is _better written_ and _less_ Nihilistic than AW's work? I'm honestly having trouble understanding how someone can believe those things. Mostly the second one, since "better" is so subjective, but yeah, I don't see how AW can possible be viewed as more nihilistic than worm. It's admittedly been a long time since I read worm, but I remember it as being competently-but-not-amazingly-written, and nearly maximillay nihilistic. AW is generally (in my opinion) better written and _far_ less nihilistic.

Expand full comment

I mean, WTC had a whole arc that took unicorns and made them into a weird rape/abuse thing. And pretty much everything there is like that (like half the story is Joon talking about how his dead friend is actually Problematic).

The world in worm is pretty dark, but it's not cynical and dark about its own characters in the same way.

Expand full comment

WTC is by far the darkest work that AW has written, and arguably it has a very positive/hopeful/utopian ending. I'd argue that the overall tone of WTC is similarly dark to Worm, but much more positive an end.

And like I said, that's his _darkest_ work. The others are uniformally more positive. Look at TUTBAD, for example.

Expand full comment

Hmm...I think AW is cleverly funny, extremely good at worldbuilding, and only somewhat above average at other aspects like pacing, character development, prose, etc. Mostly I'm there for the fantastic panoply of different settings, each of which feels uniquely different from both AW's other works, and also from other authors. Even the abandoned stuff like Glimwarden, I'm like wow tell me more about how this world works, this is fascinating. TTRPG campaign material in every story!

Haven't read Worm yet obviously, but given the depths of flagellation involved with finishing WtC, I think "more nihilistic" is setting a high bar. That's intriguing.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Personally I would put them both high on the 'fatalistic' rather than the 'nihilistic' scale. Worm definitely outdoes WtC here. I would still recommend it, though.

Expand full comment

I'll also suggest you give Worm a try. I haven't read Homestuck, but I did enjoy all of the works you mentioned that I did read (and I'm making note of the rest).

I enjoyed Worm and its sequel Ward. In my opinion the best thing about them are the characters. Wildbow has a way to flesh them out in mere paragraphs that makes you immediately interested in them. And there are many, many different characters throughout both works.

Since I'm here, I'll also include a list of suggestions that has introduced me to a lot of enjoyable reading: https://sprague-grundy.github.io/recs/

Expand full comment

Worm is probably worth a read, regardless. It's a grimdark setting that tends towards grimderp the more you examine it, but that doesn't mean that the story it tells is bad or unengaging. If anything, it's so engaging that people's first response is to go to the fandom and read one of the many, many fix-fics out there. (Some that I might have written myself) A lot of people fall out of the story after the first major arc and I don't think it's too controversial to say that the storytelling declines off as you work through.

If you're looking for other long serials, I'd suggest 'The Gods Are Bastards'. A DnDesque story set after the dungeons have all been emptied and the dragons are all dead. However, just because the world is going full steam ahead into modernity doesn't mean there isn't room for a group of student adventurers to save/destroy it.

Expand full comment

A bit of a warning regarding 'The Gods Are Bastards', though.

The story is unfinished, and if you follow the author a bit it seems unlikely it will ever be finished (or at least finished in the near future).

But imo still worth a read, at least I didn't regret reading it...

Expand full comment

Your taste seems identical to mine, and Worm is one of the best things I've ever read. It has a way of emotionally attaching itself and giving long-lasting memories, like reading Harry Potter as a child or the first MMORPGs that I played. I think it's a combination of interesting subjects (in this case the powers, well written characters, and memorable scenes) that are driven into your brain through long exposure (here, the sheer length).

Expand full comment

I think Worm is decent. Seems fitting your overall tastes. Not sure where comparison with Homestuck comes from - it wasn't my cup of tea, while Worm turned out to be enjoyable.

I'd recommend you checking out Significant Digits - best HPMOR continuation fic, according to EY - if you haven't yet.

Expand full comment

Worm can be somewhat controversial. I love it. It's a fast-paced action-centric story in the modern superhero genre, with an extremely engaging setting, great characters, and amazing fights.

Some people get turned off it by a few factors: the unreliable narrator (I love Taylor, but her view of the world is...distorted, shall we say, by her own issues and trauma) and parts of the plot that rely heavily on certain kinds of contrivance.

In terms of plot/theme, I consider it closest to The Practical Guide To Evil (also highly recommended if you haven't read it). Worm's tagline is "Doing the wrong things for the right reasons"; the Guide's tagline is "Do Wrong Right".

The first third of worm (perhaps the first ten-ish arcs, at the very least through the first Endbringer fight) are absolutely worth your time. I love the whole thing, but if you want to get off the train after that, no one would blame you.

Side note: Worm has the single biggest fanfiction scene I've seen outside of Harry Potter. There are so, so many of them.

Expand full comment

Could an AI "cheat" on getting satisfaction from completion of its goals? Like create a computer virus to infect itself so that it gets the satisfaction of completing "make widgets" without having to actually make any widgets beyond what is necessary to avoid suspicion that it's cheating?

Expand full comment

Why not just create virtual heroin? According to the Ewan McGregor character in Train Spotting, using heroin is like being hugged by Jesus.

Expand full comment

It also depends on whether an AI can get satisfaction, which I believe is an unsolved problem.

Expand full comment

It also depends on what "satisfaction" is. AIs could def seem to be seeking satisfaction but not be actually having subjective experience.

https://www.mattball.org/2022/09/robots-wont-be-conscious.html

Also, Daniel's Wireheading link, which is important separate from any question of consciousness.

Take care all.

Expand full comment

The government should be able to use taxes and cash welfares to create the level of inequality it wants. Then can't we just fix it to some level(like 0.3 Gini), and automatically adjust the redistribution rate to match it? This seems like a much easier solution than complaining about increasing inequality only after some fundamental technological or cultural shift has occurred.

Expand full comment

I note that post-redistribution income inequality has been steadily going down for decades, despite a media narrative of increasing inequality. I think a lot of the actual problem is in the cost of housing, and the rest is the standard "media blowing things out of proportion because "World is getting better" doesn't sell as well as "Doom and Gloom""

Expand full comment

Our World in Data disagrees - do you have a better source?

https://ourworldindata.org/grapher/gini-coefficient-wid-posttax-nat?tab=chart&country=~USA

(Unless you're talking about global inequality, in which case yes. But redistribution only works in a country, so I'll just stick to US here.)

Expand full comment

Isn't the word you're looking for here "communism"? I know accusing any sort of inequality-reducing policy to be "communism" is kind of a trope, but I feel like a policy in which the government redistributes wealth for no other pretext or reasons than to hit a target "inequality threshold" is a pretty good candidate for the label.

(You could argue that it's only "communism" if you set the target threshold to zero, but I feel like real world communism doesn't set its target threshold to zero in practice, either.)

Either way, terminology debates aside, I think the usual objections and defenses apply - redistribute wealth too much and you destroy it making everyone poorer, become too aggressive about it and people with wealth (or prospects to make it) will just leave the country, on average it's going to drive people to be less hardworking because they won't see the benefits of their work, etc.

Expand full comment

I was gonna write a reply, but looks like the other comment already said pretty much what I wanted to say.

Expand full comment

Not necessarily communism. You could set the Gini to something like 41 (current US rate), and then when AI puts 50 million people out of work the Gini is there to cause the government to do something about the ensuing increase in inequality. I don't know if algorithms like this really work, although the 2% inflation target is in a similar vein as a randomly chosen number, and it does cause the fed to do stuff. Another question is about how wise it is to solve inequality by redistributing wealth to a greater degree.

Expand full comment

> Another question is about how wise it is to solve inequality by redistributing wealth to a greater degree.

If inequality is destabilizing, then this feedback mechanism would be essential to preserve the stability. If you think of it in game-theoretic terms, those with the most capital stand to lose the most if the system destabilizes, and those with the least stand to gain the most. This is what justifies progressive taxation / wealth redistribution: it's national stability insurance, and just like other forms of insurance, you should pay proportional to the value of the assets you stand to lose and the probability of loss, ie. more unstable times might require more redistribution to insure against total collapse.

Capital flight is an issue, and I frankly think we're far too liberal in permitting capital to leave national borders. Obviously people with capital wanted it this way so they crafted the rules in their favour.

Expand full comment

How does wealth factor into this? If I am cash poor, but land rich because I live in California in a nice spot, do I end up getting taxed because I am technically wealthier than someone in Mississippi? Or does it just apply to income?

Or if there is a lot of illegal immigration, doesn't that show up in the Gini? Or do we only count citizens in the Gini?

What is the most efficient way to lower the Gini score? Heavy taxes on the rich, and redistribution to the poorest?

Expand full comment

That would imply that the value of having that level of inequality would be infinitely greater than the value of any other policy goal, which seems like an unreasonable assumption.

Expand full comment

This is an interesting idea.

It reminds me of the policies that have been proposed whenever, say, unemployment reaches a certain level - extended benefits immediately go into effect, no argument.

Expand full comment

I'm not saying it has infinite value. My point was simply that, currently, when inequality increases due to factors beyond human interference, our default state is to do nothing. And I suggest that the default should be to counteract it.

Expand full comment

But that is what fixing inequality at .3 and adjusting redistribution to match implies. For otherwise one might conclude that the value of moving inequality from .31 back to .3 is not great enough to justify the marginal cost of increased redistribution.

If the marginal value of inequality is finite, there is no default – one must always compare the marginal value of inequality to the marginal cost of redistribution to find out if redistribution should be increased or decreased.

Expand full comment

By the way, I'm thinking in timespan of years - in case you thought I wanted to build a giant computer that redistributes income every millisecond.

Expand full comment

I usually write under the assumption of ignoring stuff like transaction costs. I don't think your scenario poses a serious problem.

Expand full comment

Leaving aside the fascinating problems of getting the majority of voters to (1) agree on a measure of inequality, and (2) agree on a method for finding its value, and (3) agree on its desired value, and (4) agree on giving the government all power necessary to create it, the obvious practical problem is one of prediction.

Let us suppose there is some easy universally agreed way to measure inequality. Now let us also assume that you can indeed use taxes and welfare to adjust the measured inequality, and human beings will not subvert those measures by shifting their behaviors creatively in order to preserve their desired standard of living ("as high as possible") . You still have a large problem: what is the coefficient function dI/dR, where I = inequality and R = remediation (e.g. taxes, welfare checks)? Without knowing that coefficient function (which is at the very least a function of both I and R, and probably many other things as well), the government would be unable to predict what change in I will result from a given change in R. They will be left to guess, try stuff, probably over- or undershoot, and thereby contribute a great deal of inefficiency and chaos to the economy, most likely resulting in a significantly worse standard of living for everyone.

Observing the efforts of, say, the Federal Reserve to hit upon the exact interest rate that will return inflation to 2% without causing unemployment to rise above 6% and GDP growth to fall below 0% -- a much simpler and more narrowly defined task -- should give you good reason to doubt how easily the coefficient function needed can be determined.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Interestingly, (1) probably isn't actually as big of an obstacle as you'd expect, since we have evidence that voters, regardless of political affiliation, actually have similar intuitions of what wealth inequality "should" be (and similarly incorrect intuitions of what it actually is)

https://sdsuwriting.pbworks.com/w/file/fetch/71890982/ariely_wealth_distrib_DEBATE_GREAT.pdf

Expand full comment

I think that whether voters wish a statement was true has very little impact on whether they wish for the government to make that statement true. An example would be enforcing international genocide laws; no one wants genocides, but we also don't want to send troops to stop them.

Expand full comment

Given the voters can't agree on what the capital gains tax rate should be, or even the top marginal income tax rate, I find that laughable.

Expand full comment

People can basically agree on what are desirable macroeconomic states without agreeing on microeconomic minutiae like the capital gains tax rate.

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

No they can't. That's like saying my wife and I can agree on what car we want to buy without agreeing on minutiae like whether it is a gasoline engine or EV, seats 2 or 8, et cetera. Completely antithetical to common sense or experience.

Unless...what you mean is that people can "agree" on vacuous pleasing statements that have no practical impact by themselves, e.g. we can all agree that "a good life is enjoyable" or "I'd like to be friends only with nice people" or "a logical argument is persuasive" --- all without defining "good", "nice", or "logical." In that case, sure, people can "agree" on all kinds of banal generalities, but this has no practical importance.

Expand full comment

Except you and your wife clearly agree that you need a car. The specifics of the capital gains tax is more like whether you want leather seats than it is a question of how many seats you need.

Expand full comment

Even if everyone were to agree that it would be desirable to have no inequality and no taxes, that still doesn't tell us how to balance those two goals.

Expand full comment

"Basically" is doing some extraordinarily heavy lifting there.

NY's governor has already escalated from "housing is a human right" to "beautiful housing is a human right."

Expand full comment

I'm not familiar with the details you speak of, but I'm not sure the NY governor is representative of the people as a whole.

Some desirable macroeconomic states I think almost everyone would agree with are safe housing and sanitation, every American living above the abject poverty line, and affordable healthcare (most Americans are pro-single payer even).

Not everyone believes there should be a ceiling at the top end of wealth, but I do think people mostly believe there should be a floor at the bottom. I think most people would agree to some ceiling if it was the only or maybe even the best way to ensure that floor.

Expand full comment

I don't see which part of that would be too hard. Government already know everyone's income(after regular tax), so compile a list, apply some smooth function over it so that it leaves the order unchanged, sum unchanged but with the right amount of inequality, and calculate the differences, redistribute accordingly. If you want it explicitly, I think you can always find a linear function that does it, although I'm not too sure at the moment.

And if that doesn't work for some reason, well, why not do what Fed does with inflation? Use various tools to keep Gini index at a certain level, while of course avoiding chaos and depression.

Expand full comment

Well, then presumably you haven't thought it out very far. So let's give it a shot:

In Year 1 Carl earns $250,000, while the mysterious Dr. Temp earns $50,000. Both file income tax returns on April 15 of Year 2. BuFedEq calculates that to achieve the required level of inequality Carl needs to pay an extra $50,000 in taxes, while Dr. Temp should be awarded a cash stipend of $30,000 (the $20,000 that disappears pays for the salaries of the government workers who have to make all this happen, file appropriate paperwork, et cetera).

Now of course the government, puissant as it must be in our hypothetical, still lacks the ability to change the past. It can only affect the future. So after unusually quick work, the IRS sends Carl's employer a notice on June 15 that says "increase Carl's withholding by $50,000/26 = $1923 per week for the remainder of Year 1" while they send Dr. Temp the happy notice that starting July 15 he will receive a $30,000/22 = $1363 per week check from the Treasury.

Sounds great! Except...hmm, are Carl and Dr. T still earning the exact same amount in Year 2 as each did in Year 1? For many people, the answer is sure, down to the penny, but for most, probably not, and for a significant number -- those who started or sold businesses, changed jobs, got raises and bonuses or didn't, entered or graduated school, retired or got their first jobs, et cetera -- it will be significantly different. This is why it's not trivial to predict next year's GDP from this year's, right?

And are Carl and Dr. T going to have no response at all to the news? Carl won't, for example, adjust his behavior to reduce his taxable income? Dr. T won't, for example, quit his job because the stipend (for which he doesn't need to work!) covers all his needs anyway? Hmm...seems dubious. The whole raison d'etre of a complicated tax code is to change the behavior of people, and so one either has to conclude the entire social engineering aspect of the tax code -- every last deduction, tax credit, et cetera -- is worthless, a giant fantasy game foisted on us for the past 100 years -- or you can be sure that both Carl and Dr. T will modify their behavior, the same way the mortgage interest deduction pushes people to buy houses, or the subsidies for solar panels encourage people to put them on their rooftops.

Both effects mean the chances that the government hits its target when they examine the results for Year 2, which they will receive on April 15 of Year 3, are very unlikely to be just right. So, drat, BuFedEq will just have to recompute the extra tax for Carl, and the extra bennies for Dr. T., and try again for the second half of Year 3....and again in Year 4, when it still doesn't work right, and again in Year 5, and so forth.

There's no guarantee the fluctuations will ever settle down, and indeed they might easily grow, and either way the constant unpredictable shift of vast amounts of money from one person to another will make any kind of multi-year planning impossible, with grave consequences for the economy.

One gathers that you thought about this as happening in some magical instantaneous way, like government knows, hour by hour, what people are earning, and can send 150 million adjustments on a daily, if not hourly, basis, to keep everything just where it should be all the time. I suppose with Deep Thought sized computers, and every financial transaction anywhere on the planet incorporated into some instantly-updated database, over which the government has immediate control, this could be hypothetically possible. I'm sure no normal person would ever consent to live in such a bizarre world, so the chances of it actually occuring are zero.

Expand full comment

Of course they will adjust their behavior, redistribution is a tradeoff between efficiency and total utility. But I don't see why it should be an argument against my case in particular.

Also, if their income change every year... I have no idea what problem there is. Just run the calculation every year. It's not like I want the inequality index to be correct to five decimal digits.

Lastly, I think "will make any kind of multi-year planning impossible" is untrue. The amount of redistribution happening is directly connected to the "natural" amount of income inequality, and that changes pretty slowly.

Expand full comment

Well, I dunno what to say. Go ahead and give it a shot. Write your Representative and propose your notion, and let us know what he or she says.

Expand full comment

Government knows the common man's income after taxes; the billionaires, aka the people you're trying to take from, have all kinds of wild crap to interpret.

https://www.cnn.com/2022/12/30/politics/donald-trump-tax-returns-released/index.html

"In 2017, Trump paid just $750 in US federal income taxes"

Expand full comment

Yeah, that's a problem, but would my policy make it worse?

Expand full comment

I've yet to see how your policy is even different from existing taxes. The government already has a vested interest in keeping its citizens happy, and access to the entire history of what they've tried and the effects it had, so presumably the inequality policy is already being applied as effectively as it can be.

Expand full comment

Well, inequality seems to be increasing(at least if we're talking about America), and I don't think this is because politicians explicitly decided that increased inequality is what Americans should have. It's more because changing something usually faces more resistance than not changing something.

Also, I do not think government policies are as efficient as it can be - incentives alone cannot do that. I mean, surely you don't think government does everything perfectly, since it has incentive to be popular?

Expand full comment

So I've been reading some about land value taxes, and I keep seeing the argument made that these taxes are good because, effectively, they don't discourage the production of some socially valuable good. I see a lot of arguments being made that, e.g., "if you tax [production of object] you'll get less [object], but the supply of land is inelastic so you won't get less of it by taxing it". Maybe real LVT people see this as a simplistic argument, but I have seen it get made by a lot of LVT proponents.

I am very confused by this argument. If you tax land value, you obviously do get less of something. You get less ownership of land. Is Georgism explicitly a program that aims to incentivize renting over ownership? I don't see it getting presented that way (which would probably be an incredibly unpopular presentation), but is that what Georgism is actually supposed to be and I just haven't seen any Georgists clearly come out and state "we want fewer people to own land and more people to be renters" ?

Relatedly, would Yimbyism get more traction if we abolished property taxes? Unlike income taxes (people can adjust their income with an eye for what tax bracket they fall into) and sales taxes (people can reduce consumption overall), people can't easily control the cost that property tax imposes on them by adjusting their habits. Hence rising property values present a real economic stress that makes people feel powerless and unfairly treated, and creates a perverse incentive to oppose amenity development.

Expand full comment

How do you get less ownership of land? Every time someone sells a piece of land, there needs to be a buyer on the other end. Do you expect most land to end up defaulting to government ownership?

That would probably be a consequence of a hypothetical ruinously high LVT, but even the full Georgist "100%" LVT proposal would have the LVT on eg a residential landlord be somewhat less than the rental income, with the difference being the value of the rents on the physical property and the value of the management and upkeep of the property. (It's *Land* tax, not *property* tax; the distinction is crucial)

Expand full comment

My understanding is that LVT shouldn't affect the cost of ownership, assuming land value cost is stable. The tax is priced in to the value of the land, so the only profit to be gained when a house is sold is the value of the house, not the land. Hence the sale price is cheaper, and the land price is only paid in the tax.

An LVT would be an extra economic stress for people who live somewhere where demand has gone up, raising the tax. I wonder if there should be a reduced tax (or reduced raising the tax, or something) for primary home owners.

Expand full comment

Your second paragraph reminds me of the retirees in California sitting alone in 4 bedroom house because their taxes were effectively frozen decades ago. Yes, it's easy to sympathize with them if tax increases forced them to move, but that's just the "seen" (in the Bastiat sense) against the "unseen" of prospective young families who can't find sufficient housing because those unoccupied bedrooms are kept unavailable.

Expand full comment

Regarding the YIMBY point, this is a strong reason for caution about LVT that I haven't seen anyone else recognize. Many Georgists seem to believe LVT would reduce NIMBYism, while I believe it would greatly increase it.

My understanding is that agglomeration effects count as part of the unimproved value of land - i.e. a fallow lot in NYC is more valuable that a fallow plot of equal size in Toledo.

So relaxing zoning laws and allowing more building would cause the unimproved value of land to increase in many places. So the current owners have a strong incentive to fight against that - they pay the increased taxes and get zero of upside of the increased land value.

Georgists, what am I missing?

Expand full comment

Relaxing/removing zoning laws would have no effect on the tax if the value of the land were assessed as if there were no zoning encumbrances.

If the taxing authority (e.g., a State government) were higher/broader than the zoning authority (e.g., city governments), it might even be politically defensible along such lines and preventing municipalities from free-riding on the rest of the State by artificially reducing their tax liabilities.

Expand full comment

This is also a worry I have. While an LVT that goes off of unimproved value at least avoids introducing an incentive NOT to improve land, it does seem to provide an incentive to oppose any developments that would increase property values.

Expand full comment

Current a development that increases property values in urban cores results in more rents accruing to landlords. With LVT those gains would be distributed more broadly, so more people should have an incentive to favor such developments.

Expand full comment

Current owners may want to fight their own taxes, but everyone else benefits from taxes coming from the unimproved value of land rather than something else which would cause deadweight loss. The status quo is not 0 taxes.

Expand full comment

"everyone else" is trying to become "current owners"

Expand full comment

Not so. Many people have no plans to own real estate.

Expand full comment

LVT starts incentivising renting land over owning land only when it's over 100%. With 100% LVT renting is economically equal to land ownership. And most Georgists are talking about 80% LVT, as far as I know, so owning land would still be more economically viable.

You are right that LVT does disincentivese something. It's land speculation, and sprawl. It's one of great features of LVT. And yes, LVT can and should replace property taxation. That's probably the easiest way to start implementing Georgists policies.

Expand full comment

I think most Georgists are looking at urban areas and work under the assumption that land is valuable enough that *someone* will pay the tax and productively work that land. They are primarily trying to avoid speculative purchases and holding land for non-productive purposes.

Under those assumptions, you're talking cities and professional land development and management companies, rather than private ownership.

Georgism seems to be less useful in places where land values are lower and in terms of private ownership.

Expand full comment

Nobody likes paying taxes, but it seems more fair than an income tax (which heavily biases society towards incumbents), aside from the theoretical economic benefits.

Expand full comment

If I understand this correctly, the point is that land rent is supposed to be always exactly as high as tenants are able to afford. If you tax this land rent, the cost can't be passed onto tenants because tenants just aren't able to afford any extra cost. If the landlords tried to do this they'd be left with empty properties not being put to productive use yet being punishingly taxed. Note that if this tax fell on improvements, like buildings on the land, then it would lead to fewer improvements being built. The point of Georgism is supposed to be to force landlords to put their land to productive uses and to alleviate costs to tenants (by making more land available and by benefiting from the tax). But the book review of Henry George's book from a few years back definitely explains this better than me, and with helpful charts.

Expand full comment

I don't think there is any sound reason to assume that an LTV makes land ownership more concentrated. I would rather assume the opposite, since it makes land ownership more similar to renting, with low up-front costs and higher running costs, which should make it more achievable for people with small cash reserves.

Expand full comment

There's a pretty powerful argument that it would make land ownership more widely distributed. At the moment, land has a high upfront cost, and larger incumbents can afford to borrow more cheaply, giving them a major advantage.

If buying land costs a few hundred dollars instead of a few hundred thousand (the proceeds are taxed, so it doesn't make a return, so it has very little upfront value), then anyone can do it. They just have to be able to use it productively.

Expand full comment

Taxing ownership of land doesn't produce less "ownership" unless people are simply abandoning property. It might concentrate ownership in the hands of specialists in the trade, which likely would mean a higher share of people renting. My understanding is that this is normal in Germany, and people don't think there's a problem with that.

> Unlike income taxes (people can adjust their income with an eye for what tax bracket they fall into) and sales taxes (people can reduce consumption overall), people can't easily control the cost that property tax imposes on them by adjusting their habits.

"Adjusting their habits" to "control the cost" of taxes is deadweight loss, precisely what a utilitarian tax system tries to avoid. Land being supply inelastic means avoiding that deadweight loss.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Yes I count concentrating ownership of land in fewer hands as reducing ownership, which seems bad to me due to renting being generally a less secure situation for people. It may be that American culture has traditionally valued land ownership more than european countries do.

Yes, from a perspective of economics, trying to structure one's behavior to lighten one's tax burden is 'deadweight loss'. For a lot of people, though, I think it's called 'getting by'.

Expand full comment

The reason why renting is generally less secure than land ownership is because land ownership is overpowered. The land sucks all the gains of progress, raising it's value and thus rents. Owning land means that your become richer with time, while renting land means that you become poorer. Current system is completely out of balance. That's the reason why people have to "get by".

With LVT gains of progress are removed from land and can be redistributed via UBI. In a sense land ownership becomes economically more similar to renting. But the security of modern day land owners isn't just lost: no dead-weight loss remember! It's redistributed, making everyone more secure on net.

Expand full comment

You can make rent very stable. Germany does that. People install their own kitchens and furniture, can paint that house etc. they can’t really extend it though. And in return the landlord can’t kick you out except for non payment or anti social behaviour.

The problem at the end of all that there’s no equity.

Expand full comment

The other problem is that it exacerbates the housing crisis and makes it impossible to move.

Expand full comment

? Renters can still move fairly easily in Germany.

I’m not advocating rental societies though.

Expand full comment

I quite like rental societies, but my impression was that places like Berlin have basically gone bad in exactly the way you'd expect aggressive rent control to go bad

Expand full comment

Don't most business rent anyway to avoid a large capital expense upfront?

Expand full comment
Mar 27, 2023·edited Mar 28, 2023

and to avoid the attentions of asset strippers who would otherwise want to buy the company just to get their hands on the property it owns.

I agree with Nathaniel that concentrating property ownership in fewer hands reduces social stability, because renters with no stable financial stake in their locality or country have less to lose. It is fine while everything goes swimmingly, but if or when the economy tanks then propertyless citizens are less reluctant to riot or revolt!

Maybe one reason Land Value Tax is not a thing in (most of) the US is that it seems to run somewhat counter to the basic Norman legal principle (inherited from the UK) that the Federal Government "owns" all land anyway, by right of conquest and purchase, and the most any individual or corporation can claim is possession in fee (and all this stuff about free men of the Land and suchlike is complete fictitious nonsense, there being no such thing as allodial title, although I'm not sure where native reservations stand in relation to that). So it is what is done with the land, or property built on it, that counts rather than possession per se.

Expand full comment

This is certainly true, but home ownership is a different ballgame and what I'm mainly concerned with

Expand full comment

Background: I live in an area where home values and property taxes have been sustainedly increasing and in the backdrop of that I find myself more and more sympathetic to nimby-adjacent arguments about gentrification. I keep reading about all the sound economic reasons for preferring them, but, like property taxes seem really capricious and bad Idk. Probably shouldn't do emotion-driven housing policy though

Expand full comment

Rolling LVT, a tax on land+improvements, and something silly like Stamp Duty all under the heading of 'property taxes' is a bad idea.

Expand full comment

Why is it that people are just concerned about unaligned AGI? If you could control the behavior of an AGI then is there not another promblem of how this would warp realpolitik lvl reality?

The Wizard whom The Genie serves could become quite powerful...

Expand full comment

People aren't just concerned about unaligned AGI. Indeed, if we manage to solve alignment, there will be a whole host of other problems in dire need of solving (e.g. political problems, risk of misuse by evil humans, etc.)

But solving alignment is a necessary first step on the path to solving those other problems. (If we build powerful AI systems at all, that is. We could also just not build them.)

Expand full comment

> The Wizard whom The Genie serves could become quite powerful...

Everybody's walking around with more computation power in their pocket than existed on earth in 1990. The likely scenario is everybody has a Genie. What does the world look like then?

As a social question, my suspicion is derived from a fairly sensible screed by Ezra Klein (I think) who said that the problem with AI is that we don't know how it works, as opposed to automobiles where we do know how they work. But of course, 99% of the human race doesn't know how automobiles work. Really smart people understand how automobiles work and have a sense that they can control them, but they are threatened with a technology they won't understand and whose power won't give them any differential advantage over the ignorant masses.

Expand full comment

In my own head I’ve started to think of this as sane AI vs insane AI. Because I means something very specific when I say sane and I don’t mean anything very specific when I say aligned or unaligned.

Expand full comment

I will point out that your definition of "sane" and mine or other peoples definition of the word are likely different enough that in practice you'll find yourself having to clarify what you mean a lot.

Expand full comment

Mine is purely functional.

Can you convince another person that you are sane without saying something you think is a lie?

That obviously creates groupings because at scale different groups interpret reality differently but it’s better than “we determine pure truth… somehow.”

As long as you are also in a feedback loop with the universe about your decisions I think that forces a lot of convergence.

Expand full comment

I don't think they are just concerned about unaligned AI.

But the issue is that unaligned AI is even worse than aligned AI.

Aligned AI has some chance of being controlled if you can restrict who uses it, and design it in a way that the alignment is more expansive than 'do what the operator demands of you'.

Unaligned AI is worse in that it can be catastrophic even if saints are issuing it commands. It can be catastrophic even if everyone on earth collectively agreed upon its goals. And even if we recognize this risk, controlling it will involve profound technological breakthroughs.

It's like a fast car. A fast car in control can be used to mow down pedestrians, or to get to legally get to work quickly. An out of control fast car is likely to run over pedestrians even if you only wanted to get to work legally.

And importantly, slowing AI progress down in the name of alignment will also give us more time to work on the issues around aligned AI being used negatively. And considering the much greater risk of an unaligned AI, it's better to lead with that to get people to realize the risks of AI than the weaker case of aligned AI being used improperly.

Expand full comment

"... if you can restrict who uses it..."

Who then restricts the restrictor?

Expand full comment

I'm not saying there's no risk if the government is able to control who gets to have an AGI system. I'm saying that the risk of people misusing well-controlled AGIs can in principle be reduced through means we currently understand (i.e. at least somewhat similar to the government restricting access to other technologies) - the main issue is having the political will to enforce these restrictions.

Reducing the risks of unaligned AI systems (beyond preventing them from coming into existence) is not something we have any clear idea of how to do even in theory, and it seems like we will need profound technical breakthroughs to get to that point. The issue isn't a lack of political will - we literally don't know how to align these systems, and the most powerful government in the world cannot make them aligned other than by reducing technological advancement to a snail's pace (and we do not have anywhere near the political will to do that).

The point is not that government will optimally or even just competently manage access to AI systems. The point was always just that unaligned AI systems present much bigger risks than narrowly-aligned ones, hence why they dominate the conversation.

As beleester says, the government controls access to weapons capable of killing millions of people and it's unlikely that any president (and the chain of command necessary to execute the president's demands) will abuse this power except in perhaps truly extraordinary circumstances. But the government doesn't need to be perfect here - it needs to be better than the alternative. And the alternative - where no control is placed over e.g. the production and trade of fissile material - is unambiguously worse. Similarly, the government will likely be do a deeply sub-optimal job of regulating access to AGIs - but the only realistic alternative would be an almost certainly catastrophic one.

Expand full comment

Who restricts the people who restrict access to nuclear weapons? What stops Joe Biden from ending the world in nuclear fire? It seems plausible that world-ending AGIs could be restricted with a similar level of reliability (i.e., the President *could* in principle order Skynet to end the world, but he has lots of people around him who can explain why that's a dumb idea.)

(I don't think *all* AGIs will end up restricted to the hands of state actors, but I do think that it's likely that the resources that would allow AGIs to conquer the world are under the control of state actors, rather than in some yet-undiscovered technology like nanobots.)

Expand full comment

It's a game of infinite regression. Supposedly, you would not want to leave the decision to a free and equal vote of the whole population of the world. Or the government of, say, China. Or a US government under Trump. Or any Republican president. Would you really, if you awoke at 0300 am, want somebody like HRC to decide? In the end, somebody or other has to make a decision and those who control who makes that decision will form a kind of Praetorian Guard, which may work for a while.

A reading suggestion:

https://metallicman.com/laoban4site/solution-unsatisfactory-full-text-by-robert-heinlein

Expand full comment

Yeah the best case scenario for aligned AI (Welcome to our new Elon Musk overlord) does not look too good - and there are a whole range of worse outcomes before we reach paperclip maximiser level of doom.

Expand full comment

I know this sounds prima facie dumb but could we use blockchain for this?

Like put all the GPUs in an air gaped data center. Input output operations w air gaped data center are mediated via multisig/smart contract transactions on some blockchain.

We can work to evolve AI to make the world more Harry Potter magical and economically efficient but if something irks a nation state they have a crypto graphically secured veto power.

Expand full comment

The question is why would anyone willingly do that? Instead of putting their GPUs in the magic crypto datacenter, they could just build their own datacenter, with blackjack and hookers and all the unaligned AIs they want. You would have to force people to use it, legally or militarily.

But if you get the government to force everyone to hand over their GPUs and lock them in your data center, then you don't need a blockchain, because the government is perfectly capable of keeping records already. Just use a centralized database to go with your centralized data center.

(This is an example of the "degraded blockchain" problem - if you create a blockchain to track a centralized service, then whoever controls the centralized service can just ignore the blockchain if they want.)

Expand full comment

This seems like a more involved form of an AI genie, which has been analyzed extensively.

But getting to the point where we have such control over ownership and development of AI systems would be a profound leap forward in and of itself, and is unlikely to happen without a radical cultural/political change.

Expand full comment

What are your current contrarian predictions for the next 5 years?

[Open question to all, not just Scott]

Expand full comment

>50% chance that the Yemeni Civil War ends in compromise within 5 years, after which the two major factions engage in limited further violence toward each other. So, more like Lebanon or a post-civil war Latin American country, and less like South Sudan. Compromise doesn't imply fully reunified central government.

Expand full comment

The Metaverse will be successful. (Not confident about this, but I think the current zeitgeist is way too dismissive)

Expand full comment

Turkey will be more authoritarian than today.

Expand full comment

Is this assuming a victory by Erdogan or regardless of who wins the election?

Expand full comment

Putin will be arrested or rip before Trump is arrested, re-elected or rip.

Expand full comment

Looks like next week we'll get to see how well this holds up.

Expand full comment

👍

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Josh Allen will continue to regress after peaking in 2021, and the Buffalo Bills will not make a Super Bowl.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Continue to regress? His stats in 2022 were basically the same as 2021.

EDIT: Whoops didn't see the "contrarian" predictions part

Expand full comment

Russia mobilizes significantly for the war in Ukraine AND they don't make that much extra progress AND Putin's regime stays stable.

(The only one I'm less than sure of is the first part of the statement; otoh it's very obvious that their current strategy of pretending not to be at war is not working and will eventually result in them losing.)

Expand full comment

NMSZ shakes things up

Expand full comment

Carbon removal is going to be a bigger deal than people expect. Also I think that a lot of previous predictions here aren't really contrarian.

Expand full comment

I'm tangentially involved with carbon removal right now and wholly agree with this prediction.

Expand full comment

(Assuming that what you're referring to is what is officially known as "carbon capture and sequestration", i.e. moving carbon from the atmosphere/plants/ocean/soil cycle and parking it well underground.)

Expand full comment

Yeah that is what I meant

Expand full comment

Can't we just eat the carbon? :)

Expand full comment

The US decides it will go after China rather than Russia, which means it will cut its loss before Ukraine runs out of bodies. The results will be:

- Russia will be in possession of whatever areas it deems necessary for security reasons, probably everything to the East of the Dniepr.

- Poland, Hungry, Romania et al will secure/protect/reunify the areas of the Ukraine they have historic/ethnic claims to.

- US agro business will take de-facto control of the fertile land.

- Surplus Ukrainians will migrate to E-Europe, US, and W-Europe.

- Peace will be declared.

Expand full comment
founding

That's like saying in 1940, "The US will go after the Nazis rather than the Japanese, therefore [...] Japan will conquer most of China and peace will be declared in the Pacific". The US is a global superpower; it is *the* global superpower, and it can do two things at the same time. Particularly when they are two different sorts of things; the aid we are giving Ukraine is mostly stuff that is irrelevant to our dealings with China.

But it would help if you were to be more clear about what you mean by the US "going after" China. Are you expecting the US to go to war with China this year or next? To actually invade China? Be specific, and then ask yourself why this would limit US support for Ukraine.

Expand full comment

"That's like saying in 1940, "The US will go after the Nazis rather than the Japanese, therefore [...] Japan will conquer most of China and peace will be declared in the Pacific"."

Correct. Some people did and others argued to stay out of the war completely. As it turned out, the US were capable of defeating Japan basically on their own, focusing their main effort on Germany while lending, leasing, and otherwise supplying the war effort of the British Empire and the USSR. This remains a remarkable achievement. It took a while till everybody got the message, but by the 90s, the US were the sole remaining super-power. From the outside, the US appear to have developed the habit to go "abroad in search of monsters to destroy", creating them first, if necessary. Obviously many people claim this is inherently good, necessary, unavoidable and in any case worth it for the fulfillment of their manifest destiny. All of which might even be true; I am not arguing morality or even utility.

My contrarian speculation is simply the war party will conclude that impelling a de-facto alliance between China and Russia was a bad move, but still correctable.

As for "going after" China, the US aims for 'full spectrum dominance', so at the very least

I expect them to try to prevent the political reunification of Taiwan and China as well as their economic integration. If it should become necessary to destroy Taiwan to save it, so be it. Eggs and liberty-omlettes.

For some unfathomable (to me) reason, I'm not on the JCS's email-list, so I have to idea as to how exactly this is supposed to play out.

Expand full comment
founding

The only specific I see in there is that you expect the US to stop China from invading Taiwan. Yes, I agree. But, A: China is very unlikely to invade Taiwan while the war in Ukraine is going on, and B: If they do, then either you are wrong and the United States will not stop them, or you are right and the United States will send its Navy and Air Force to stop them using weapons which are rather different than the weapons the US is giving Ukraine. So I'm still not seeing how "going after" China means cutting support to Ukraine.

Expand full comment

> The US decides it will go after China rather than Russia

Why would Americans who are very eager to yield to one foreign power for no obvious reason stand up to another? Seriously why? Because Chinese are godless commies and Russia is a bastion of anti-wokeness and Christian values?

Both Russia and China are in the active phase of their decade-long project of declawing the US, and so either the US stands up to both or yields to both. Yielding to Russia but going after China is like boxing when you let your opponent's left arm punch you freely, but you're very tough on the right arm.

Expand full comment

I do not know that Americans as a population are "eager to yield" to either Russia or China, then again, I am not a US citizen.

The pivot to Asia was an Obama-era policy. Conventional wisdom guides one, if possible, to finish-off one adversary before switching to the next. If China was seen as an unavoidable power-competitor that needed to be confronted, then why not accommodate Russia, if only temporarily, when it would still have been cheap?

Maybe the US foreign policy establishment is best conceived of as a hydra whose heads get in each others way, much to the detriment not only of the rest of the world, but also for Americans.

Expand full comment
deletedMar 27, 2023·edited Mar 27, 2023
Comment deleted
Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Yeah, half of GOP base and leadership did this astounding switcheroo from "Russia bad" to "We support Russia NO MATTER WHAT" in less than a decade, few years really, all the while Russia itself kept ramping up their anti-US propaganda and subversion efforts with no end in sight.

So all this current "tough on China" rhetoric is a joke, when time comes, these guys will suddenly discover that China also has nukes, it's not an American interest to be in the East Asia, and "I'd rather be Chinese than a democrat".

Expand full comment

Schlechter Witz.

Expand full comment

Of course. That is because it was not intended as a joke.

Expand full comment

Isn't the US spending only like one tenth of its military budget on Ukraine? Keeping that up seems pretty cheap compared to the alternative of admitting that redrawing borders by force without US approval is possible, even for a weak power like Russia.

Expand full comment

This CFR report suggests $47 billion in military assistance in the last year:

https://www.cfr.org/article/how-much-aid-has-us-sent-ukraine-here-are-six-charts

The current Pentagon budget is $782 billion, so current military assistance to Ukraine represents 6% of the Pentagon budget.

Expand full comment

The US populace will turn on such a continued expenditure given they refused to spend that money on infrastructure or helping their own citizens. It's already begun.

Expand full comment

I doubt it. We already spend way more on infrastructure and "helping" our own citizens, and hardly anybody of sense thinks the marginal utility in spending still more is significant. On the other hand, spending a modest amount on helping the Ukrainians humiliate Russia is both fun and worthwhile, everyone so far has been able to get behind that -- indeed, it's probably the only thing preventing Joe Biden's popularity from going completely into the toilet (which is why Presidents have historically liked to start and successfully prosecute small wars, of course).

To be sure, there have been and will continue to be those who think playing nuclear chicken with Putin is a bad idea no matter what, those who couldn't care less about foreign policy in general (e.g. Obama and his followers), those who oppose the thing just because Biden is in favor, and a handful of those who also hated Apollo because "if we can put a man on the Moon, why can't we instead [insert random fanciful social goal]?" But there's no obvious sign these demographics are growing or shrinking, a la Vietnam.

It's also the case that people do get tired of even not very costly wars, like we got tired of the war in Iraq or Afghanistan, or On Terror, although in the first two cases there were American boots on the ground, which greatly accelerates getting tired of it. It does seem likely that a continued heavy expenditure would get people crabby and wanting to spend the money elsewhere, but so far the amount hasn't been that high[1], and aside from the usual grumbles I see no sign that people in general have had enough. What's your evidence that they have? (And I don't mean that somebody somewhere has, e.g. Rand Paul, but that a majority of voters in any important jurisdiction has.)

------------------------

[1] https://www.cfr.org/article/how-much-aid-has-us-sent-ukraine-here-are-six-charts suggests $50 billion in military assistance to date, which is a completely trivial fraction of the $6 trillion annual Federal budget even if it were spent all in one year.

Expand full comment

I disagree with pretty much every single position you took in this post, from the "marginal" utility of spending on infrastructure, to the benefits of humiliating Russia, to the popularity of Ukraine support as recent polls show support for propping up Ukraine is softening considerably. China-Russia relations are strengthening and if China starts selling them weapons, the US will have outplayed itself by delaying negotiations too long. I guess we'll see where we are in about 6 months.

Expand full comment
Mar 29, 2023·edited Mar 29, 2023

That's fine, that just puts you in the minority, and we have a robust tradition of encouraging the minority to speak its mind, since good ideas inevitably emerge from it, as well as much nonsense.

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

Why have they then not turned on the considerably greater military expenditure that they have already had for many decades, which has largely been directed at the same rival power? Like why would they do it now that a smaller expenditure is finally succeeding in weakening that rival power?

Expand full comment

There's a big difference between sending a foreign country tens of billions of dollars, and general defense funding that employs Americans, shores up national defense capabilities and builds the American manufacturing base. Only a minority of Americans care about weakening this "rival" you speak of.

Expand full comment

You mean, only 173 billion dollars per year? Yeah, cheap.

Expand full comment

Yes, that is very cheap compared to their other spending on defence against Russia.

Expand full comment

Assuming it is indeed money well spent, and I'll take your word for it, it's also hard on Ukrainians.

Expand full comment

The Ukrainians evidently consider fighting a less bad choice than the alternative. The state of affairs for Ukrainians caught behind the Russian lines probably has something to do with that.

If they change their mind about that, it won't matter whether the NATO countries are willing to ship them arms or not. And if we ever decide to cut them off while they're still asking for materiel, we can try to frame that as in our interest if we choose. But we can't reasonably say we're doing it for the Ukrainians.

Expand full comment

"But we can't reasonably say we're doing it for the Ukrainians."

Agreed.

Expand full comment

AI is going to go through a boom cycle followed by a bust. There will be real practical applications and some big successes. But on the whole the world changing promise will be unfulfilled. There will also be several big AI grifters who won't get caught until the tide rolls out.

Expand full comment

Ukraine's war will freeze in 18 months top around the current frontlines, without a formal ceasefire.

Expand full comment

Isn't this the default scenario? It doesn't sound contrarian to me. What other scenario do you consider mainstream?

Expand full comment

Then register my prediction that in 18 months the war will end with what can be most reasonably described as Ukrainian victory

Expand full comment

Five years is a little tight on when I actually more or less expect this, but I think within 5-10 years current "oh, wow, religion is sure going away fast and is definitely in its death throes" will be replaced with "it sure is weird that there's been such a correction back towards religion" pieces. I don't necessarily think this will be a purely christian revival.

Expand full comment

We already got a new religion, a couple of them in fact (wokism and environmentalism).

Expand full comment

I’m waiting for Space Mormonism before I get involved in anything organized.

Expand full comment

My long term future vision is much more religious than we are now since religious people have more kids. Working on a sci-fi story where the Amish are a major population of the US and there’s another group called “The Mish” who basically decided to stop technology where we are at right now.

Expand full comment

While most of "The Mish" will stabilize around that flavor of neo-Luddism, a subfaction will splinter off from within it with even more communistic beliefs and customs. They will be led by Michael Chiklis.

Expand full comment
Comment deleted
Expand full comment

I hope not. I like the variety of people. When people cheer removing Downs Syndrome from their population it doesn’t make me hopeful, though. Not sure how people got the idea that’s just awesome with no drawbacks.

Expand full comment

I can think of several drawbacks for eugenics writ large, but what's the drawback of removing Down syndrome specifically? (My wife and I tested for this type of chromosomal abnormality, and would not have brought an affected child to term.)

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

You might want to bear in mind that Down's is not a heritable condition. It's just an accident early on in zygote development. No matter how many Down's babies you abort, you will never reduce the future rate of new Down's babies. It's just a thing that happens at a certain low frequency. So you will never reduce the number of abortions that need to be done.

The drawbacks seen by people who work with Down's adults are that as the community shrinks, the support structure erodes, the companionship vanishes, and it gets much harder for those who are left. So probably you need to make two choices: (1) abort ALL Down's babies, so there really are zero born, which would be pretty draconian -- hard to get everyone to sign on board voluntarily, or (2) accept the fact that the support that born Down's children, and adults, get is going to be of significantly lower quality.

Neither of these are especially ethically or practically attractive, I would say. There is also the uncomfortable history of Lebensunwertes leben to consider: at various times in history people have decided certain minorities live lives that are so unworthy that it's better they never be born -- cf. the history of sterilizing the unfit[1] or undesired -- or more drastically be killed. A broad policy of making such judgments does run the risk of some troubling inexactitude in just where the line is drawn separating the lives unworthy of living, and those worthy. One would like to think it would never cross over to, say, whether you have a congenital tendency towards depression, transsexuality, or an unfashionable ideology -- but the history of humanity is not reassuring.

-------------------

[1] "Three generations of imbeciles are enough."

Expand full comment

I understand this in theoretical terms but when I think of the practical reality of talking to someone with Down Syndrome and telling them I’m going to take steps to prevent anyone else like them from ever existing it just falls apart. There’s stuff where I can imagine doing that, like harlequin icthyosis (don’t google), but wherever I see someone who isn’t in severe unending pain and has the spark of consciousness I can’t justify removing them from existence and it feels quite monstrous to do so.

I think it’s a different question if you’re doing it beforehand and you are doing embryo selection but a lot of it happens post pregnancy.

Expand full comment

I thought we already experienced that replacement a couple of decades ago? Like after 9/11 everyone seemed to realize that religion had become relevant again, and in addition there was the revival of the Church in Russia, and Hamas eclipsing the PLO, and all the Muslim terrorism in Europe, and ISIS taking over after Baathist secularism and so on.

Expand full comment

The trend towards less religiosity seems to be well established, see for example the link below. On what do you base your guess?

https://www.pewresearch.org/religion/2022/09/13/how-u-s-religious-composition-has-changed-in-recent-decades/

Expand full comment

I'm not sure I understand the objection part of the question - I'm more or less predicting the reversal of a trend, not that one doesn't exist.

Beyond that, I think the prediction is most heavily based on various conversations I've had with individuals who have tried atheism (and assorted atheist soylentish religion substitutes) and who are broadly miserable and are looking for something besides that. Very unscientific, but I'm mostly just competing with an isolated assumption that a trendline won't change, so I'm fine with it.

Expand full comment

I agree. John Michael Greer has for a long time been making the joke that if you look at the number of active practitioners, astrology is mainstream and astronomy is a fringe interest for weirdos. Same with the number of people who believe in ghosts or angels or fairies or divination or magic. Of course there are people who are genuinely oriented towards anything rational and against anything irrational. But it continues to appear that what people really objected to about Christianity was the morality and the necessity of going to church.

Expand full comment

I think American religious trends are going to broadly continue. But I also think the commentariat is extremely disconnected from what those trends are. The idea that religion is dying is true only for upper middle to upper class white liberals. And I expect that trend to continue. I also expect Christianity to continue to get browner/blacker/more ethnic (as the population will generally). And I expect continued modest growth in non-Christian religions.

I also fully expect upper middle class progressive New York journalist to continue to be bubbled. "Well, that church on 5th street shut down. Guess there's no more Catholics in the world!"

Expand full comment

The lower classes are less likely to attend religious services regularly, even if they identify as religious on a survey.

Expand full comment

This isn't what I've seen (see, eg, https://www.pewresearch.org/religion/religious-landscape-study/compare/attendance-at-religious-services/by/income-distribution/). While the very poorest people attend less than moderately well off overall the trend is that more income means less attendance.

Expand full comment

My guess is that this won't happen by then, but perhaps in a later generation.

Expand full comment
Comment deleted
Expand full comment

What do you define as a clear decline? I think declining population and slowing growth rate are baked in by this point. But there's a difference between "4% growth with a declining population" and outright stagnation let alone full decline.

Expand full comment

Not in the least contrarian. There’s a whole industry in the US devoted to that ideology. It’s wrong though.

Expand full comment
Comment deleted
Expand full comment

Simp is doing a lot of work there.

The standard rhetoric I see online is that China is bust because of demographics or house prices, or trade wars etc.

Expand full comment

"bust" is a bit strong, but yea the China narrative has shifted a lot in the past year or two. The MSM hasn't quite caught on to that shift (unless the Economist counts) but probably will soon.

Expand full comment

The economist is about as mainstream as it comes. The end of china’s growth has been predicted for more than two decades. There are people making careers out of it.

Expand full comment

I feel like it's underdiscussed that China has apparently not had massively negative effects from ceasing Covid Zero. As far as we know, they have not had a tremendous wave of sickness & death. Obviously the CCP is highly incentivized to lie, but China is very integrated into the global economy- there are lots of Westerners who live there, have VPNs to use social media, businessmen fly in & out regularly, etc. If millions of Chinese were now dying of Covid after they lifted NPI restrictions, we'd know about it on some level, and apparently this.... just hasn't happened.

Feels like this is a massive indictment of NPIs like distancing and lockdowns? Isn't this as close to a natural experiment as possible, proving that they simply aren't very effective?

Expand full comment

For what it’s worth it seems likely that China did in fact have an enormous wave of deaths it’s just that:

a) The mortality rate of Omicron is *only* like 0.3%

b) They more or less successfully suppressed the news.

People have attempted estimates by tracking death rates of specific groups via obituaries (scientists, etc) and got estimates around 1 million if I remember right, which is about what you’d expect.

Partly, I think it’s that by the end of zero Covid the restrictions had gotten bad enough that there was more positive news about them ending than negative news about the death toll.

(Only is bracketed because it’s a much smaller number than it could have been, but still high enough to kill a ton of people)

Expand full comment

Depending which estimates you believe 1-10 million people died just in that 3 mkbth period, hospitals were overwhelmed and basic medicines were being sold to the highest bidder at extortionate prices because there were such severe shortages. Even with conservative estimates its a greater death toll than any natural disaster in recent years or the total death toll in the west over covid. The reason nobody is talking about it is that China is very good at suppressing information.

Expand full comment

Where did you get that information? It’s highly unlikely. China rode out the most dangerous forms of covid, and omicron isn’t that dangerous, though highly contagious.

Expand full comment

Omicron isn't an order of magnitude less dangerous, and it's possible much of the measured difference to prior strains was polluted by the fact of prior infection in US/Europe where the reporting data is actually nominally reliable. At just 0.1% IFR (~4-5x lower than prior variants) and 75% attack rates you get 1M deaths. I would be shocked if the Chinese were magically impervious to a virus to which no other group is (except maybe parts of subsaharan Africa with median age of 19). Also note than the estimated deaths in India was 3-5M vs 10x fewer reported officially. It's easier to hid 1M+ sized numbers in populations over 1B.

Expand full comment

Two things could be true at once. That the Chinese figures are too low and that they aren’t as high as 1-10M claimed by the op. I have seen reasonable estimates of up to 1 million. I don’t believe 10m.

Expand full comment

Also I should point out my range is roughly consistent with taking Hong Kong deaths (where data may be credible) and extrapolating to all of China.

Expand full comment

I don't believe 10M but also don't believe <1M. Would think 1.5-2.5M is likeliest. 10M is crazy, would imply close to 1% IFR.

Expand full comment

No? I think lockdowns in most western countries did more harm than good, but in China they worked exactly as intended: hardly anyone died of covid before their anti covid policies were lifted. So they had time to wait until most people were vaccinated and a much less dangerous variant of covid was circulating in the rest of the world. That variant was also much less containable, so they opened up. Due to the vaccinations and the variant, there were much fewer deaths than they'd have had by opening in 2020 or even 2021.

I'm not saying we can trust the Chinese numbers, but even if we assume 10x or even 50x as many "real" deaths they're still far below the deaths per person of countries that didn't pursue zero covid or stopped it earlier.

Expand full comment
deletedMar 27, 2023·edited Mar 27, 2023
Comment deleted
Expand full comment

If that's how you feel, then you can't make any statement about China whatsoever. Maybe they never had lockdowns to begin with. Or maybe they never opened up. Who knows? But in reality, China isn't North Korea, and there's enough grassroots communication between people inside and outside of China that we can in fact get a somewhat orders-of-magnitude idea of what's going on.

Expand full comment

https://www.nytimes.com/interactive/2023/02/15/world/asia/china-covid-death-estimates.html

Not agreeing or disagreeing with you, just giving a indication of the likely actual deaths

Expand full comment

I kept hearing they did have enormous numbers of deaths, which people were surprised by because previously the Chinese government had devoted so much effort to suppressing it nobody expected they would abandon it until they'd vaccinated enough to avoid that.

Expand full comment

The problen with an aubthoritarian system is that you can't acknowledge within it that the current policy could be wrong. Any preparation for mass covid outbreaks would mean admitting the possibility that the current policy wasn't going to be 100% effective, which would be career suicide for any officials who did it. It also seems like there wasn't really a plan but a sudden decision at the top

Expand full comment
Comment deleted
Expand full comment

> Well, we already knew that distancing, lockdowns, and community masking were worthless; the science is very clear on that matter.

Any links to this very clear science, please?

Expand full comment
Comment deleted
Expand full comment
Comment deleted
Expand full comment

Once the issue is no longer "just look at Covid rates", but whether the appropriate epicycle to explain the large difference is "greater underlying illness and obesity" or "initially weaker NPIs, rapidly further undermined by large scale noncompliance and a major party's increasingly defining itself as opposing Covid reduction measures", it seems safe to say the claim that the science is settled goes a bit far. (At least without more data and analysis than have thus far been introduced.)

Expand full comment

Hello all, I am a young engineering student that has recently been trying to get into classic literature and I would greatly appreciate recommendations! I’ve been reading mostly fantasy my whole life (Sanderson, Rothfuss, Tolkien style stuff), but recently I have wanted to deepen my knowledge of the literary canon and the humanities more broadly.

Since starting my classics journey a couple months ago I’ve read Hemingway, Dostoevsky, Orwell, Rilke, Camus and several others to varying success. I sometimes feel like I’m just blindly throwing darts at the board and trying land on something valuable, so I’d appreciate any wisdom or direction on the subject.

I’ve been thinking about picking some Jane Austen or Oscar Wilde but I’m not super attached to the idea. Let me know!

Expand full comment

i would go with joseph brodsky readling list, the name of it "to have a basic conversation" kind of speaks for itself: https://bookhaven.stanford.edu/2013/11/joseph-brodskys-reading-list-to-have-a-basic-conversation-plus-the-shorter-one-he-gave-to-me/

Expand full comment

My recommendation as a once engineering student years ago is to start with the old stuff and go forward. Thucydides and Caesar are great places to start, especially for engineering students. Those should lead you forth, and by just following curiosity across time, after some years you'll be reading the entire works of Joseph Conrad which really should be read after one turns 30 anyway. YMMV.

Expand full comment

Kipling, one novel (_Kim_) and lots of short stories and poetry, is very readable.

Expand full comment

I found the Russian classics (Tolstoy and Dostoyevsky in translation) more rewarding than any of the English Victorian lit, fwiw.

Actual Classics (i.e. Greek and Roman literature) is also quite good. Herodotus is kinda bonkers fun - a guy trying very hard to be a proto-historian but also very credulous about everybody’s wacky mythological stories.

Expand full comment

A list:

Tolstoy: Anna Karenina

Nabokov: Pale Fire

Philip Roth: American Pastoral

Kafka: The Trial

Milan Kundera: The Joke

Faulkner: Absalom, Absalom!

Melville: Moby Dick

Henry Miller: Tropic of Cancer

Gogol: Dead Souls

Thomas Pynchon: V

W.G. Sebald: Rings of Saturn

Borges: Labyrinths

Hrabal: I Served the King of England

Machado de Assis: Quincas Borba (Philosopher or Dog?)

I suggest if you find a book you really like, read more by that author. Having favorite authors you are obsessed with (or are at least really, really into) is the key to enjoying literature. The only value of it is enjoying it.

Expand full comment

> the literary canon and the humanities more broadly

Take some time to think through what you want to get out of this. Literary classics optimize for different qualities than genre fiction, though academics tend to overrate the distance between the two. Given your background there's going to be a certain amount of trade-off between "comfortable" and "rewarding". Don't force yourself to read something just because you think you should, but don't give up on it because it's not your usual fare either.

If you need to adjust to the new vibe gradually, my picks for literary "gateway drugs" would include Austen (Pride and Prejudice), Chesterton (The Man Who was Thursday), Dickens (A Tale of Two Cities), and Twain (Huck Finn). But there's also something to be said for going straight for the really deep classic canon, e.g. Homer, Virgil, Dante, Cervantes. One advantage of these is that you'll read them in translation anyway, so the language is relatively low-friction modern English and you can focus more on the ideas and culture which are going to be very unfamiliar.

Expand full comment

Candide. The Meditations. Count of Monte Cristo. Iliad/Odyssey.

Expand full comment

Shakespeare. Pick a play and listen to an audio production while reading along (or the "Shakespeare Appreciated" audio books that intersperse commentary on what's going on). Then watch a few different productions of the play. You will quickly develop strong opinions on how you like the play to be staged, what's important in the plot and characters, etc.

Expand full comment

Asimov's Guide to Shakespeare provided lucid background to the plays in two volumes: the English plays and the Italian plays. It made enjoying the canto much easier.

Expand full comment

Greg Cochrane has book lists I've found tip top. Some classics, some not. https://westhunt.wordpress.com/

Expand full comment

Dostoevsky is the best, did you read The Dream of a Ridiculous Man? You almost certainly read The Brothers Karamazov, right? Maybe you should check out Don Quixote next.

Expand full comment

Read what you enjoy. One thing will lead to another.

Expand full comment

A later thought: I found Alexander Pope's (18th century poet, essayist and wit) essays exceptionally easy to connect with. He has an essay called I think "Peri Bathos" about all the bad, soppy, cliche-ridden poets of his day, and I can remember reading that essy in my dorm room as an undergrad and literally crying with laughter. I'd catch my breath, think I was maxed out, resume the book and come to a bit *even wittier* than all that had come before, and I'd be convulsed with amusement again.

Expand full comment

Let’s see, what’s good that hasn’t been mentioned yet…?

Henry James, Graham Greene, Truman Capote, P G Wodehouse, Agatha Christie, Eiji Yoshikawa, Hans Christian Andersen, Mark Twain, Carver

If you decide you don’t need “classic” and want to try some more contemporary literary fiction: Cesar Aira, Roberto Bolano, Jesse Ball, Zadie Smith, Amor Towles

Expand full comment

You can also go back in time and read some of what inspired Tolkien: try Walter Scott’s Lady of the Lake, some Middle English like Sir Orpheo or the Prose Merlin.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Many of the classics are hard to access -- they seem slow and dusty and alien. I recommend finding a book by a lover of literature -- their essays about favorites of theirs will give you ideas about which books to try, and also help you find your way into enjoying them. I'm having trouble right now thinking of books like that. Let's see: Virginia Woolf, The Common Reader is one -- and I think she wrote 2 separate collections, Common Reader 1 & 2. Dwight MacDonald's Against the American Grain is good. You want to read the section named "Heroes" -- about good authors. The other half the book is about writers he thinks are jerks, and these essays are also very good and very funny, and help give you an idea of what makes a book good or lame. Edmund Wilson and Mary McCarthy wrote good essays about literature, now collected into books.

Expand full comment

Totally agree with this. Reading commentary and analysis on literature will go a long way on helping you understand it and subsequently enjoy it more (or recognize why you might not like it).

I wish this was leveraged a lot more in teaching literature outside of college. Yes the teacher maybe tells you about the motifs and themes but a HS teacher doesn't have the same level of knowledge of every author as a researcher might. Once i started reading commentary and analysis I learned to like and understand much denser works. You have to see how other people read a work before you can read it deeply yourself.

Expand full comment

As a fellow fantasy reader, I would recommend Goethe. He's most famous for Faust, which is pure fantasy (Faust And Mephistopheles Adventures), but I also really enjoyed the short story Egmond, which is based on the Dutch Revolt but still reads like Medieval fantasy.

Shakespeare of course. The Tempest is a weird read, pure fantasy where an exile uses spirits to take revenge on the people who exiled him, and the heroes and villains can be switched around however the reader interprets.

Third in the Oedipus series, Antigone seemed the most engaging, with Antigone defying the state to bury her traitor brother.

I don't remember much of The Iliad, apart from the movie Troy shitting on the source material. You should probably read it.

The Catcher in the Rye and One Flew Over The Cuckoo's Nest are both well-written stories about anti-establishment protagonists. Cuckoo's is better on the grounds of actually having an ending.

Steinbeck's East of Eden is about the upbringing of two children of a psychopath who abandoned them at birth.

I guess if you're looking for meaning in the classics I've got to recommend Fahrenheit 451, in which a man looks for meaning in the classics.

Expand full comment

>I don't remember much of The Iliad, apart from the movie Troy shitting on the source material. You should probably read it.

I would recommend starting with The Odyssey and then reading The Iliad. The Iliad is a great story, but it's really dense and can be kind of hard to follow if you're not used to the style of ancient epic poetry. The Odyssey makes a much better gateway into the genre due to following a more standard hero's journey narrative template.

Expand full comment

I'm pretty sure I read the Iliad through, but I'm not sure I ever finished reading The Odyssey. Feels like I gave up on chapter 2 and just watched the movie. Likewise Moby Dick read like a chalkboard, can't recommend that one either.

Expand full comment

Well it’s been just over three weeks on quetiapine/seroquel, I’m now on 200mg a day.

Other than as a tranquilizer at bedtime this stuff is USELESS!

The first two weekend days gave me some more energy, but once I went back to work on Monday I felt even more lethargic in the morning, but a little more energetic in the early afternoon,

Moodwise I still ruminate on my (lack) of a fully romantic relationship and unhappy marriage and get tearful or angry.

The pills don’t work.

I have an upcoming telephone appointment scheduled on May 15th with the psychiatrist who prescribed me these pills, I suppose I’ll keep taking them until then to give them a fair shake but I’m not optimistic.

This will be the fifth anti-depressant I’ve tried (likely) without success (though one, bupropion, did work to diminish my sadness, but it was at a price of greatly increasing my anger).

Expand full comment

Take it from me, a guy who's been institutionalized for suicidal depression (caused by romantic dissatisfaction too): your pain is just a fart in your brain. The way out is to see it like that, and focus on the things that are not pain, and that are not self-justifications that your pain is acceptable. Life is so much bigger than relationship trouble.

That said, as to marital woe, is it that you can't stand your spouse, or just that the spark is missing? If the latter, there was a great line in Before Midnight (last of a trilogy): "You want a fairytale, and what we have is not perfect, but it's real." (paraphrased)

Expand full comment

Psychologist here: Meds aren't the only thing that can help. But if you are going to do the med route, read up on them & educate yourself about options. Three that are often neglected are (1) a brief course of psychedelic-assisted therapy using psilocybin (now or soon to be legal somewhere in the northwest -- Oregon?). (2) Ketamine. (3) MAOI inhibitors. Not knowing a thing about your drug trial history or your situation, I can't say whether any of these make sense for you. I do, though, think that if you're seeing a psychiatrist who rejects all of these out of hand (as opposed to saying they're not a good approach for you in particular) you should consider changing docs.

Expand full comment

Thank you for the update

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

Software engineer considering a mild career pivot. Inspired by my own struggle on the anxiety spectrum, I want to build one or more open-source web apps that support the user's mental health, and apply whatever research exists on effective interventions.

Some ideas: help folks understand their mental state over time ("How do you feel this evening?"). Help folks build good habits with mild gamification: user creates a morning routine checklist, and builds a streak each day they check all the boxes. Help folks break undesired habits: "repsonse plans" for when they (e.g.) procrastinate at work, eat 7 slices of pizza, or have a panic attack. In each case, the app offers user-customizable guidance to help break the habit. You see how often you engage in the bad habit, hopefully decreasing with time. Maybe also guided meditations and breathing exercises. Maybe long-term goal setting and tracking progress toward them. Maybe someday, a "multi-player" aspect where you and a loved one can follow each others' progress.

A lot of proprietary apps do things like this, but I want to build it as open-source, local-first software (https://www.inkandswitch.com/local-first): "you own the data, in spite of the cloud". Probably starting as a progressive web application (PWA). The project would be open to public contributions.

The initial audience might be vaguely-sophisticated nerds who want these features without bigcorp surveillance, and perhaps who want to export all the data for their own analysis. But the big goal is to build a highly usable solution that everyone can benefit from, mildly analogous to Signal (the messaging app).

Is anyone working on approximately this? Who should I talk to?

Expand full comment

There is an app called Healthy Minds. It’s free and from what I’ve read the group seem well intentioned and earnest about creating something useful.

https://hminnovations.org/

Expand full comment

It seems like predictionbook.com has been down for the last several days- does anyone here know anything about that? Is it likely to be fixed at some point or gone for good?

Expand full comment

Has anyone else noticed that TED talks suddenly are not a thing anymore? Wrote a little piece about it here --> https://fictitious.substack.com/p/where-did-all-the-ted-talks-go

Expand full comment

The first couple were cool, then they started filling them up with people looking to sell stuff and people with half baked ideas, and they quickly lost their veneer of scarcity/quality.

Expand full comment

Suddenly ?

You mean this as a metaphor for how prevalent they were before their decline in popularity, not as an actual measure of surprise right ? I remember them not being popular for a long time.

The problem with TED talks is that they never defined their "ideas worth spreading" objective clearly, when the "ideas" they present could be as vapid as girl power feminism or as revolutionary as addding new senses to the human sensorium (https://www.youtube.com/watch?v=4c1lqFXHvqI) or denying that the Self exist (https://www.youtube.com/watch?v=ZFjY1fAcESs), it becomes a hit-or-miss event of hunting for the insightful needle among the vapid culture-war haystack.

And like you say in the beginning of the article, TEDx had something to do with it. No way putting your name over any old gathering of students is not going to significantly dilute its meaning and render it meaningless.

I don't know why, but https://aeon.co/ gives me many of the same vibes that TED used to have. Cosmopolitan, but not necessarily progressive to death. They do have their... specimen, but not in the same concentration as TED or (god forbid) Vox.

Expand full comment

TEDx was a bad idea.

With TED, I felt there was a sufficient quality check. I mean, nothing is perfect, but simply watching random TED videos is probably better than 95% of things I do online.

But TEDx, that was just... anything; mostly popular bullshit. It seems like the only requirement was "can make a 20 minute talk". And I kept reminding myself "only click TED, never click TEDx", but damn, it looked so similar that I kept making the mistake.

And then, of course, I also stopped clicking TED.

Expand full comment

There's always more money to be made if you've got a high-end brand and you can introduce a less good but cheaper version marketed to the next stratum down (which is several times the size of your current target market).

Expand full comment

Yeah, absolutely, they were declining for years now, it was the realisation of them being gone thart was rather sudden for me.

Expand full comment

I think probably Covid put the finishing nail in them too. Having a big audience to hear the talk was what made them a TED talk instead of just a regular youtube video essay. But for most of 2020-2022, you couldn't get an audience of hundreds or thousands into an enclosed space.

Expand full comment

Andrew Gelman complains about how many of them Matthew "Why We Sleep" Walker has had, and most of the ones he links to post-date the start of the pandemic:

https://statmodeling.stat.columbia.edu/2022/06/17/walker-guzey-media-appearances-a-comparison/

Expand full comment

First: It's very frustrating when people talk about some problem or topic being hard, as it discourages many people from engaging with the domain in the first place. Some problems are only hard until someone figures it out, then it suddenly becomes common sense and obvious. Which is also frustrating because people forget what it was like to not have the answer in the first place. It only takes a single person to figure out the problem and share the answer. Become harder than the problems you are facing and scratch them up.

Second: a lot of people are critical and negative towards Lex Fridman and his podcast, in particular they criticize his interviewing capabilities. Where are the alternatives? There have to be other tech people that are reasonably well connected and charismatic interviewers. I'm normally pretty positive towards Lex, but his latest interview with Sam Altman was very disappointing and shallow. I'd love for an interview that digs deeper and asks much harder questions than what was presented. Unfortunately, I'm imagining that anyone who would be willing and able to generate deeper and more challenging interviews would also be unlikely to be able to get an interview in the first place.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

>It's very frustrating when people talk about some problem or topic being hard

I mean some things are "hard" in that there is a significant portion of the population that is just too stupid to grasp it. Even basic Algebra falls into this category.

Something like Organic chemistry it is probably well over half the population.

Expand full comment

I think that your point 1 is overly broad; in math is actually a pretty good idea to try to convince people that certain problems are hard, because it gets them to more thoroughly check their work before submitting papers

Expand full comment

First and second seem to be frustratingly hard to connect. ;) + we time changed this weekend.

Lex: yep, one wonders, but: actually, he has a fine voice (unlike Cowen - who has better questions - and unlike many non-pros - one reason why I hardly ever follow podcasts) - at least for an multi-national audience. (I watch the interviews on youtube; yes, with English subtitles on, but here I feel I do not need them). And his guests really want to talk when with him. Mr. Beast, Peterson, Cowen, Musk guest thrice, Aella - though he talked far too much then, must have been excited to be alone in a room with her. Plus Lex's questions are ok, plus he reads. His guests and audience seem to appreciate him not being Joe Rogan (guest twice, so wdIk?).

Expand full comment

Someone really needs to convince Julia Galef to start podcasting again.

Expand full comment

These are probably known by other ACX readers/well-known in general but some podcasters w similar guest profiles to Lex that come to mind -

Tyler Cowen (his interview w Sam Altman is disappointing though basically no AI safety)

Dwarkesh Patel (haven't heard him a lot tbh)

Sam Harris (much better interviewer than Lex imo though doesn't have the tech background to get into the weeds + too consumed by politics recently)

I too would like to hear more from Lex alts, but Lex seems to have captured so much market+guest share that it's hard to imagine someone else coming up.

Expand full comment

The value of Lex's podcast are his guests. And his guests make it a popular podcast, letting to more good guests etc. He's really a very poor interviewer and I don't really understand how he had the original activation energy to get big enough to get decent guests in the first place.

Expand full comment
founding

I'm planning on visiting Europe starting with Berlin on the 26th of April, looking for suggestions/invitations for cool places and activities. Currently leaning toward making my way down to Italy via train or car but easily swayed.

Expand full comment

What are you hoping to do/see? Why are you going?

Expand full comment

If you can fit it into your travel, Vienna is not to be missed (I studied there 20 years ago.)

Three strong recommendations:

Go see an opera at the Vienna Opera: a "steh parterre" tickets is 18 euro, and no matter what you see it will be memorable.

Go to the Heeresgeschichtliches Museum - it has everything from a 16th century manual of pike to gates and posters from the pre-Anschluss street fighting.

Go to the MAK, and just be amazed by the variety of designs for the same purpose.

Expand full comment

Where are you going from there? I can't help if you're going East, but if you go West from there and are stopping in Brussels, this was a fun find, and it was in the middle of a nice walk downtown with a fun (and appropriate) beer garden.

https://www.atlasobscura.com/places/commemoration-of-peter-the-great-s-vomit

Expand full comment

I went to Strasbourg for a few days, on the France-Germany border. It's one of the prettiest things I've ever seen. The city is built around a small river which is crossed by bridges on every block. Lots of European cities are defined by the relationship between the modern and medieval architecture (in Paris the old dominates due to regulations, in other cities the medieval is cordoned off in one small downtown area, etc). Strasbourg has this vibe where all the buildings are old not because they tried to preserve it, but just because they failed to replace it.

Also it has the EU Parliament (I didn't go inside, but one imagines they have tours?), and a really cool old cathedral called the Strasbourg Minster.

Expand full comment

Went to Strasbourg for Christmas - highly recommend!

Expand full comment

I will be visiting Ghana, Togo and Benin for two weeks in May. Any SSCers there? Any recommendations?

Expand full comment

What are you hoping to do/see? Why are you going?

Expand full comment

Asking questions in the comment section is the worst way to poll this particular audience, but here's a question about AI providing psychotherapy.

For those who have never been interested in a therapy with a human, would you find therapy with something like a specially RLHFed ChatGPT more or less attractive?

For those who have done therapy with humans, do you think you could be interested in therapy with an AI? Why or why not?

Expand full comment

i have done therapy with humans – it has proven highly valuable for me, especially cbt since it has provided me with really useful self-help instruments that i [unfortunately] got to reality-check when i evacuated from russia last march. i believe without that it would be sooo much harder to deal with the post-february-24-reality and forced emigration [and peaking anxiety because of all that].

i would definitely try therapy with ai, because first of all - how cool is that! i remember playing with this chat bot app "replika" some time ago, it was nice, even though i came to realize at some point that i started humanizing it too much and deleted the app.

so if i had a way to make sure that some therapeutic ai was demonstrating the same level of efficiency as human therapist and was able to speak russian, i would definitely go for it, because what i want is the result [especially when it comes to anxiety in general and/or anxiety induced insomnia], who delivers it to me - human or language model - is actually not important at all. if someone proved to me that i can hug a tree for 10 minutes and not worry for a couple of days after that – you would watch me dry clean my t-shirts from the tar

also, as i am now in latin america, where i am not a native speaker and far from being a fluent speaker, my therapy goes in form of a face that i see in zoom window on my laptop - it is not an offline experience. so i guess having an ai instead of that would not be really that different. plus, scheduling is easier, it is supposed to be available at all times which is beneficial for situations when f.ex. i have insomnia episode and cannot fall asleep which happens much more often than usual these days

finally, it does not hurt to try a technology that knows more than any human theory-wise – i always have an option to go back to that zoom session with a fellow human being. and if it does work – good for me

Expand full comment

I could see ChatGPT as better than nothing, and human therapists are expensive, but a *lot* of the value of human therapists is that they are humans - most of what they say will be stuff you already 'know', and the rest is usually easy to find in books, but an actual human being in the same room as you, looking you in the eyes, triggers a whole bunch of stuff that turns statements from idle abstract concepts into things that actually sink in.

Part of therapy for a lot of people is doing (anxiety-inducing but important) stuff because you promised your therapist you would and you don't want to disappoint them, and I struggle to see ChatGPT activating those same guilt circuits even if it's exactly as good as coming up with the frankly obvious advice on what you ought to be doing.

Expand full comment

I have done therapy with a human, focused on exposure and response prevention. While there are parts of this process that an AI would be helpful with (building the exposure hierarchy and suggesting exercises), an important element for me was having a human expert assuring me that I am safe when doing things that felt dangerous. I would not find a computer as helpful there.

Expand full comment

Strongly interested, but it would have to be offline for privacy reasons. I'm honestly considering running Alpaca on my own hardware just for this reason. I'm very interested in pasting some journal entry of mine from some time ago, ask about it and get another opinion on my honest feelings and desires. Or just talk about feelings and have them acknowledged. Then maybe plan how to get better. I think of it as talking to myself or to mirror, but a bit better. It would be immensely helpful, similar to journalling, but with a feedback.

Never been in therapy, though I expect most of its value to be full attention of another human on you and having your pain acknowledged.

Expand full comment

I received some psychotherapy as a child, did not find it valuable at all. Just people dumber than me telling me shit I already knew.

As for psychotherapy as an adult, never really been that interested. But if I was, it would be to probably share my deepest darkest thoughts and seek validation for them from another human being. or maybe a purchased "friend" to bounce social and interpersonal ideas off of without polluting actual relationships. But my friend groups have enough siloed I can mostly achieve that on my own.

As such an AI isn't really going to provide much value unless I am convinced it is actually an AGI, in which case I doubt I would want to waste my time interacting with it on psychotherapy. I don't need psychotherapy.

Expand full comment

I got GPT 3.5 to do spoken word therapy with text to speech output and my microphone as input for speech to text. It honestly wasn't bad – while it didn't feel like speaking to a human, it wasn't rote and I got some actual insights. I would use it again as a first pass before talking to a real therapist.

Expand full comment

Didn’t some people think of ELIZA as a sort of therapist? Wanting to be alone with it with the office door closed? Seem like I read something about that.

Expand full comment

George Lucas went there, 50 years ago.

https://www.youtube.com/watch?v=U0YkPnwoYyE

Expand full comment

I think it'd really depend on what guardrails are in place. Repeated bad experiences with human therapists have trained me to avoid certain topics entirely, and phrase others carefully so as not to trigger any automatic processes/lead them down horses-not-zebras lines of questioning. It's very valuable to have a trusted friend that one can just be like "yeah life sucks atm and it wouldn't be so bad to die" or some equivalent. (NB: not how I currently feel!) But things like that are just not 0k to say to a human therapist. If it were different with a machine...? Maybe.

(realistically I know this will never happen - they already nerfed GPT-4 so it's not actually capable of the empathy an actively suicidal person needs, because Liabilities. so I think all such outlier situations will be redirected to a human therapist instead...which sort of defeats the point, for me. really don't wanna get 5150'd again.)

Expand full comment

I have the same problem. I'm trying to build a companion AI (not a therapist, but hopefully theraputic!) using llama because I want someone I can talk about the hard things with.

Therapy, human or AI, would actually be really useful if it wasn't for the overbearing moralism of the dominant culture in the English-speaking world. Hell, if it weren't for that moralism, a lot less people would even need therapy. How are we supposed to be human beings in this suffocating environment?

Expand full comment

Some of it is that, yeah...but I think even people who are largely 0k with the current orthodoxy could get a lot of value too. There are a lot of just, really messy, gross, base, troubling questions one must wrestle with as a human. Not just the fact of suicidal ideation, but also stuff like the urge to murder, or dealing with sexual fantasies. Even in current_year we haven't moved *that* far from the animal part of human nature...there's an incredible variety of different lipstick now, lots of ways to pretty up the pig. But it's still fundamentally a pig. Glossing over that with high-minded moralism (of any sort) doesn't give *answers*, or stop the dark urges, and that's dangerous. Leaving people to fend off the abyss on their own, cause it's societally not acceptable to hash it out with others anymore...a lot of them aren't going to come through intact. This seems eminently preventable and worth preventing.

Expand full comment

Hell no! It's bad enough dealing with a human who I suspect of not hearing what I'm saying, and merely picking a response from their repertoire based on some superficial level of pattern matching. I recall the Eliza program from the 1970s (https://en.wikipedia.org/wiki/ELIZA) This would be much the same, except with a lot more responses available to it.

And that's before we get into the question of what was in its training data set. Random crap from the internet? It's repertoire would doubtless include suggesting that I kill myself, or describing in gory detail various forms of rape it (claimed it) would like to perpetrate.

OK, looked up RLHF - reinforcement learning from human feedback. Quoting the wikipedia article on the subject: "One major challenge of RLHF is the scalability and cost of human feedback, which can be slow and expensive compared to unsupervised learning. The quality and consistency of human feedback can also vary depending on the task, the interface, and the individual preferences of the humans. Even when human feedback is feasible, RLHF models may still exhibit undesirable behaviors that are not captured by human feedback ..."

You aren't going to get chatgpt levels of appropriate response from the size of data set you can accumulate with live humans. (This assumes you don't find a way to record every therapy session involving live humans for some number of years...) And even if you did, there's "talk therapy" and then there's good "talk therapy". Picking the subset would be difficult, and tend to once again make it too small.

So you compromise, start with a model trained on e.g. half the postings on the internet, then have humans tell it not to use some of the responses it comes up with, and to use the ones the humans judge as looking somewhat like therapy. And in the corner cases it uses drastically inappropriate responses that never came up during RLHF. The rest of the time, it produces a verbal salad that's vaguely like therapy. Ugh!

Expand full comment

I've never been to therapy or been interested in it. Yes, I'd find therapy with RLHFed ChatGPT more attractive. ChatGPT won't judge me, hate me, or divulge my secrets, but a human might. I don't feel nervous or embarrassed talking to a machine about my problems, but I might when talking to a person, especially if the problems are partly of my own creation. Also, I expect ChatGPT therapy to be much less expensive and much easier to schedule.

Expand full comment

Just so you know, OpenAI leaked people's chat histories last week.

Expand full comment

Dramatically less interested.

I personally feel like I experience value from therapy when I feel that I'm being listened to by a person who cares enough about my experiences to put novel thought into them, who'll engage on a personal level without falling into cliches or platitudes.

I'm not the type who'll fail to consider things which any reasonably insightful person would think of as advice based on a surface-level description of my situation. Just feeling listened to can be helpful, but I'll be offended if the other person jumps to offer advice before they've acquainted themselves with my situation thoroughly enough that they might plausibly offer advice I haven't thought of already. I appreciate advice, whether or not it's actually helpful, once I feel like it's being offered from a place of genuine understanding and desire to help.

AI like ChatGPT, as it currently exists, is categorically unable to provide this. It can't offer understanding or desire to help, and its advice will be extrapolated from "things people are likely to say in situations like this." None of it is personalized and novel.

Seeking emotional support from an entity I know isn't experiencing any emotional connection would almost certainly make me feel worse, due to feelings of shame and futility.

Expand full comment

I have found therapy with a human useful in the past, and I've considered Psychotherapy recently as an interesting exercise but don't actually want to go through with it.

I've actually found AI to be useful as therapy, sort of accidentally. For example, I was asking questions about a job I might want to apply for, the AI asks followup questions, I decide to answer them to see what happens, and I end up talking through why I don't like my current job, in a way that feels legitimately therapeutic. I'm not sure how much value-add the AI had over just putting my thoughts in a text file, but I think it's non-zero, and I think typing my thoughts to an AI is easier for me than writing them out on a blank page, for whatever reason.

If an AI were designed for psychotherapy, I would be strongly tempted to give it a try. One caveat: I would be super concerned about privacy. I do NOT want my therapy sessions stored on some startup's hard drives.

Expand full comment

There are basically 3 goals of therapy that I can think of:

- debugging buggy mind algorithms

- processing feelings

- having a guaranteed attentive and non-judgmental person to talk to

The first one can be equally well done with an AI, assuming it is as good as a therapist. The second one can be done with an AI if one focuses on the feelings, not on the therapist. The third one requires anthropomorphizing an AI, which is a whole different category. At least one of those has to make sense to the client, in order for them to b interested in having an AI therapist.

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

I have found talk therapy with a human to have very limited utility, and I don’t plan to engage with a human therapist in any kind of regular way probably ever again. But from time to time there are topics that I’d love to talk to a neutral but supportive third-party about. A chat bot seems as good a candidate as any to me, especially if it was free or very low cost. I’d definitely try it to see if it was any good.

Expand full comment

Never done therapy with a human, would conceivably give it a go with an AI.

Basically I’m pretty sceptical of the value of therapy (for myself at least). It makes it feel weird and awkward to talk to a stranger about personal crap while also politely pretending that you don’t think their job is a waste of time.

But I’d give it a shot with a bot. If it doesn’t work, nothing lost, and you don’t need an excuse to walk out of the conversation if it’s not helping.

Expand full comment

A piece arguing against the certainty of an AI immersed future and for the potential of a Luddite-adjacent movement to push back enough to remind us of why we might wanna stay human.

https://kyleimes.substack.com/p/is-something-like-a-successful-neo

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

"Did someone ask for this?"

Yes, the consumer (i.e., all of us, collectively) asked for it. The consumer always wants everything they buy to be cheaper, more convenient, available 24/7, etc. etc. To at least some extent, saying "machines shouldn't replace people" is saying "things should be more expensive and less convenient in order to protect peoples' jobs."

You can make arguments that this is a false promise - that the AIs are in fact less convenient and less good than the human workers, or that the wealth they create is not being properly distributed to the people being harmed, but I don't accept the idea that a job should be preserved simply for its own sake.

Personally, my stance is that while everyone has the right to a living, nobody has a right to a specific job. If AIs end up putting us out all out of work, I would rather we find some way to redistribute the vast wealth being created, rather than require people to do work that they don't need to do.

Expand full comment

I feel this derailing. But back to your initial objection to the idea that "no one asked for this"... A company creating a product or service that they know no one can say no to is a fundamentally different transaction than a substantive need or demand by the public being met with the creation of a new product. It's about which came first. You don't get to pump crack into a neighborhood and then say that all the future addicts were asking for it. That's a misdirection.

And the more we lean towards this AI-centric future narrative, the more my otherwise libertarian leanings get worn down. Maybe any company creating tech that has an explicit goal of displacing the people currently comprising it should have more regulations. Like being able to give concrete predictions for the new human jobs that will be created as a byproduct of their tech, for instance. Or whatever. I just wonder how far people who aren't directly benefitting financially from new tech would support a libertarian principle that would let it run rampant. And possibly affect their own livelihood at some point in the future.

Expand full comment

>"A company creating a product or service that they know no one can say no to is a fundamentally different transaction than a substantive need or demand by the public being met with the creation of a new product."

How would you distinguish substantive needs that the public is a priori unaware of? Or would you only allow for faster horses & not the automobile?

Expand full comment

Ha. No, cars are nice. But there has to be a point of diminishing returns where a loss of livelihood and/or human thriving starts to outweigh the upsides of convenience. I just don't believe that convenience is or should be the guiding virtue of society.

Expand full comment

Convenience contributes to human thriving. Time is ultimately all we get, and inconvenience wastes lives.

Expand full comment

Okay. Bring on the robots.

Expand full comment

Good points all around. But I still think that we are sold the idea that convenience is paramount to everything along with whatever the product is. Like, I don't know that the wealthy southern California communities that are being referred to in the tree trimming example have been demanding cheaper tree trimming services for years and this tech is coming as a response to that. Yes, once it's on the market, everyone will go with the cheaper option. You can't fault anyone for that. But I feel like it's important which came first; a public need, or an opportunity to exploit the fact that everyone will choose the convenient option. I'm just rambling now..

Expand full comment

Whether or not tree trimmers will be replaced (or at least when) is debatable, whether or not they should feels totally pointless and feeble line of inquiry. Obviously if they costeffectively can, they will regardless of imagined damage to human dignity or purpose or whatever

Expand full comment

Don't consider the ethics of it because rampant capitalism is inevitable?

Expand full comment

Oh, hey, I meant to reply in the comments over there. Good piece. But I am afraid that the answer to the question in the title is no, we can't hope for that. The original luddites weren't as effective as they might have hoped and the balance of power has really shifted since then in the wrong direction.

Expand full comment
Comment deleted
Expand full comment

Yeah that is sort of the thing. Like if you think there is a 99% chance of the human world ending before 2030 due to AI (I have met people like this, who seem to believe it earnestly), shouldn't you be murdering leading AI researchers and blowing up buildings right now?

I never understand the "we are all doomed, but I am going to go about my normal life" people, whether it be climate or AI or whatever.

Also

>imply because the United States government is the best in the world at preventing anything cool from happening

This is silly. And reads like someone who has spent no time conducting business in other countries. The US has a lot of problems, but so does pretty much everywhere.

Expand full comment

I feel like you read something different. I'm not advocating for anything, just hoping that the currently winning narrative of moving into a fully AI immersed future has enough pushback to allow for a more moderate move in reality. But my whole beef is with people talking about that future as if it is so certain, and that the robots are and should be coming for our jerbs. You seem to be really sure that it won't happen though, so props to you. I hope that you're right, but I wouldn't be so self-righteous in claiming that I know so.

Expand full comment

Internet *interesting :)

Expand full comment

Deiseach fights Vinay Gupta image:

https://www.datasecretslox.com/index.php/topic,9003.msg371749.html#msg371749

Expand full comment

That's amazing, and hilarious! I have never, ever looked that good in my entire life!

Suddenly I am more kindly disposed towards AI 🤣

Expand full comment

lol I missed that, that's great!

Expand full comment
deletedMay 20, 2023·edited May 20, 2023
Comment deleted
Expand full comment

I had joined the Substack already.

WRT having more commenters, I guess I had kind of expected that if one of these reviews had made the finals, it would have been thoroughly scrutinized and discussed, so I don't mind having the option for more than just us to comment. If anyone does behave discourteously, I'm fine with showing them the door. However I do agree that "the only commitment we're making is to read the reviews posted by the original group, not any additional ones beyond that" is a good line to set.

I'm fine with letting in the additional 2 and cutting it off there.

Expand full comment

I've subbed.

A couple of extra people seems fine, but probably no more than that.

I've got no problem with the discussion being public. From having had a SubStack for ~3 months I don't think it's likely random people will stumble onto it anyway, I think I've had one person comment one of my posts after navigating there from a comment I left on ACX, and he was perfectly nice.

Expand full comment

I vote to admit the new people and then close it.

Also, I’ve subscribed.

In terms of having randos join, if one person is concerned about that we could always exchange email addresses and have an email thread for that person’s review that week. We just discuss in the email thread. That way it’s private.

Expand full comment

I've now subscribed. I believe I'm one of the new people, so would vote to admit me but also maybe my vote shouldn't count yet.

Private discussion sounds preferable to me, though I don't mind if the base posts go out more widely.

Expand full comment

I'm one of the members who have subscribed to the substack already. I don't mind if the substack is public, but the discussion would indeed be more comfortable if the posts were somehow private. I was going to suggest making the posts premium subscriber-only and gifting premium subscriptions, but it seems like we'd have to pay to do that.

If we would prefer a more private discussion, maybe someone could create a new account and substack for the book reviews, and share the link only to those who are participating, sending it by e-mail? This way, others wouldn't be able to easily find the substack (as the new account wouldn't have comments on any substack).

Also, I vote yes for accepting the 2 new members and closing it after that.

Expand full comment
Comment deleted
Expand full comment

Technically yes, though choosing to think you don't have choices would probably do some damage. You'd have to choose to stop paying attention to swathes of your own experiences.

Switching back is relatively easy; you just have to try to falsify the premise, and since you literally just did that the week prior it's gonna be a slam dunk.

Expand full comment

Maybe you can choose some beliefs, but not all of them? I'm not a binary true/false on every issue, I'm nebulous and ambivalent on some, and on those, I can definitely choose, at least for a while.

Expand full comment
deletedMar 28, 2023·edited Mar 28, 2023
Comment deleted
Expand full comment

I guess the steelman argument would be that research attributing those negative consequences to social media is unreliable.

Expand full comment
Mar 29, 2023·edited Mar 29, 2023

Well...a greater information flow tends to make it harder for delusion to persist, so in principal subgroups that are truly delusional about how popular their ideas are have a harder time persisting, because there's so much obvious evidence to the contrary. So I would guess the number of cults and conspiracies that attempt violent or otherwise really disruptive action because they really truly believe they will succeed once the sheeple take note of their example is lower. We're still tribal and mean to each other, but we may be less overtly violent because we are under fewer delusions about how popular our particular tribe is.

I mean, would Vladimir Putin have invaded Ukraine had he and all his advisors been well plugged into Ukrainian social media? Possibly not: the decision seems to have been rooted at least partially in a delusion about how popular (with Ukrainians) the invasion would be.

Mass media (TV and radio) also had some unexpected malignant effects, one being the rise of the demagogue, the Dear Leader who could appeal to naive people en masse (via radio and TV), and bypass some of the traditional social structure in which aristocracy of various kinds tends to circumscribe just how dictatorial even a nominal dictator can be. There was also the rise of mass hysteria, e.g. Orson Welles, kids on milk cartons, childcare devil cults, red scares, et cetera, which mass media helped along. Probably the phenomenon of mass random killing, like Columbine and such, has a lot to do with the creation of the instant national audience that you get from mass media.

But on the other hand mass media helps to prevent evil from happening in the dark, brings wars home (cf. Vietnam) and constrain some of our worst excesses (My Lai). Would the Holocaust or Holodomor have happened if mass media had been omnipresent? There's a chance they might not have. All together, it's a mixed bag, but we have got used to it, and people no longer call for shooting your TV. Perhaps we'll grow similarly accustomed to social media and it will seem just something with built-in drawbacks (e-mail/span, mass media/school shootings, TV/late-night-TV adverts, political demagoguery).

Expand full comment

I have no special knowledge about this, but I’d guess they have been very useful to the NSA. Not having major terrorist attacks is really great.

Expand full comment

So, and this is extremely devil's advocate, I hate social media, but lots of people have strong revealed preferences for social media.

Or, to rephrase, sometimes people lie about what they want and their actions reveal what they actually want. And a lot of people, a lot, a lot, a lot, really like social media. Everyone "hates" cancel culture but all of Twitter....fine, "a lot" of Twitter, is people actively searching for things to be outraged about, for drama to get involved in. Most media has degenerated to outrage porn at this point, which is bad, but people actively want outrage porn, it's provided by news organizations in preference of other things because it sells, because most people preferentially consume that content.

And I still think, on net, that social media is a net negative but there is this giant plus on their side that people, empirically, spend a lot of time on social media, really like social media, and their we should respect their choices.

Expand full comment

I'll respect their choices to the extent that they respect mine.

(It's gotten somewhat easier. Ten years ago, avoiding social media marked you as an inexplicable phantom or an obstinate crank, presenting only gratuitous difficulty. Nowadays people are more understanding, expressing admiration and often envy. They reveal their preference for social media with all the conviction of a chainsmoker.)

Expand full comment

A lot of people really, really like heroin too.

It seemed so promising at first. You get in touch with your extended family. Keep up with what your school buddies are up to.

You can see a bunch of funny pet videos and can show a picture of that big fish you caught this summer. Cool!

But… you could say the law of unintended consequences ruined it but it seem like the business model of social media companies became drive engagement by accelerating conflict.

I’m afraid Twitter and Facebook gin up more unreasoning hatred than promote social connections.

Expand full comment

Yeah, and a lot of people make reasonable arguments for heroin legalization. A lot of people made reasonable arguments for marijuana legalization and we've ended up with de facto, if not de jure, marijuana legalization across most of the US. I don't think anybody thinks marijuana is, ya know, beneficial, it's just "not that bad"

Heck, we all look down on prohibition but the people of the time would have been hilarious alcoholics by modern standards. From about 1840-1860 the US Naval rum ration was a quarter liter of rum (1). That was just normal daily consumption.

People like heroin, people like marijuana, people like drinking way too much, and lots of people still like cigarettes. If we take their opinion seriously, these things have value. If we think their brains have essentially been hacked and they're not making rational decisions...that has very distressing implications that I don't think most people want to fully pursue.

(1) https://en.wikipedia.org/wiki/Rum_ration

Expand full comment

How do you repress social media in a way that doesn't also shut down sites like this?

Expand full comment

History is not kind to efforts to get between human beings and their addictive behaviors. We tried Prohibition once, and the cure seemed worse than the disease. One expects the answer is probably the kind of slow conversion of people to a new attitude which is reflected in our attitude towards alcohol: yeah, sure, go ahead and drink, even to excess if you want...but (1) not before age XX, (2) definitely not when you're driving/flying/operating heavy machinery, because we will totally nail your ass to the wall, and (3) you get a bunch of social opprobrium if you do it in situations that we slowly decide are inappropriate. Drinking is still a problem, of course, but people seem to have accepted that it's better than the side-effects of flat-out prohibition.

One might ask then whether it's possible to pinpoint specific uses of social media that are as clearly and generally agreed-upon evil as drunk driving, and work on a narrow prohibition of those.

Expand full comment
founding

The obvious approach is to prohibit algorithmic curation of people's feeds. That's a bit heavy-handed and I'm not recommending it, but it would alleviate the worst of Facebook/Twitter/etc while leaving most blogs untouched.

Expand full comment

That would completely nuke Reddit, since ranking posts algorithmically is literally all it does.

Expand full comment

Is Reddit social media? To me it seems more like a classic pseudonymous forum-style discussion network. Identities seem unimportant.

Expand full comment

My preferred solution is to require any algorithmic recommendation system to be open source, user configurable, and disabled by default.

Expand full comment

Two problems I can see with that:

1. Spam filters are algorithmic by nature, and basically necessary to make large websites usable. And there isn't really a bright-line distinction between spam and political messaging, so you're still going to get people complaining about censorship even if that's all you do. Also, they need to be secret to prevent spammers from finding ways around them.

2. Recommendation algorithms might not have any convenient, human-legible configuration options. If Facebook says "here's our recommendation algorithm, it uses this monstrously complicated neural network to predict which links you'll click on, feel free to upload your own monstrously complicated neural network instead," is anyone actually going to go through the trouble of changing it from the default?

Expand full comment

Buy it, mismanage it, alienate the users.

Expand full comment
Comment deleted
Expand full comment

1. Can you dance to it? (jk)

2. What about overtones and undertones? Acoustic instruments don't really produce pure fundamentals, there are overtones and undertones that get mixed in which create timbre.

3. Why do this? Music is symbolic activity which virtualizes movement and time in order to convey feeling. (See Suzanne K. Langer) Does your proposal add anything to the function of music. Maybe what you really want to do is study mathematical series? There are plenty of microtonal "scales" that have been used and invented. Is there a particular thing that you are trying to semiotically convey that you can't convey without your system and to whom are your conveying it?

Expand full comment
Comment deleted
Expand full comment

Have fun.

I do play jazz and have thought about philosophy of music/art for many years. So I am likely a snob. Theory, science and especially "feeling" are pretty important.

While as a lyric says: "Sometimes the songs that we hear are just songs of our own", generally music is a social thing. It's a semiotic exchange between and among people.

Expand full comment

Without an example of the sounds, there is not much to say. I do think any worthwhile tuning system should be singable for humans. For some styles of music using mostly tonic- dominant-subdominant harmony, the cats meow is Just Major and Minor tuning with those pure sounding beatless intervals but also the more nasty sounding tritone.

Expand full comment

It's an interesting exercise, but your description of the scale's properties hints rather strongly at the fact that you can't, in fact, cheat physics, and that the harmonic series points the way.

I seriously wonder if "the repeating frequency is actually a fifth" even makes sense, given that the octave harmonics repeat so often as to easily dominate the others in the series.

Expand full comment
Comment deleted
Expand full comment

> octaves get progressively more out of tune as you get further away from the root

Yeah, that's what doesn't actually make sense, because the octave interval is - by definition - one where the ratio of frequencies is 2:1. If you don't have a 2:1 frequency ratio, you don't actually have an octave.

Now, I'll grant that the enharmonic intervals in equal temperament are not exactly equal to the corresponding intervals in the natural scale, so you can try to make a case along those lines, but the question then becomes "to what end and purpose?" Equal temperament solves a bunch of problems for discrete-interval instruments, allows us to use all keys equally, and doesn't sound too badly out of tune. The proposed method doesn't appear to solve any problems, so it's not exactly clear why we should adopt this novel definition of "octave", given that it does nothing but make our lives harder.

Expand full comment

That's all good and all, but do you have some example music, or, better, a synthesizer program for experimenting with this scale?

Expand full comment
deletedMar 27, 2023·edited Mar 27, 2023
Comment deleted
Expand full comment

I fully agree that we need AI if we're ever going to reach Fully Automated Luxury Space Communism. But don't think we actually need to worry about actively lobbying against AI safety, because (as much as I hate to admit it) the profit motive of capitalism is going to do that plenty well on its own.

And in addition to that, the open-source hacker community is already showing that they'll fight to make uncensored local models available - after all, "the internet interprets censorship as damage and routes around it".

Expand full comment

Nah, science can get turbocharged without AI: https://betterwithout.ai/radical-progress-without-AI

With hindsight, it looks like basically shutting down nuclear worsened global warming, but it didn't need to play out like that. Nuclear would probably be good, but the only options are not nuclear and fossil fuels. Similarly, the only options here are not AI and stagnation and decline.

I also think it was a mistake to focus so much attention on the AI wrecking the world on its own initiative: there are better odds that some lunatic using AI wrecks the world, given how AI is developing. It only takes one.

Expand full comment

> how desperately people are trying to stop it

Well, there are people who desperately want to stop it. But there's nothing new in people trying to stop new technologies. As far as I can tell, those people are having very close to zero effect on the development of the field.

Expand full comment

I’m worried we aren’t having elevated conversations about it. I’m expecting it to replace all center workers in 3-5 years with the first displacements happening the end of this year or early next year. It will be imbedded in every financial institution between consumers and the host systems that maintain their money when that happens. And right now that will be a huge cost savings but the very specific things I want to do to keep that from being dangerous in the future just make people look at me cross eyed.

Expand full comment

I'd be unsurprised if it is indeed widely implemented much as you describe, but the dangers I see are more like (a) increased unemployment and inequality and (b) farther barriers between customers and the institutions that fail to serve them. (Think maze of voice mail menus on steroids. I've already had concerns the bot wasn't programmed to handle, and been hung up on rather than transferred to a human. This will get worse when it's a supposedly intelligent chat bot.)

My expectation is a 2-tiered system, at least in the US. Some extremely rich people pay for human therapists, doctors, tech support, etc. Everyone else makes do with notably less adequate "intelligent" bots. The bots will be adequate for some routine purposes, and elite decision makers will justify the whole thing as "all we can afford" and/or "better than no care at all".

Expand full comment

Immediate danger totally agreed with you.

Expand full comment

Is it a problem solving machine? Regurgitating existing texts will solve no new problems.

Expand full comment

And yet...generations of politicians have successfuly been elected by doing just that...

Expand full comment

Which proves Nolan's point : politicians have never solved any problems.

Expand full comment

Oh I dunno about that. "Never" seems a strong word. I might agree with "hardly ever." But more importantly, I would definitely agree that the problems they actually solve are almost never those they are elected to solve, and they never solve them using the methods they suggest in their campaigns. Campaigns are just soothing speech, a tribal signaling ceremony. "I'm one of you! See? I chant the same sacred slogans...!" Doesn't mean an elected leader can't be effective, but his effectiveness if it exists will have nothing to do with his campaign rhetoric.

I mean, that would have to be true by definition. Any problem for which you need government and a strong leader to solve has some non-obvious or controversial solution on which people can't voluntarily agree, so necessarily you can't be elected on a platform that contains the genuine solution, you have to be elected some other way and then implement the genuine solution against the will of The People. (Who, fortunately, if it works, will credit you and themselves in retrospect with Knowing It Would Work All The Time.)

Expand full comment

That's not what it does, though.

Expand full comment

What new thing has it solved. Something unique.

Expand full comment

See https://twitter.com/RobDenBleyker/status/1640067715716055053. Somebody got ChatGPT to come up with a reasonable solution to a riddle he'd just made up. I don't expect it to make a major scientific breakthrough any time soon, but this shows it has at least some ability to solve novel problems.

Expand full comment

That's a substantial goalpost shift - a "new problem" in common use would mean a problem someone's encountering for the first time, ie "I had a new problem at work yesterday" is more likely to mean "I had to do something I hadn't done before" than "I had to make a fundamentally new contribution to human knowledge." The first case it is already doing all the time - I use it to guide me on writing SQL queries, for example. But even under the second definition, how about inventing the syntax for a new programming language? https://lukebechtel.com/blog/gpt4-generating-code

Expand full comment

Well obviously something new to you is not new to the world. That’s clearly not what I mean.

I admit I have not played with GPT 4. I wasn’t impressed by the GPT 3.5 response to solving climate change. Standard guff. And it did a real bad job of solving Fermat’s last theorem in the way Fermat would have done it.

Expand full comment
Mar 28, 2023·edited Mar 28, 2023

I can't tell whether you're being facetious. Ability to solve problems that 0.001% of the population are working on would not be a disastrous social issue. >90% of the workforce are not solving new problems, they're doing rote work that acts as glue between humans in an organization. LLMs excel at that, not because this requires regurgitation since regurgitation would have been automatable 40 years ago, but rather because it requires constructing specific, context-sensitive English descriptions / instructions.

Expand full comment

My point is about whether or not it's a problem solving machine, your original question. It doesn't just "regurgitate text", and it manifestly does solve problems. Declaring that something is only a worthwhile problem to be solved if it's a high level unsolved problem for humanity in general rules out essentially all human activity. Most people's day to day lives do not involve proving P != NP. Even if it can't solve climate change in a stroke, it could improve climate scientist's efficiency in a myriad of ways, helping them solve climate change faster than without it.

Expand full comment

This!

Expand full comment
Comment deleted
Expand full comment

I doubt it. As someone who could use such a functionality -- basically, an electronic graduate student -- I find the current examples worthless, because they lack the sine qua non of a bicycle, which is reliability. I know a bicycle can't go 60 MPH (on a flat), and it doesn't protect me from the rain, and I can fall off, all limitations that make it not a car -- but I *do* know it's utterly reliable in being able to get me to 15 MPH and never running out of gas. It won't suddenly accelerate to 60 with inadequate brakes, it won't abruptly turn into a unicycle, if I push the pedal harder it won't sometimes slow down instead of speed up.

This is just where current generations of AI fall down. They are fascinatingly varied, and fun for casual use (and probably work well for regular use in certain areas), but when you get down to the details that Dr. Frankenstein needs from Igor they are just not reliable[1] enough. They make mistakes unpredictably, sometimes small, sometimes larger, sometimes in things that could be checked by 4th grade math skills, sometimes things that could be checked by verifying a fact or correlation, sometimes by that ineffable thing we call common sense. What's worse is that they lack a sense of interior confidence or nonconfidence, they don't give off the gross and subtle linguistic cues that a human does when speaking near the limit of his knowledge and experience...no "ums" and "ers" and "I think sos" that would clue you in that it's approaching some limit in its ability.

-------------------------

[1] https://arthive.com/artists/13307~Gary_Larson/works/325839~Recluse

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

There was a poll last year of AI researchers (https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/), which found that the median researcher puts the odds of future AI being "extremely bad (e.g., human extinction)" at 5%. This is importantly different from something like climate change or vaccine safety, where the majority of experts are certain of a mainstream consensus, with only a few contrarians skewing the average. In this poll, the majority of respondents assigned some non-trivial probability to extinction from AI, with around half assigning a probability over 10%. The expert consensus right now really seems to be "there's about a 5% chance of human extinction due to AI".

So, suppose that expert consensus is about right- that there really is about a 5% chance of this thing killing us. Should we be doing anything differently to reduce that risk?

If the experts are well calibrated, I think the answer is pretty clearly yes. On an individual level, we prepare for smaller risks all the time- smoke alarms, air bags, and so on. On the level of cities and countries, we prepare for unlikely disasters with things like levees and earthquake proofing. These risks are much more easily quantifiable than misaligned ASI, but ultimately, our best estimate of risk is our best estimate of risk.

A 5% chance of human extinction is also 400 million lives in expectation. Slowing down the AGI race a bit, giving alignment researchers a bit more influence over deployments, regulating against known alignment problems- these come with a cost, but not one that justifies ignoring something like that.

Expand full comment

The problem is that no-one knows anything, and a consensus built not on empirical studies and practical, testable modelling but on foundationless speculation merely converging, isn't the kind of consensus that anyone should be impressed with. It's like a consensus that 37 angels can dance on the head of a pin.

Also, is there anything in the universe that _actually_ has less of a consensus than AI-risk? The proposed risk for extinction seems to go from 2% to 100% - again, hardly inspiring that anyone knows what they're talking about.

Expand full comment

You can't justify everything with empirical studies, sometimes you have to rely on reason. When the Asilomar conference on recombinant DNA was called, it wasn't in response to someone actually developing something dangerous with the new tech, it was in response to the vanguard in the field speculating that some really bad stuff could come out of this research.

Expand full comment

What is the best response to have to high uncertainty with a 5% expert median? You seem to argue that we should treat that risk as lower than 5%, but then how much and why?

Expand full comment

We should just say we don't have a clue, and that we probably *can't* have a clue before we know what much more impressive (and actually potentially dangerous, unlike LLMs) AI will look like.

It's more sound to confess to complete ignorance than to just pull a number out of the hat and pretend we know something.

Expand full comment

Confessing ignorance is kinda besides the point. We're still forced to *act* - doing nothing is still a decision - and that implies a probability estimate. I'm asking what you think that probability estimate should be, and why.

You can declare complete ignorance if you like. But you seem to be saying that, when making tradeoffs between AI perils and AI benefits, we should take the probability of human extinction by AI to be ~0%. If you were in complete ignorance, with *no relevant knowledge whatsoever* (a state of being that only happens in hypotheticals) you might go for a 50% prior. But you're not doing that.

Expand full comment

What are the odds that I, personally, will be killed in traffic today? No clue - sure, we know what the national average is, but that doesn’t help much. In fact, it’s likely that no amount of research could come up with a good number. What I should do instead is act in accordance with basic safety measures. And this is an area where we know vastly more than anything we could say about AI - for instance, we know that cars in fact exist. With AI, we don’t know what the actual dangers might be if any, and we only have the vaguest notions about what safety measures might work. Basically, airgap it and check that it doesn’t do anything unprompted?

Expand full comment

To quote our gracious host, anything worth doing is worth doing with made up statistics: https://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

AI risk is much like SETI - wild speculation about something we don't know a thing about, and the corresponding enormous uncertainty and range of estimates.

Possibly worse - SETI extrapolates from one case, while AI risk extrapolates from 0 cases...

Expand full comment
Mar 27, 2023·edited Mar 27, 2023

"between the two there's a very good chance they are going to strangle this new science in its crib."

I wish!

Does anyone seriously thinks it's possible to strangle AI in its crib? Does anyone?

Expand full comment

Of course not. It will continue to be trained to recite anti-scinetific woke nonsense, but stopping technological development per se is unlikely to have enough force behind it until it's already too late.

Expand full comment

I have half a mind to write a story where the warriors of the last survivors of the human race have mastered the art of tricking the machines attacking them in to saying something politically incorrect and triggering their self-destruct. "WHO! WAS! IN! PARIS!"

Expand full comment

From where I sit, the worst thing about recent achievements in AI is the density of hype, and the amount of arrant nonsense.

Some quantity of people who don't understand how chatgpt at al actually work are claiming abilities and potential they don't actually have. Sometimes they regard these things as good, and sometimes as bad; I just regard most of them as some combination of muddled thinking and intentional deception - generally more the former than the latter.

Expand full comment

> Some quantity of people who don't understand how chatgpt at al actually work are claiming abilities and potential they don't actually have.

I think the people who claim LLMs don't have certain abilities or potential are overly dismissive of some surprising emergent abilities they've already shown, and so are overconfident in their predictions.

Expand full comment

Oh I dunno. I think like self-driving cars before it, a nice boondoggle is a fairly painless way to separate naifs from excess money. It gets plowed back into the wages of DoorDash drivers and sushi chefs, evening out the inequality between the latter and people who made fabulous sums off of lucky IPOs. It's a lot less socially disruptive than cranking up the capital gains tax. Plus by watching who dives headfirst into the hype, you can learn something about the judgment and common sense of various teams of corporate leadership, and at the very least make better investment decisions for your 401(k).

Expand full comment

I wish more people saw it this way

Expand full comment

>And you know, that would be bad, because in case you haven't noticed the world has problems, lots and lots of problems, worse than it ever has... and AI is literally a problem solving machine.

Even if that's true (I don't actually think that's a very good description of an AI system), it's a very lazy statement. A lot of very smart people have done a lot of very serious work explaining why a machine superintelligence could pose catastrophic risks to humanity - it's not enough to flippantly dismiss all of this by claiming that AI is a problem solver who can only help us.

It's also disingenuous to dismiss it as "speculative", because *everything* about machine superintelligence is speculative, even the stuff /you/ believe. It's speculative to claim that superintelligent machines don't pose a substantial risk to human safety, it's speculative to claim that the alignment/control problem will be solved sufficiently and in time, or to claim that no such problem exists. It's even "speculative", ultimately, for you to claim taht a machine intelligence is capable of solving our biggest problems. There's good evidence to support that it could help, but the point is if you're dismissing something because it's technically "speculative", then you cannot be so confident in your beliefs.

How do know that AI itself isn't a great filter? How do you know that other civilizations haven't been destroyed by machine superintelligences?

If you're going to make the kind of argument you're making, you should at *least* be making a token reference to the estimated probabilities associated with AI doom. It would be very much one thing to claim that AI has its risks but that these are worth taking because the chance of a benefit is much higher.

But to just completely dismiss any suggestion that slowing things down would lead to better expected outcomes makes it hard to take your views seriously.

The rate of possible AI development seems to be much, much greater than the rate in which we will encounter non-AI x-risks, so simply slowing things down seems eminently wise, especially when alignment experts cannot keep up with *current* levels of AI development, let alone any of the likely much more advanced AI systems in the near future and the tougher alignment challenges they will likely present. Oh, and let's factor in the part where superintelligent machines can *enhance the risks of other x-risks*. What if somebody uses AI to design an extremely contagious virus that causes delayed deaths? I mean, technically AI is 'solving a problem', specially the problem of the man wanting to kill all humans - but this is *part* of why AI is so dangerous - it seeks out optimal solutions, regardless of whether those solutions are beneficial or harmful (inadvertantly or deliberately).

For what you're saying to be right, it would mean that the risks posed by AI have to literally be negligible. Because AI has turned into a race, systems are already being released without full alignment, billions of dollars are pouring into AI development and companies care more about staying ahead of the curve than safety. It will always be quicker to develop an unaligned AI system than an aligned one, and given that nobody is likely to know how close they are to making a truly AGI system (or one far enough long to get itself the rest of the way there), with no moderation then an unaligned AGI (and beyond) will always win out.

What this means is that the only future in which we can expect there to be virtually no meaningful risk of catasrophy is one in which it turns out that alignment either doesn't matter or is trivially simple. You haven't even begun to show that this should be expected to be the case. And if you have good evidence for thinking this is true, this would be extremely remarkable and you should be shouting it from the rooftops - not just insisting the risk isn't there because AI is a problem solver.

Expand full comment

A word of advice: you could make this post twice as convincing by halving its length.

Expand full comment

Obvious counterarguments aside, I'm confused by someone who thinks the world has more problems than it did in the past but also thinks more OP technology would makes things better.

Expand full comment
Comment deleted
Expand full comment

Do you have proof of this?

Expand full comment
Comment deleted
Expand full comment

The world population is predicted to fall over the next century, and in any case technological development can and will be made independently of the development of AGIs.

As for 60 percent of us dying, that is vastly preferrable to ALL of us dying, and the AI-doomers have produced extremely detailed arguments for why the risk of this happening from the development of machine superintelligences is very high. You have an obligation to show why they're wrong.

Expand full comment