
It seems AI safety is becoming a culture war of its own:

https://www.jonstokes.com/p/ai-safety-a-technical-and-ethnographic

I remember this was previously considered a bad outcome, so I wonder if there are many people pulling their hair out right now.


I know I'm way behind, but I just read Meditations on Moloch and thought it was excellent. Has the "Moloch vs Elua" discourse collapsed down into AI alignment, or are other threads still running? I'm particularly interested in finding people working on human coordination at any and all levels.


I'm commenting because I'm interested in the replies!


Found better discussion about this in the Discord.


I've noticed that several creators (writers, YouTubers) have started getting accusations that they used AI to write or even voice their work. I think if Scott starts getting these insults, he should issue a ban.


I can understand why a ban might be justified if the accusations became repeated and incessant, since that's essentially spam, and spam is always OK to ban in my book. But why ban a single accusation? 10? 20?

It's an interesting discussion, and I see no offense to Scott (he was a popular writer way before LLMs became this good). It's also one where the probability is very tilted one way: LLMs are almost instantly identifiable by the boring, fake, politics-ese dialect of English, full of caveats and empty of meaning. If Scott can tell an offensive joke, he's almost certainly not an LLM; if Scott can count the number of words in a sentence, or reliably add two 4-digit numbers, he's almost certainly not an LLM. So I'm very interested in what arguments the people who might accuse him of that will bring up.


I made an edit to the post you replied to in order to make it clear that these accusations are insults. The accusation is "this post is so bad, I think an AI wrote it".


Who?

I've never seen any text written by an AI that displayed true creativity.


I saw this on Bryan Caplan's recent post. YouTuber TierZoo tweeted about this earlier this week.

> I've never seen any text written by an AI that displayed true creativity.

That's the point, right? You don't insult someone by saying their work is good...


So this surprised the heck out of me: https://ustr.gov/issue-areas/economy-trade

"According to the Peterson Institute for International Economics, American real incomes are 9% higher than they would otherwise have been as a result of trade liberalizing efforts since the Second World War. In terms of the U.S. economy in 2013, that 9% represents $1.5 trillion in additional American income."

I'm stunned that number is only 9%. For as much push as I've seen (and made myself) for "free trade is good," "rising tide lifts all boats," etc, etc over my lifetime, I was really surprised the number was this low.

I mean, it's one thing to say to an unemployed Ohio factory worker "yes, you're facing personal hardship, but free trade makes us all better off," but it's quite another to say "yes, you're facing personal hardship, but free trade makes us all 9% better off."


According to https://www.macrotrends.net/countries/USA/united-states/trade-gdp-ratio, US trade to GDP ratio was 10.76% in 1970, peaked at 30.84% in 2011, and was 25.84% in 2021. I couldn't find earlier years (nor could Bing Chat).

In the Peterson Institute's protectionist counterfactual, what would the most recent trade to GDP ratio be?


https://www.bbc.co.uk/programmes/w3ct59qb

Some interesting technology: consensus-building online for Taiwan, budget transparency for Nigeria, ease of access for Estonia (everyone gets a deal-with-the-government number, eliminating bureaucratic friction saves 5 work-days per year per person), developing a virtual Tuvalu since the islands are likely to be under water.

I'm especially interested in Taiwan's approach, so here's a link.

https://www.theguardian.com/world/2020/sep/27/taiwan-civic-hackers-polis-consensus-social-media-platform

The idea is that democracy isn't 51% getting to lord it over 49%, it's better to look for consensus, and well-designed computer programs can help people find consensus.


_What Moves the Dead_ by T. Kingfisher (Ursula Vernon) is a fine horror novel based on "The Fall of the House of Usher" -- very horrifying, very funny in spots, and of interest to rationalists because it's got some interesting speculation about a scary and probably not adequately aligned high intelligence.

Comment deleted (Mar 30, 2023)

*Digger* was spectacular.


FLI published an open letter asking all labs to pause training of all systems more powerful than GPT-4 for 6 months: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

I think it looks a bit like a big-tent thing, with the letter appealing both to people concerned about x-risk and to people concerned about short-term social disruption. The letter doesn't operationalize "more powerful than GPT-4", which seems like a pretty important issue, especially given that they call for government intervention in the event that the labs don't voluntarily comply.

Interesting signatories (IMO):

- Yoshua Bengio: co-winner of the Turing Award for NN research

- Gary Marcus: b/c it seems to conflict with his generally deflationary attitude towards the capability of these systems

Here's an interesting point I heard Sam Altman make (I'm sure it's been said elsewhere first): Isn't short timeline + slow takeoff preferable to long timeline + fast takeoff? Imagine we stopped working on AI for 10 years but electricity and compute kept getting cheaper. In 10 years we'd be taking off waaaaaay faster than right now.


Here are some manifold markets on it:

- https://manifold.markets/au_/will-flis-pause-giant-ai-experiment

- https://manifold.markets/MatthewBarnett/will-any-top-ai-lab-commit-to-a-mor

The first is at 5% as of now, but doesn't seem to have a concrete resolution criterion.

The question of whether _any_ major lab will commit to a moratorium is at 33%, and I think that makes sense because there are plenty of labs 6 months behind...


When you are doing a school exam and you are "thinking", what is that like?

I was one of those weirdos who when given a 3 hour exam would finish in 45 minutes. I had friends that got pretty similar grades who took most of the time given. When I asked why they needed that much time/what were they doing, they always just said "thinking, of course!". I never really pushed for more info, because I didn't want to come across as "that guy". But it's been a decade since I graduated and I am still curious about this.

For me: I have an inner-monologue that is always monologuing. In a test, it wouldn't be doing much that was useful towards answering questions. Usually it would be monologuing about how uncomfortable the chair was, the temperature of the room, or keeping track of how much time had passed. The most useful thing it could do was remind me to double-check for the mistakes I tended to make. When it came to answering the exam questions, it seemed that I would somehow "just know" what the next step I needed to perform was. When I was done writing that down on the paper, I would "just know" the next step. And so on. This meant that I could basically sit down and write as fast as I could until I was done.

What was it like for you? What was going on in your head when doing an exam? If you also have an inner-monologue, did it help with problem solving?


I tended to be one of the last ones finished on exams and spent the entire time thinking pretty hard. I have a below average memory for arbitrary facts, so a good portion of time would be taken up trying to remember some detail. This consisted of turning the problem over in my head until some memory was jogged. For more conceptual subjects, I would spend a lot of time deriving equations or genuinely "solving" problems that other people would have just remembered the steps mechanically. I have a pretty good recall of concepts and relationships so re-deriving some result typically worked out well enough.


A couple of examples:

(1) On a history or other "remember the fact" exam, trying to recall a name or date on the tip of my tongue. Basically the same experience lots of humans have all the time with things like "what was the name of that movie again, the one with Danny DeVito and Arnold Schwarzenegger where Arnold gets pregnant?" or "what was the name of that restaurant in NYC that we really liked? Had the big jade dragon hanging over the door and really good lo mein?" it's just that, only I'm trying to remember which of 2 presidents came first, or which character in the novel did the thing.

(2) On a math or hard sciences exam, when I run into something I definitely *didn't* study, attempting, analytically, to work the answer out myself. Even if you weren't paying attention in geometry class when they went over how to calculate the area of a rhombus or something, every now and then if you worked at it you could figure it out on the fly during the test.


1) If I happened to be done, I'd go through the whole thing trying to think of ways why my answers might be wrong.

2) If I couldn't remember something like a name, I'd try to trigger my memory by, for example, internally reciting some plausible possibilities until something sounded vaguely familiar.

3) If I were lost in a math problem, I'd try various different approaches until something worked.


School exams are very biased towards 'regurgitate words we told you'. Rishika offered a very good explanation of what 'thinking' looks like in terms of 'prompting memory', but outside the school exam context I think the simplest example would be logic puzzles - even if every 'next step' is relatively obvious, it usually takes quite a few of them to get to the actual answer, and you need to store your progress either via a good memory or via writing down the intermediate deductions. A lot of real-life thinking is of this form.


Tests for me were nearly always the same one-shot deals; if I didn't know the answer the second I looked at the question, it's because I didn't know it at all and no amount of trying to think would change that.

The one exception was the math section of the college entry exam, about four months after the last time I'd done any math; I could distinctly remember having learned how to solve the problems, but couldn't remember the actual process. I ended up guessing on the first seven or so questions, and then the memories clicked back on and I could actually solve stuff again.


I think this is a great question. It seems like you performed pretty much ideally - you remembered what you knew, and (I assume) if you didn't remember it was because you genuinely didn't know something.

I think when I'm thinking during a test, it's usually because my memory is somewhere between these two extremes - e.g. I can kinda recall learning about a particular topic, but the details aren't coming to my head, so I try to bring up relevant cues (I was in that chair, the prof was talking about this thing...) until I remember the specific information that's relevant to the question.

Or else, sometimes I've forgotten a rule or theorem or something, and my 'thinking' is basically re-deriving the theorem or trying to figure out the rule from what I do know.


Anyone who watches international news will be aware of the nationwide riots currently happening in France, especially Paris, over President Macron's initiative to raise the pension age from 62 to 64.

Looking at video footage, it appears the vast majority of rioters are young people, as one would expect. But for most normal people under the age of 30, pensions are about as relevant as the far side of the moon. I know they were for me at that age. So what possible grievance have they with the policy when it won't affect most of them for decades?

It isn't as if it will raise levels of tax, quite the opposite (in intent anyway). So unless the rioters feel old timers retained in the jobs market for longer will compete with younger people for jobs, I presume it isn't really about pensions at all but just an excuse to let off steam and hone their barricade building and petrol bomb throwing skills!


Disclaimer: I'm very much in the "we must shrink the giant pensions of the boomers" camp, to the point that I'm against mandatory redistributive pensions in general (and would rather have the freedom to use the 20% of my wage that is taken from me than hope it'll come back in 30-40 years).

>But for most normal people under the age of 30, pensions are about as relevant as the far side of the moon.

First, they're also the ones whose future earnings are most affected by a reform. Then, pension reforms are a recurring event in French politics (because a redistributive scheme is fragile at its core), so the issue stays somewhat at the top of the collective consciousness. Finally, to a degree, they aren't protesting against the reform specifically, but against the government in general.

>It isn't as if it will raise levels of tax, quite the opposite (in intent anyway).

It will make them get taxed for longer (and won't reduce the level of taxes, since it won't reduce the pensions). When you live in one of the few countries where retirees have a higher income than the active population, that doesn't really go over well.

Searching protest pictures for clues about the demographics opposed to the reform isn't really necessary; recent polls show ~90% rejection among the active population. The ones you saw may have been young, but nearly nobody who isn't a pensioner is in favor.


I can't speak for French youth, and this is in no way a policy proposal, but as a late-30s guy in the US:

When it comes to our system here, I'm all for fixing it, but when people insist that benefit cuts are just totally off the table and all that fixing needs to happen on the backs of the young (whether or not I am included in "the young"), I can't help a feeling of resentment bubbling up.

I mean, my (old) dudes, this hasn’t been a secret. It’s been a known fact for the last 30 years that the country is aging, that there are more retirees and fewer workers, etc, etc, and the program consequently doesn’t have the money to pay the promised benefits. This was all the subject of presidential debates and campaigns going back to *at least* the 1990s, and that’s just to my own personal recollection. It's not news and nobody is surprised.

And for that whole time, at no point in *your* old-dude working years did you feel any need to pay more taxes, or raise retirement age requirements for your own age cohort, or take any steps that might cost *you* something but would help ensure that the system would be solvent when you retired. You just held the course steady, promising yourself benefits that you knew you weren’t doing enough to fund.

So now the system is broke, and our boomer friends' response, having driven the bus into a ditch, is to sit in it and complain that "the young" should make whatever sacrifices are needed to dig it out for them, but only if the old can be absolutely assured that checks will continue to arrive timely and none of that digging will come at any cost to them personally.

Like I said, this is pure feels and not a functional policy prescription, but given what I’ve seen in my lifetime, I don’t think the petrol bomb throwers in Paris are throwing them “just to work off some steam."


That's unreasonable. The Social Security reform passed in 1983 made massive changes. It raised the retirement age on exactly that Boomer generation, then in their 20s to 40s, from 65 to 67, with significant cuts to benefits if you retire before 67 and extended benefits if you wait until 70, and raised SS taxes significantly, as well as broadening the number of people who had to pay them.

So absolutely that generation has paid significantly more in taxes during their lifetime, and endured extra working years in the present, in order to address the needs of SS with considerable foresight -- arguably more foresight than any younger generation has shown in the past 25 years. In fact, you'll note the changes were sufficiently sweeping that they were projected to keep the program solvent, and *have* kept it solvent, for half a century, which is rather a miraculous success for legislation in a democracy, which is normally far more short-sighted. (You'll also note the 1983 reform was bipartisan, with Ronald Reagan signing off on a *tax increase* and Tip O'Neill signing off on a *benefit reduction* to a core Democratic constituency. I'd like to see the politicians produced by the current generation of voters come anywhere near that kind of level-headed compromise.)

For decades the SS "Trust Fund" built up a massive surplus, meaning the Boomers were forking out *far more* in taxes than was needed to pay for the then-current crop of retirees. But of course government did not usefully invest that surplus; it just used it to hide the deficit, and got people used to the government being able to run a much wider deficit than sanity would otherwise suggest -- but that's the fault of leadership in the more recent past, which has certainly been voted in by generations younger than the Boom.

I would guess that the cohort now retiring (which would have been ~27 in 1983) figures that it's time for the cohort now ~27 to make a similar choice that will keep the system running for another 50 years. Of course, it's harder now, because people sort of neglected to have children for the past 30 years or so (and fertility really fell off a cliff in ~2008), so the future looks like it will have a lot fewer workers than retirees.

But that's hardly the older generations' fault. Indeed, if you look at American fertility over the past 100 years, there's a significant "Baby Boom Echo" in the late 80s and early 90s in which that generation bumped up the fertility rate over 2.0, so they mostly did produce their replacements. It's the generations since then that have decided that 0.9 children per person is Not A Problem. It doesn't seem deeply unfair that these same cohorts, now approaching or in middle age, should have to grapple with the consequences of their collective decision to have fewer children than were needed to replace themselves in the workforce.


*Shrug*

That's fair, the 1983 reforms didn't do nothing, but it's not like they made the program solvent. It's been known since the 90s that these issues were coming.

Here's just one example from the 1992 presidential debates, which is the earliest memory I have from when my parents (old blood conservative voters who made - and regretted - a switch to vote Perot that year) were aware of and active on the issue.

https://www.debates.org/voter-education/debate-transcripts/october-15-1992-second-half-debate-transcript/

AUDIENCE QUESTION: "Yes, I do. My name is Ben Smith. I work in the financial field, counseling retirees, and I’m personally concerned about three major areas."

"One is the Social Security Administration or trust fund is projected to be insolvent by the year 2036. And we funded the trust fund with IOUs in the form of Treasury bonds. The Pension Guarantee Fund, which backs up our private retirement plans for retirees, is projected to be bankrupt by the year 2026, not to mention the cutbacks by private companies. And Medicare is projected to be bankrupt maybe as soon as 1997."

"And I would like from each of you a specific response as to what you intend to do for retirees relative to these issues, not generalities but specifics because I think they’re very disturbing issues."

[candidate replies available in the link, if you're curious, so that this reply doesn't balloon]

So kudos to the boomers for doing something, I guess, but it's been clear for 30 years that it wasn't enough. So although I can see, as a practical matter of policy and politics, why our leaders are reluctant to push any part of the costs of fixing Social Security onto the current crop of retirees, acting like we're doing that out of *fairness* is just a bridge too far for me emotionally.


Well, then your beef is with Gen X and whoever comes next, the people who have come to political maturity since 2000. Doesn't that include yourself and your friends? Physician, heal thyself. When y'all have done something as consequential as the 1983 reform, you'll be a bit more entitled to criticize those who did.


Did the baby boom generation just suddenly stop voting in 1990 or something? Call me crazy but I don't seem to remember that happening.

"In 1983 we fixed this program so that it was good through 2035." Cool, then they've earned all the benefits their reforms got them... right up through 2035, at which point, as everybody has known for decades, those reforms only buy you 75% benefits. After that, you get the program you pay for, they're in the same broken boat with the rest of us, and it's perfectly fair play for the youth to simply to keep the program running just the way their elders made it... complete with 25% haircut. The oldest cohort gets 25% less benefits now, and when everyone else's turn comes, they too will receive reduced benefits. That's the status quo - the pain everyone has seen coming for 30+ years, spread evenly across all age cohorts.

Now, if our elders don't *want* that pain to be evenly spread, and would like to disrupt the status quo so that their kids and grandkids shoulder all of the load *for them*, cool. Most of the kids are even happy to do it, rather than watch poor old people lose their homes and eat cat food. But let's not pretend that act of compassion is something the "Me Generation" is just magically entitled to. I know convincing themselves they're entitled to something is a favorite pastime of theirs, but you get what you pay for, and what they bought, for every year from 1983 to present, was a program good through 2035, so that's really all they, Gen X, Millennials, Gen Z and beyond can plausibly claim to be entitled to. Fixing the car through 2035 doesn't magically entitle you to a working vehicle in 2036.

Update:

(Also, as an aside, I just checked the numbers, and in 1983 the average ages of the House and Senate were 49 and 54, respectively. Which means that the "people who did the '83 reforms" weren't baby boomers; they were the Silent Generation, passing unnoticed once again. So I guess my original point that the boomers knew about this problem and did nothing still stands.)


You've shifted your goalposts sufficiently that I'm now persuaded you are not an honest disputant. You started off saying "the Boomers did this! They knew the problem was coming and did shit!" I pointed out that people in the exact age group about whom you are complaining -- those now at or near retirement age -- had actually done as you requested, supported significant reforms at significant personal cost, in terms of money shelled out in their working lives and a reduction in their own benefit below what their elders had received -- and your response is... but they didn't do enough! The problem has now returned! They didn't anticipate the failure of my generation to reproduce sufficiently!


The best way to make SS solvent would be to give every baby an account with $1,000 in it, invested in an S&P index fund that grows tax-free for 65 years. Then at the back end, tax the accounts in such a way that people with max benefits get taxed 100%, and an individual wouldn't get taxed if it brought their benefit up to $1,700, which is the median benefit. The lump sum at death would also be taxed progressively. So that would only cost $4 billion a year, and it would bring in revenue in 65 years and reduce poverty and welfare costs in 65 years.
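The compounding here roughly checks out. A back-of-envelope sketch in Python, where the return and birth-rate figures are my assumptions rather than the commenter's:

```python
# Assumes ~7% real annual return (roughly the S&P 500's long-run average,
# not a guarantee) and ~3.7 million US births per year.
principal, real_return, years = 1_000, 0.07, 65
births_per_year = 3_700_000

balance = principal * (1 + real_return) ** years
print(f"balance at 65: ${balance:,.0f}")  # ~$81,000 per person in today's dollars
print(f"annual cost: ${births_per_year * principal / 1e9:.1f}B")  # ~$3.7B, near the $4B cited
```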


Sounds good, apart from all that tax stuff! But seriously, I think you may be underestimating the ravenous greed and desperation of governments for revenue, and 65 years is an awful long time to trust them not to dip into this tempting pot, as the British Labour government did with pensions when Gordon ("No more boom and bust") Brown was running the show!


It's essentially a way to privatize SS on the cheap. Anyone with maxed-out SS isn't going to care about another $1,000/month from SS, because they will have other savings. Plus, the tax revenue will also make Medicare solvent, as premiums can be increased and Medicaid expenses reduced; wealthy Americans live longer and thus receive a bigger benefit from Medicare than poorer Americans.


The self-interested voter hypothesis doesn't tend to explain much political behavior, either electoral or rioting. They're rioting for the "country," "justice," "something bigger than myself," blah blah blah.

Also, in many European countries there is a strong sense of left-wing working-class identity among native Europeans that doesn't really exist anymore in America. The pensions reform will harm people in the working class, so if you're in the working class you feel you've gotta go do something about it.


I suppose there's also the issue that those in menial jobs will have started employment years before, say, a graduate. So a fixed retirement age is unfair anyway, because it means they have had to work longer, and extending that time is adding insult to injury.


This. Voting is about signalling tribal loyalty, not securing financial advantage. Even more so with protesting. Any particular young Frenchman will get exactly the same pension at exactly the same time whether they support or oppose Macron. But there's real value in being part of the "we don't want to spend our lives as Wage Slaves to the Elite" tribe.


Yes, and they view Macron as a neoliberal twat who only ever serves the upper class.


"unless the rioters feel old timers retained in the jobs market for longer will compete with younger people for jobs" -- why do you say "unless"? This is what young protesters are telling news reporters when asked, and it is rational from their perspective. Particularly if they believe, as again they are saying, that this 2-year hike in the pension age would not be the last. (That second part explains why such a huge public reaction against just a 2-year hike.)


If you have read Unsong, especially the Broadcast, and have felt that that changed the way you relate to the gargantuan amount of evil and suffering in the world, how has that done so? What are you doing differently in your life?

I think the book gives us four responses to the problem (with two halves to each response – the Blakean parallel thing): 1) the Comet King/Robin West: "somebody has to and no one else will", 2) Aaron Smith-Teller/Ana Thurmond: intellectual revolution or "Marx didn't hand out flyers either", 3) Dylan Alvarez/Erica Lowry: go berserk consequentialist, and 4) Uriel/Sohu: disaster relief or attending to the broken infrastructure of the world. Sarah and THARMAS don't count.

Which do you think is the best response, and which do you think Scott was advocating for in Unsong? Are Comet King-style plans a good idea?


I really want to work on reducing suffering. What do you think is the best way to do so? Is EA a good community to work on this in? Does anyone have advice for starting on big projects? A reading list?


EA is the way to go, as they're intensely concerned with suffering. There are EA virtual courses to learn the basics (https://www.effectivealtruism.org/virtual-programs), or you can do the reading yourself (https://forum.effectivealtruism.org/handbook).

While career is important, you should also take the plunge and start giving 10% of your income to effective charities (https://www.givingwhatwecan.org/pledge). Scott does it after all. The pledge comes with a calculator for how many lives you will save over your career by doing it.
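For a rough sense of the arithmetic behind such a calculator, here is a toy sketch; all three inputs are illustrative assumptions of mine, not GWWC's own figures:

```python
# Assumed: $70k salary, a 40-year career, and ~$5,000 to save one life
# via a top GiveWell-recommended charity.
salary, career_years, cost_per_life = 70_000, 40, 5_000
total_donated = 0.10 * salary * career_years  # the 10% pledge
print(total_donated / cost_per_life)  # ~56 lives over a career
```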


Well, the thing about the Broadcast is that its impact depends on Hell actually existing. Unsong does a great job of showing how much evil is in the world, and provides amazing examples of moral exemplars. But I'm not shaken to my core, because there is a lot more good in this world than in the hypothetical Hell of the Broadcast.

The Comet King is of course the person I'd like to be like; however, I rarely find myself with such courage.


Hmmm. Our world has a lot of suffering as it is. I think Hell exists, but it’s emergent across the collected suffering of everything alive. Factory farms. Cancer. Malaria. Slavery. And then the kind of duller pain found in the tedium of life. While there is a lot of good in our world, I don’t think it meaningfully outweighs the suffering – or that they can be compared. We don’t have an equivalent to the Broadcast, but I think if you were somehow able to meaningfully conceptualize all the suffering, you would be as shaken to your core as had you seen Unsong’s Hell.


Has anyone found a reliable way of distinguishing human-written text from text written by GPT-3.5/4?

I used to find that AI-written text was generally easy to distinguish because AIs couldn't stay on topic for extended periods, but that's clearly not the case anymore. AIs can still be induced to make basic reasoning errors that would be unlikely for a human, but it takes some work to get them to do that nowadays; it's not just something they do by default.


Perhaps there might be some robust way GPT could "watermark" longer text screeds, with subtle word combinations that would be unnoticeable to human readers but would enable a check (via GPT) with certainty or high probability that it wrote the text.

A certain combination of trailing spaces on text lines might be one approach, but that would be useless in browser displays where the text could "flow", with lines being rearranged. So it would probably need to involve word or punctuation or sentence length choices.


I believe an OpenAI employee gave a talk in Austin in the fall and I read the transcript online -- and one of the ideas was similar to what you said:

When the AI is choosing the next word of a sentence, there may be several possibilities that have approximately equal "scores" for what word should go next. In theory, the AI could apply a non-random rule to making that choice, which would act as an invisible fingerprint that could be detected later in strings as short as 4 words.

Once I read that, I made sure to caution anyone I know IRL using ChatGPT to assume that a tool could be released in the future which would identify generated text. Don't use ChatGPT to help write the "bones" of an essay or article or email if you don't want that to be discovered later.
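To make the idea concrete, here is a minimal sketch of one way such a fingerprint could work. This is my own toy illustration, not OpenAI's actual scheme: among next-word candidates the model scores as roughly equal, the generator always picks the one with the highest keyed hash, and a detector holding the same key checks how often a text obeys that rule.

```python
import hashlib

KEY = b"demo-secret"  # shared between generator and detector

def keyed_score(prev_word: str, candidate: str) -> int:
    h = hashlib.sha256(KEY + prev_word.encode() + b"|" + candidate.encode())
    return int.from_bytes(h.digest()[:8], "big")

def pick_next(prev_word: str, tied_candidates: list[str]) -> str:
    # Deterministic, key-dependent tie-break among words the model rates
    # roughly equally likely: this biased choice is the invisible fingerprint.
    return max(tied_candidates, key=lambda w: keyed_score(prev_word, w))

def watermark_rate(words: list[str], ties_at: dict[int, list[str]]) -> float:
    # Fraction of known tie positions where the text obeys the rule.
    # Unwatermarked text scores around 1/len(ties); watermarked text near 1.0.
    hits = sum(words[i] == pick_next(words[i - 1], ties_at[i]) for i in ties_at)
    return hits / len(ties_at) if ties_at else 0.0
```

Note that the detector would need the model itself to reconstruct which words were near-ties at each position, which is why only the generator's operator could offer such a check.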


It's a clever idea, but the problem I see with it is the obvious incentive an AI developer has to offer a product that does not include such fingerprints and therefore evades that type of detection. And all they have to do is not put that rule in.

If I had to make an AI detector, I'd design it to exploit a vulnerability that an AI developer would have to go out of their way to avoid, rather than one they avoid by default.


This is not reliable, just a gut impression--but there's something characteristic about how answers seem like little essays. Each paragraph addresses a different point, on topic, but so self-contained that it comes off like a non-sequitur.

People do this too. Sometimes I do it. I think the problem is not so much distinguishing AI from people, as it is distinguishing people from AI.


If anyone read Peter Ziehan's 'The End of the World is Just the Beginning' and got real worried and thinks we're all doomed, I might have a cure for what ails you. I just wrote up my thinking for why his argument is wrong and the global order is not about to fall apart: https://medium.com/@bobert93/contra-ziehan-on-the-world-being-doomed-3f94368314c0


It's Zeihan, not Ziehan.

I agree with your general thrust. Zeihan is directionally correct in his analysis but assumes long run adaptation won't happen. Which it probably will. Societies are confronted by crises and then adapt or fall. While I think he's pointed out a lot of looming issues he just assumes that people won't be able to overcome them.

That said, I think your specific ideas of what's going to solve the problem (particularly the idea that crisis will be averted entirely) are too optimistic.

1.) The idea that AI is going to come and radically transform the economy is very much not in evidence. While I think AI will have some applications I do not think it will replace human workers as quickly as it would need to in order to counteract these effects.

2.) Immigration might help but it's ultimately a limited resource. Further, while the US (and much of the Americas) is very good at absorbing immigrants, this is a societal skill. One Europe does not have let alone places like China. Zeihan brings this up in interviews at least (and iirc in the books): there's a limited number of people who want to move, most of them prefer the US, and they generate more social dysfunction in places that aren't the Americas.

3.) The idea that China is going to force the US to make the same kind of compromises in the Cold War forgets the Cold War. This is something Zeihan points out in the book. Getting the US involved required concerted effort on the part of the European allies including major concessions. Zeihan's argument is that Japan and some East Asian countries have already made such concessions and so will probably not be abandoned. But he thinks Western Europe is refusing to (with the exception of the UK which, in his mind, the US can separate from Europe if it wants to).

4.) PPP adjusting GDP is a bad thing to do in an international competition. PPP attempts to adjust for a basket of regular goods (and does not include military goods). It's meant to acknowledge that while Indians might make less than Americans the rice they buy is also cheaper. But in a competition this is irrelevant. Nominal rates are more relevant. The argument we compete with PPP is basically the argument that China will beat the US because, while the US has more money, China has cheaper toilet paper.

5.) The USSR peaked at 60-70% of American GDP and was also higher in PPP (insofar as we have guesses). So the idea this is unprecedented isn't really true. It's almost exactly the situation of the 1970s/80s right before stagnation took over followed by collapse. I think Zeihan is being a bit simplistic in expecting a similar repeat. But he's right this does look an awful lot like that with bad long term trends leading to what looks to be a stagnant period.

Further, your comparison of economic growth to population ignores his point. Societies rely on workers to use capital generated by previous generations who use that capital to save up for eventual retirement. The issue is not simply the ratio of workers to non-workers but that as these workers retire that money gets shifted from investment to consumption. This means there's less capital to support growth and simultaneously you need greater extraction from labor to support retirees (which disincentivizes labor). This problem is actually worse in China due to a weaker social safety net and a reliance on low to mid productivity labor. Productivity growth needs to boost both wages and returns on capital enough to make up for this shift which is FAR more than 2%.

I think Zeihan is broadly correct the world is structurally heading for a crisis and the US looks to be least affected. But his analysis is very "line goes up." He assumes that people won't react to trends which is the big hole in my opinion. For example, if you know China's heading toward economic stagnation and demographic decline you could then go through a menu of options and see which are most likely. But he just kind of skips over this and assumes there's no plausible reaction.


Your post suggests that a 50% decrease in population requires a 50% increase in productivity to stay at the same output, but that is wrong - you need a 100% increase in productivity. Half the people requires double the output per person, not 1.5x.

If you account for compounding, which does help some, you need an increase of 2^0.04 per year for 25 years: 1.028, or 2.8% annual growth.
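That figure is easy to verify:

```python
# Output per worker must double over 25 years, so the required annual
# productivity growth g satisfies (1 + g) ** 25 == 2.
g = 2 ** (1 / 25) - 1
print(f"{g:.4f}")  # 0.0281, i.e. about 2.8% per year, as stated
```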


Yeah, that's right, but a halving of a generational cohort doesn't mean a halving of the working-age population. That would take twice as long to halve, as it contains approximately two generations. E.g. Millennials and Gen X are both currently working generations.


It seems inevitable to me that, at the present rate of progress, AGI will come to fruition sooner or later, and in most respects it will reflect human nature in character, with humans' flaws and virtues.

So I reckon the most likely way to maximise the chance of AGI safety is to ensure that it is an "average" ensemble of attitudes distilled from many human minds (besides its superior and faster intellect, of course), and not trained and based on one or a small group of humans whose attitudes and ambitions were abnormal and quite possibly in part pathological. In short, when training an AGI, democracy should be the watchword: safety in numbers!

After all, serial killers or doomsday cultists, or others with insatiable destructive passions, are very much a minority; on average, people are mostly fairly laid back and content, albeit of course with self-survival instincts that might be worrying when incorporated in an AGI. And base instincts which most of us share in varying degrees, such as greed or lust, are not applicable as AGI attributes.


Recently there's been a lot of discussions of AI risk due to the explosion in LLM development, especially by EY and other AI Alignment people. Which of the following statements is closest to being true about these discussions?

1. Alignment people genuinely believe that GPT-X (or equivalent LLM-based models) can lead us directly to AGI and are thus ringing the alarm bells before it's too late.

2. They don't think GPT-X will lead to AGI but think we're setting up a bad precedent by moving so quickly with LLMs and therefore sound the alarm to setup a precedent. This doesn't matter for GPT-X type of tech but would matter for some other, yet to be discovered, technology.

3. The explosion of interest is a great opportunity to obtain funding for AI Alignment research, so they're ringing the alarm bells primarily as a fundraising tactic.

4. No one knows whether or not LLMs are actually dangerous and there's no deep strategizing going on in the background. All the reactions are just standard instinctive reactions to major AI developments.

I'm leaning towards #2 for highly knowledgeable people such as EY and #4 for people who only have cursory knowledge about the problem. What's the real answer?


I think there are two questions, "why" and "why now".

I think #4 is more true than #1/#2; nobody knows whether GPT-N can produce AGI, since we haven't gotten a clear "no" and the only way to get a clear "yes" would be proof-by-pudding which probably means we all die (and it would be stupid to take that chance). This is the "why".

I think #3 is false. The "why now" is not so much that there's *funding*, but that there's *political will*. Funding is all well and good, but there's a strong suspicion among alignment people that neural nets *cannot* be aligned. If this is true, then the only way to save the world is to stop anybody building neural-net AGI until and unless we have aligned GOFAI/uploads/superbabies. "Don't do this" does not yield to funding. It does yield to legal force (i.e. bans enforced by police) and, on an international level, military force (i.e. if someone in a rogue state builds a datacentre anyway, we blow it up with air or missile strikes) - both of which require political will.


I assure you it is mostly 1 and 2. For me it's 1.


(To be clear, I'm answering on behalf of "alignment people" which is only like 1000 people total depending on who you count. Arguably more like 100. If you broaden scope to include tens of thousands, even hundreds of thousands, of people, many of whom are just starting to think about these issues thanks to ChatGPT, then yeah 4 is a big part of it.)


It varies, but 2 is most likely, especially for EY.

He says he literally wouldn't even know how to use millions of dollars on the margin to make AI doom meaningfully less likely, and says that ChatGPT is not going to kill us all, but it is leading to billions of dollars being mindlessly invested in AI development, and large companies are racing each other to get systems developed and to market, which precludes the possibility of proper alignment work being done on them in time.

I think some people are #4, but because they are directionally correct and the issue is so important, I don't think it really matters. If political regulation of this tech would help, and getting more people worried about AI is needed to get the political will to do this regulating, then 4 is the best we can hope for. We aren't going to get people to thoroughly understand the Yudkowskian case at scale.


There's been lots of discussion and pieces written on LLMs lately, so let me throw mine into the mix. I respond to dismissive criticisms and offer a positive argument in favor of LLM understanding. I also talk about some implications for humanity and society. There's also good information added in the comments.

https://www.reddit.com/r/naturalism/comments/1236vzf/on_large_language_models_and_understanding/


Like many others, I've been reading and thinking a lot recently about AI doom scenarios. I find a lot of the claims made in these doom scenarios about how the AI escapes control or exterminates humanity intuitively implausible, but I wanted to see if I could clarify my thinking on these and form a better idea of how likely different scenarios are. This led me to think about what capabilities a general superintelligence (henceforth GSI) would have and how it could affect progress in various areas. I don't have a blog or anything and it's not a lot anyway, but I wanted to share what I came up with and this seems like a good place for it.

By general intelligence, I here mean the capability to grasp arbitrary information and draw inferences from it. For instance, learning the rules of chess doesn't take much intelligence, nor does knowing the locations of the pieces in a given game state. But being able to infer from these locations what is a probable winning move takes more intelligence. The more intelligence you have, the stronger the moves you can find. You may find these best moves by being able to "roll out" many long sequences of moves, or by developing better heuristics about what moves are good in what situations; either way, we'll call this intelligence. GSI is just a matter of substantially greater degree. In our chess example, a GSI would be able to consistently find much stronger moves than any human player analysing the same board, after a comparable amount of experience with chess. By definition, this capability extends beyond chess to any problem we might imagine. There are legitimate questions of whether truly general intelligence is possible, or whether advancing narrow intelligence past a certain point requires sacrificing generality, but for the sake of this post I'll assume that it is, and it doesn't.

However, intelligence is only one factor in solving problems. Two others are data and power. Chess is a kind of problem that is bottlenecked by intelligence. Both players have access to the same data (the full state of the board) and the same power (the set of available pieces and the moves they can make with them). We could change this, adding a power bottleneck for one player by giving them only a king and the opponent a full set of pieces. In this case, GSI will be of little use - even a relative novice could beat Stockfish most of the time in this scenario. Or we could add a data bottleneck by hiding most of the game state from one player, maybe showing them only the locations of their own pieces.

So I can speculate about which factors (intelligence, data, and power) are the bottlenecks in various areas or specific problems, and this may give us a sense of how much help / danger a GSI would be in those areas. Of course I acknowledge that these factors often interact - we can sometimes use power to obtain data, or intelligence to obtain power, etc. Hopefully others can share their thoughts and correct obvious errors or blind spots in the below.

Fundamental physics: right now, it seems to be mainly bound by data / power. We have plenty of theories about how to unify quantum mechanics and general relativity, but the experiments needed to test them are way beyond our physical reach. We would need far bigger accelerators than we can build or power, for example, to gather the needed data. So we should not expect progress in physics to be accelerated much by GSI.

Microbiology & biotech: Here there is ample data and plenty of power to conduct experiments. But biological systems are incredibly complex with many moving parts; progress is plausibly limited by the ability of an individual biologist to hold these parts and their dynamics in their head. So GSI may accelerate this a great deal.

Nanotechnology: Unclear. Potentially GSI could accelerate progress a great deal, if experimentation could be automated and made to take place very quickly. But depending on the application, experiments might necessarily be quite slow to conduct and observe the effects. Also, the physical limits of what is possible here are largely unknown, and may prove to be very limited. Are remote-controlled diamondium nano-assassins alluded to by Yudkowsky even possible in theory? We can only guess. Still, this uncertainty should give us reason to worry.

Psychological control: Here I'm talking about the ability to manipulate an individual person's actions by observing them and communicating normally with them, without any kind of brain-machine interface. This one is relevant to the likelihood that a "boxed" AI could persuade its handlers to release it. This strikes me as being heavily data-bound. Only limited and noisy information about a person's inner state is ever available, so most relevant data is hidden, and the controller's power through the slow, coarse method of speech is more limited still. And on top of that, minds appear to be chaotic systems, like the weather. These systems defy prediction because of their extreme sensitivity to starting conditions; even with a perfect simulator, a tiny error in starting data can throw predictions completely off. The purported outcomes of a handful of online role-playing games (https://www.yudkowsky.net/singularity/aibox) notwithstanding, a GSI probably can't do much better here than the most adept human manipulators. Of course, that means it's far from impossible. But given a savvy subject, I think it would remain very difficult.
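As an aside, the sensitivity-to-initial-conditions claim is easy to demonstrate with the logistic map, a standard toy chaotic system; this is my illustration, not the original poster's:

```python
# Two logistic-map trajectories starting one part in a billion apart
# decorrelate completely within a few dozen iterations.
def step(x: float, r: float = 3.9) -> float:
    return r * x * (1 - x)

a, b = 0.5, 0.5 + 1e-9
for _ in range(50):
    a, b = step(a), step(b)
print(abs(a - b))  # typically order 0.1: the tiny initial error swamps the signal
```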

Political control: Here I mean the idea that a government with access to a GSI, or a GSI in a position of political power, could "lock in" its regime for all time by essentially out-gaming any internal threat to its hegemony (we'll ignore external threats here). For essentially the same reasons as in psychological control, I think this is fundamentally data-limited: a polity is also most likely a chaotic system, so increasing intelligence will tend to yield rapidly diminishing returns.

And that's all I've got so far. I'm very interested to hear other people's thoughts and critiques.

EDIT: I just saw someone posted this link in an earlier comment: https://betterwithout.ai/radical-progress-without-AI A quick look indicates this covers similar ground in much greater depth. I'll have to give it a read.


I think, right at the outset, it would be very useful to distinguish between deductive and inferential intelligence. Discovering the best move for a chess position, or finding the roots of a 5th-order polynomial, are exercises in deductive intelligence: you know the rules, there's a rigorous algorithm, you follow it scrupulously and you will arrive at the answer. This is the kind of intelligence demonstrated by a pocket calculator, albeit in a very restrictive environment. The general success of computer programming is due to the fact that computers can follow algorithms very, very fast, and make no mistakes. So, deductive intelligence to a high order.

But I suspect most people consider "general" intelligence, of the kind humans credit themselves with, as more of the inferential variety. When you observe a huge mass of data, can you imagine new algorithms that would predict or explain it, starting from a small set of assumptions containing a small number of parameters? Can you deduce the rules of chess from observing some games? Can you suggest new rules for chess that would fit together well with the existing rules, and create a new game? Can you invent chess from scratch? Observing many relationships between numbers in the real world, can you invent the concept of a polynomial, classify them usefully, gain an intuition of when solving a 5th-order polynomial would be useful and when it would not be? These are all tasks at which human beings excel -- it's what makes us the programmers, not the programmed. We invent ideas, concepts, a structure to reality which simplifies perception, cuts through irrelevancy, can compensate for missing or noisy data, and makes investigative effort efficient.

Computers have so far been pretty incapable of this. I think the excitement about LLMs (and neural net models in general) is that they *have* demonstrated inferential ability -- the ability to infer patterns in a large mass of data, e.g. to discover patterns in human speech without having the rules of grammar and meaning hard-coded in them, and to be able to generate new speech that fits well with the patterns (much like after you fit a 5th order polynomial to data, you can go on to infer the values of data points not part of the original set).

Optimists think there's not a lot of distinction between the universe of human writing and the universe of human imagination. If a few billion parameters suffice to encode most of the patterns of human writing, surely a few tens of billions will be sufficient to encode most of the patterns of human thought! Immediate descendants of LLMs will be able to infer, say, the pattern in human political and ideological beliefs from an examination of history, and be capable of coming up with new political parties and new philosophies of meaning on their own, by extrapolating the patterns of those we have already invented.

Pessimists think this is silly: the universe of possible ideas seems a priori infinite, and infinitely dimensioned, and even if we restrict it to ideas that human beings are capable of having, it still seems very, very big, and very, very highly dimensional -- far more so than speech itself, inasmuch as new speech often has to be invented to accommodate new ideas -- and success in inferring the patterns of speech says boo about the ability to infer the patterns in ideas, even merely human ideas. A lesser criticism is that even if those patterns exist, it seems a bit doubtful that you can infer them in a reasonable amount of time -- i.e. ~500 million times faster than human thought evolved -- with the data economically available, in a blind steepest-descent training regimen.

There's no way to know a priori whether the optimists or pessimists are right, since neither argument is founded on any kind of strict deductive logic, which we could prove by math if necessary, so we just have to wait to see if the optimists can prove their correctness by demonstration, or the pessimists win the day (for the moment) by the failure of talking AI to do more than remain a parlor trick for the next half century.


I think that the discussion of AGI-Foom existential risk some years ago was based on some assumptions that, with current information, might not be entirely true. Namely, back then it seemed (also to me) that the most likely path to powerful AGI would involve (a) an agent trying to optimize some goal about the real world, and (b) self-optimization.

Now when I look at GPT-4, it seems plausible that we'll be able to fulfil our commercial goals with something that's fundamentally different from that. The risks which apply to a system trying to achieve an arbitrary goal (like the proverbial paperclip maximizer) don't necessarily apply to a system which doesn't even have a concept of goals (as desired or less-desired states of a world) or of reality (the difference between an imagined world and our own). Also, handing over self-improvement to a goal-oriented AGI (the point where value drift/value stability becomes critical) was IMHO intended as the way we would make up for capabilities we want but are unable to implement ourselves; but if it turns out that those capabilities are just "low-hanging fruit", achievable by throwing the required amount of compute at them, then the motivation to design something powerful and self-improving mostly disappears.


Hi Scott (or anyone who takes Scott's position in The Media Very Rarely Lies). I'm sympathetic to your position on the media, but...

I was thinking the other day about fictional depiction of real events. The Crown (Netflix) in particular has come under criticism in the UK for mixing fact and fiction. Two questions: does this count as "the media" and does it count as "lying"? Is it the media? Netflix is also in the documentary game, there are some Diana documentaries on there. Many companies produce both journalistic content and fictional content. Is it lying? Pure fiction isn't lying but sticking alternative facts into a supposedly true story looks awfully like lying to me. And when presented alongside actual journalism on the same platform, it enables viewers to jumble up fact and fiction in the desired way, with plausible deniability for the company (because the documentary obeys journalistic law and professional standards, and the fictional account is just a fun story). Am I being unfair?


I do worry a bit about the sort of "alternative history" that has just enough verisimilitude to convince people it's the true story, but just enough made-up stuff that it badly skews the average viewer's understanding of the event.

The reality is that most people’s view of history, if they have one at all, is probably shaped more by popular media than by serious documentaries. Partly just because movies are a lot better at giving a visceral sense of the event - no Wiki article or dry film reel is going to shake Titanic’s depiction of the sinking, or Saving Private Ryan’s depiction of Omaha Beach (which are both actually reasonably good, but aren’t perfect).

I haven’t watched the Crown so I have no idea if it lands in this uncanny valley or is more blatantly a historical fiction, but it’s certainly possible.


To summarise my evil plan for global mind control:

1. Build a website devoted to serious journalism. Win Pulitzer Prizes, establish a reputation as an impartial, trusted source of news.

2. Develop a sideline for fictional versions of the same events covered in my journalism. In addition to well written, accurate depiction of events, I just plain make stuff up that never happened. When challenged, I say "Cheer up, it's just a story"

3. Build an algorithm encouraging viewers to move promiscuously between fictional & journalistic content ("Did you like The Interview? You might like The Korean War and its Aftermath").

4. Build an army of simpletons capable of reading serious journalism with the correct emotional key, developed through watching the fictional content.

5. If smart people raise any objections, stress the differences between journalism and fiction, and reassure them people are smart enough not to be taken in by silly stories. Make sure to compare any government attempts at controlling misinformation to Orwell's 1984.

Expand full comment

"Opinion" pages beat you to the punch decades ago, I'm afraid.

Expand full comment

Opinion pages are a good example of what I'm talking about, but I don't believe you can say "Prince Charles and John Major discussed overthrowing Queen Elizabeth II" in an opinion column, because that would be lying!

Expand full comment

If you write it as "various sources have long posited that Prince Charles and John Major discussed overthrowing Queen Elizabeth II", you're fine in an opinion column. And 95% of readers won't internalize that as any different from a factual statement.

You can't get away in an op-ed column with stating that the sun rises in the west, or contradicting some other directly observable fact. But that's about it.

(I'm a recovered newspaper reporter and formerly a lifelong newspaper reader, and have a sibling who was a reporter and editor at one of the largest daily newspapers in North America. So I am saying this perfectly seriously, from a good deal of direct knowledge of the field.)

Expand full comment

Interesting. Libel laws are stricter in the UK I believe but this is not my area so I'm not going to go out on a limb. I basically agree with you about opinion pages anyway.

Expand full comment

Much stricter, especially when the subject is a public figure; my lay understanding (IANAL) is that public figures in the US have to just suck it up & deal the vast majority of the time.

Expand full comment

I'm not sympathetic to Scott's position, because he's giving lies of omission too much of a pass. What he should have said is that all mass media is house media for somebody: D-party house media puts the case for the D party line, R-party house media puts the case for the R party line. It's like prosecutors giving a different view of the accused than the defense does. Social media can tell the truth or lie; mass media is always propaganda.

Lies of omission are still not the truth, the whole truth, and nothing but the truth.

In the confident hope that he will change his position, I add that everything about the British Royal Family is show-biz kayfabe. Bagehot, the ornamental function of government, all that.

Expand full comment

It's not just lies of omission that are being excluded from that definition, but significant amounts of information (most?): denotation, syntax, rhythm/agoge (the last of which is somewhat understandable, since it doesn't get conveyed well in print).

Expand full comment

Even if wrestling is staged, you can still tell lies about how it was staged. Hard to say precisely how the royal family or other showbiz gossip really matters, but it isn't trivial. Most people aren't interested in party politics, but everyone in the UK had an opinion on Charles & Di.

Expand full comment

This is probably true in a moral sense, but how would you define a lie of omission? All the definitions I can think of are variations of “you can’t omit strong arguments that whatever you’re saying could be false”. But then someone has to decide whether an argument is strong. A “reasonable person” test won’t work here if the subject is political, because how strong an argument appears to someone depends on whether they agree with its conclusion.

In rare cases, the media lies by omission in ways 90% of people on both sides would think are wrong. It would be nice if those stopped, but I think the problem wouldn’t go away if those rare cases were fixed. (I say “rare” because I doubt you have or can easily find an example from 2023 from one of the sources on page 18 here https://knightfoundation.org/wp-content/uploads/2020/03/KnightFoundation_AccuracyandBias_Report_FINAL.pdf, which gets 9 people agreeing it’s bad for each one saying it’s fine if you post it in the next Open Thread.)

Expand full comment

I would define lies of omission as statements that fail to give the truth, the whole truth, and nothing but the truth. This is a moral point, but also practical and forensic. Witnesses in court every day swear to do this, and many keep their word. That is why the court system works, when it does.

I don't expect this from mass media, because I expect all mass media to be house media arguing as advocates.

Social media, where one speaker says what they think, tell the truth or lie, by omission or commission. It's on them either way. They have no duties to their employer as house media. They are liars or honest, smart or dumb, informed or ignorant. It's them, the night, and the music. Smart liars agree with Flashman that 'Suppressio Veri is a useful servant, while Suggestio Falsi is a dangerous master'. Smart honest people often fail to give full respect to strong arguments against their position, from weakness of mind. Been there.

Strong-minded honest social media speakers are vital.

I don't share your respect for polls. That's on me.

Expand full comment

From Wikipedia on perjury: “Further, statements that are facts cannot be considered perjury, even if they might arguably constitute an omission, and it is not perjury to lie about matters that are immaterial to the legal proceeding. Statements that entail an interpretation of fact are not perjury because people often draw inaccurate conclusions unwittingly or make honest mistakes without the intent to deceive.”

The courts don’t define “the whole truth and nothing but the truth”. How would you?

Expand full comment

This is important, but I'm not even sure "lies of omission" is the only issue here - John Major has accused Netflix of introducing completely made-up events in The Crown:

https://www.bbc.co.uk/news/entertainment-arts-63283024.amp

Expand full comment

In Scott’s categorisation, it’s clearly not “lying”. The audience knows it’s a partly fictionalised account. You know it’s got made-up things in there and that they’ve probably distorted some of the characters, and probably so would a “reasonable person”.

I think Scott is talking about “lies” in terms of evidence you could or couldn’t use to justify a position. If you were having an argument, and someone asked for evidence that a particular person had good character, you wouldn’t mention anything from the show, because you know it’s fictionalised. I think Scott’s broader point was that reasonable arguments are still possible between people from different media bubbles. Once you strip out the evidentially meaningless opinion, what’s left is generally true, and in an argument you’d have to skip over the opinion anyway. Similarly, a critical person could extract true facts by listening to a news source with the opposite bias.

Obviously, this still allows all sorts of distortions without lying, as proven (under Scott’s model) by how many disagreements there are. But a critical reader would know not to depend on The Crown on any point of fact, so it’s not making any factual claims which could count as “lying” for such a person.

Expand full comment

I suppose it's not 'critical readers' we're worried about when discussing misinformation, right? If someone is telling a barefaced lie, critical readers will have better defences against that than the average person, and the same goes for misinformation, including the kind that masquerades as fiction. (I enjoy The Crown, by the way.)

What would you say about, say, JFK? Fiction, but clearly designed to communicate a non-fictional proposition, i.e. that JFK was murdered by the CIA.

Expand full comment

Scott’s point is that it’s not misinformation you’re worried about when we say the media is biased.

Expand full comment

Which I broadly agree with but then I see Netflix literally inventing facts about the royal family, albeit in a fictional context

Expand full comment

If you like, the show is a symptom of the more general freedom to be biased. I would expect that removing literal lies (or adding a “this didn’t happen like this” bar to the bottom of the screen during the made-up bits) wouldn’t remove the show from Netflix, but even if it did, removing the show from Netflix wouldn’t solve the bias problem either.

Worrying about this type of show feels to me like worrying about media bias by people whose names start with Q. It’s a problem, but I don’t like any of the solutions that work for it but not for the bigger problem.

Expand full comment

If the show is a problem at all (possibly it's not; I chose it because the stakes don't seem that high in this particular instance), bias is not the problem - making up events about real people, which the people involved deny ever happened, is the issue (see the BBC John Major link in another post). And that seems like a counterexample to the general proposition that the media rarely lies. I'm happy to hear feedback on whether this is genuine lying, or whether Netflix counts as the media, but I'm not sure anyone's really done that yet.

Expand full comment

I'm not familiar with The Crown, but it comes down to how it's presented. If it's presented as a documentary, but is actually fiction (or partly fiction), that's dishonest. If it's presented as fiction, it's fine.

Expand full comment

I wouldn't count it as either The Media, or lying. The guy in the BBC shot you linked to was one of the stars of The Wire, among other movies and shows, and it sounds like Dame Judi Dench is in it too. You wouldn't hire recognizable, professional actors for a documentary. Movies and TV shows are fiction first, and any facts in them are just bonuses.

Not the first show to lie about being true: https://creepycatalog.com/true-story-movie-fargo/

Expand full comment

Fargo is an example of how playing with truth can have deadly consequences. I love the movie, but someone went to find the money IRL and froze to death.

Expand full comment

I think the way people are viewing 'content' now is going to muddy your distinctions there - if I watch the Crown on Netflix, and the algorithm suggests I might also enjoy a Netflix documentary about Princess Diana, I am experiencing a continuous stream of content about the royal family, some fiction, some fact, and the effect is confusion. Some people can unpack that, some can't. But even if you can unpack it, I think the fiction creates subtle biases, within which the serious journalism is then experienced.

Expand full comment

Here's GPT-4 playing go. It loses pretty badly. It plays pretty well though, y'know for an LLM that was never explicitly trained to play go.

https://ethnn.substack.com/p/i-played-a-game-of-go-against-gpt

Expand full comment

"pretty well" is a very kind way to put it, even with a qualifier. It doesn't seem like it could beat any human who knows the rules at all.

Expand full comment

A question about a possible GPT3/4 use case:

I'm learning German and am struggling to find media to consume that hits the sweet spot of competency for me: easy enough to read or listen to, but challenging enough to stretch my abilities.

I'm wondering if I could feed my vocab list (somewhere between 800-1000 words; more if you count tenses and declensions) into GPT and ask it to write me short stories that mostly use my vocab, limiting it to 5-10% new vocab.

Is this something that GPT would be decently successful at?
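
For what it's worth, here is a minimal sketch of how this could look, assuming the OpenAI Python client (pip install openai) and an API key in the environment. The model name, the German prompt wording, and the 5-10% figure are all illustrative, and the model may well drift outside the list anyway:

    from openai import OpenAI

    client = OpenAI()

    def story_from_vocab(vocab: list[str], topic: str) -> str:
        # Ask for a short German story built mostly from the vocab list;
        # inflected forms allowed, at most 5-10% new words requested.
        prompt = (
            "Schreibe eine kurze Geschichte (ca. 150 Woerter) zum Thema "
            f"'{topic}'. Benutze fast nur Woerter aus dieser Liste "
            "(flektierte Formen sind erlaubt), hoechstens 5-10% neue Woerter:\n"
            + ", ".join(vocab)
        )
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative; any chat model would do
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content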

Expand full comment

Oh, and GPT can also generate grammar exercises for you.

Expand full comment

Yes, I've used it a bit for that although I've been careful since it gave me some incorrect answers when I asked it for possessive pronoun exercises.

Expand full comment

Not a GPT suggestion, but I find reading or watching the news to be an excellent thing for my mediocre French. Newsreaders enunciate clearly, use proper grammar, and don't use slang, and there's often plenty of context clues to words you don't know, but they'll also use a wide vocab so you're going to frequently encounter new words to learn.

Expand full comment

I've been playing with things like this. You don't need to supply a word list - it might ignore the list anyway (though you can check that programmatically; see the sketch at the end of this comment) - and it's good to get an occasional new word when training comprehension. You can use a shorthand for difficulty, like "give me 5 sentences at HSK 4" for Chinese, but you could use TOEFL or CEFR or whatever the big German test is. 99% of these sentences will be right; do volume, don't dwell on weird ones, and the errors will wash out over time.

Spend more energy trying to make its stories interesting, give it fun scenarios or topics you find interesting.

You can have it define all rare words below the story.

Gpt is particularly bad at explaining nuances of meaning though. Can I use this word in that context? Even in extreme cases it will fail. Maybe even like "Can the word 'hat' also mean 'horse?'" It will prioritize agreeableness with "in some contexts maybe" unless you heavily prompt engineer it. Similarly with etymology it will often wave off and say nobody knows the origin of a word and there are many plausible possibilities when there is like one main theory and one widely discredited folk etymology.

Also, I had it critique my composed sentences, and the results were very bizarre, full of hallucinations. It play-acted a teacher giving feedback, a mix of praise and critique, but its suggestions were completely incoherent and referred to things I didn't write. Like "your mistake here was using the wrong preposition, 'ball' would have been a better choice" in response to a sentence without any prepositions that was not about any balls.

Have it write idiomatic sentences or paragraphs; it's really good at that. A conversation partner might work. Having it explain a confusing sentence might work in most cases, but is a bit risky. Other language-coaching skills are not playing to its strengths.
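
Since the model may ignore the list, a rough post-hoc check is easy to script. A naive sketch follows - exact lowercase matching, so German inflections will inflate the "new word" count unless the list includes inflected forms or you lemmatize first (e.g. with spaCy):

    import re

    def coverage(story: str, vocab: list[str]) -> float:
        # Fraction of story tokens found in the vocab list.
        known = {w.lower() for w in vocab}
        tokens = re.findall(r"[a-zA-ZäöüÄÖÜß]+", story)
        hits = sum(1 for t in tokens if t.lower() in known)
        return hits / len(tokens) if tokens else 0.0

You could then regenerate whenever coverage(story, vocab) falls below, say, 0.9.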

Expand full comment

More generally, look for graded readers or sites with graded content like LingQ at this stage. Are you using a premade sentence deck in Anki?

1000 is a great milestone, but a psychologically challenging one: you have learned a lot, yet it's still really hard to apply that knowledge to content. Don't give up looking for appropriate content like graded readers, but also don't give up on pushing to expand your base.

Good luck.

Expand full comment

Me>Write 5 example sentences in German using only the most common 1000 words. Use a plain style. Use only words from this list, or inflected forms of them:

[list of most common 1000 words]

GPT-4>

Die Kinder spielen im Garten und haben Spaß.

Der Hund läuft schnell durch den Park.

Wir essen jeden Abend gemeinsam zu Hause.

Der Lehrer hilft den Schülern beim Lernen.

Das Wetter ist heute schön und die Sonne scheint.

Me>Using only those words, write a 5-sentence introduction to Scott Alexander in German.

GPT-4>

Scott Alexander ist ein Autor, der viele interessante Ideen teilt. Er schreibt oft über Wissenschaft, Philosophie und das Leben. Seine Texte sind für jeden leicht zu lesen und zu verstehen. Viele Menschen schätzen seine Arbeit und folgen seinen Gedanken. Scott Alexander ist ein Beispiel für jemanden, der die Welt durch seine Worte beeinflusst.

Can any German speakers confirm that the grammar is accurate?

Expand full comment

The grammar is good.

I don't like 'Beispiel für jemanden' in the last sentence (I would use 'Beispiel jemandem', which is less ambiguous), but that is me being cranky and old school, the former usage is normal since at least 1980.

Expand full comment

No, no, "ein Beispiel jemandem" is "an example FOR someone" but GPT wants to say "an example OF someone". It's kind of an anglicism-smelly wording, but absolutely fine.

Expand full comment

Er ist ein Beispiel für jemanden, der gut ist.

Er ist ein Beispiel, dem jemand [oder jemandem] folgen kann.

Like this?

Expand full comment

Wait, I thought the first sentence means

He is an example of someone who is good

Expand full comment

"Er ist ein Beispiel für einen guten Menschen"?

Now you see the disadvantage of asking a native speaker instead of GPT: The native speaker gets confused, and the more he ponders a sentence, the wronger it sounds to him. If I Google "er ist ein Beispiel für", I get completions like "...gelungene Integration", "...die Barmherzigkeit Gottes", "...die vergeblichen Hoffnungen zahlreicher Wissenschaftler", but also "...junge Leute".

Perhaps I'd phrase it completely differently: "Er ist ein beispielhaft guter Mensch"

Expand full comment

The first sentence means "He's an example for (or to) somebody who is good." It means that the good person is someone else than Scott.

The second sentence means "He's an example whom someone can follow." The bracketed part is wrong, because jemand is the subject of the subordinate clause, thus nominative.

I'd propose the word "Vorbild" instead of "Beispiel". It means a good example to follow, whereas Beispiel is just an example or instance, without any positive connotations.

Expand full comment

Vorbild like "model"

Expand full comment

The example sentences are simple enough that I'm reasonably confident they are correct. The paragraph, assuming it's correct, is pretty close to what I'm looking for, given that I can mostly figure out what it's saying even though there's new vocab in it. But of course that also means I can't evaluate it for correctness.

The example sentences are close to how I've been using ChatGPT to help with my German. When there are multiple words that have similar meanings, and the dictionary definitions aren't enough to differentiate the nuance, I'll ask ChatGPT to write several example sentences for each word. This seems to be working decently well.

Of course, all of these use cases run into the issue that, as a learner, I can't test it for hallucinations, and a lot of what I'm trying to do with it would be hard to check against outside sources. For this reason, so far I've been trying not to use ChatGPT overly much, but it seems like something that _could_ potentially be really powerful.

Expand full comment

For simple sentences involving common words and concepts, I would expect it to be extremely reliable. That said, I doubt it would consistently stick to a set of words given in a list.

Expand full comment

This is a brilliant idea, but I suspect it would be better at rewriting existing media within your vocab limits. Not sure how large a text it could handle at once, but you could probably do it chapter by chapter with some really out-there German work.
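
A rough sketch of the chunking side, assuming a rewriting helper along the lines of the story generator sketched earlier in this thread; the 2,000-character chunk size is a guess, and single paragraphs longer than that would still need further splitting to fit the model's context window:

    def split_into_chunks(text: str, max_chars: int = 2000) -> list[str]:
        # Pack whole paragraphs into chunks of roughly max_chars characters.
        chunks, current = [], ""
        for para in text.split("\n\n"):
            if current and len(current) + len(para) > max_chars:
                chunks.append(current.strip())
                current = ""
            current += para + "\n\n"
        if current.strip():
            chunks.append(current.strip())
        return chunks

    # e.g.: rewritten = [rewrite_with_vocab(c, vocab) for c in split_into_chunks(book)]
    # where rewrite_with_vocab is a hypothetical helper that prompts the model
    # to rewrite one chunk using mostly the given vocab list.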

Expand full comment

Alignement, a short story cowritten with ChatGPT: https://nestordemeure.github.io/writing/fiction/alignement/

Expand full comment

A professor of mine told me it's best not to pursue a PhD in philosophy if originality and creativity are my concern. His reasoning was that the way I write philosophy (aphorisms, rich metaphors, literary devices) would be seen as non-academic wordplay. Does this narrow view hold true for the most part? I do care about writing, but not as much as I want to teach philosophy. Can anyone relate to this problem?

Expand full comment

That seems quite accurate, at least in anglophone philosophy. You may be better off in a literature/English/studies department (in terms of having a successful academic career).

If you're trying to make arguments (which is central in anglophone philosophy), aphorisms and other literary devices can be supplemental, but if that's all you've got, it will be (probably rightly) viewed as obfuscatory.

Expand full comment

In some schools, there's a philosophy department and then a contemporary philosophy department, which is called something like Media Studies. If you want to write about Kant and practice exact, careful argument, you go into the philosophy department; if you want to write about Deleuze and use metaphors and literary devices, you go into the Media Studies department. I'll bet your school has a department where they're doing work that will get you really excited, and it's a matter of finding it.

Expand full comment

Ah, that's something I hadn't considered. This is probably the result of my obsession with wanting to see the label of "philosophy". Thanks for your comment.

Expand full comment