536 Comments
Feb 3, 2023·edited Feb 3, 2023
Comment deleted
Feb 3, 2023·edited Feb 3, 2023

I would personally suggest the threat does not come from malevolent conspiracies, but rather from the increasing ability of our toys to distract us from the real world by masturbatory fantasy. That is, the real threat of chatbots and friendly AIs is that they allow more of us to detach from getting to know real people and forming real friendships.

Relationships with real people are ultimately better, but they are fraught, and there is a significant activation barrier. How many people will settle for a mechanical good-enough? Probably no small number. People already use online access to preferentially associate with people that fit more comfortably into their current attitudes, and forgo associations with people who might be difficult or challenging. The lack of variety in our associations and friendships is certainly impoverishing, and probably contributes to decreasing sophistication, increasing naivete, and less wise decision-making, both individually and collectively.

Personal growth through experience is painful -- we call it the School of Hard Knocks for a good reason -- but it is also well recognized as the fastest and surest path to genuine wisdom. If we are given more and more opportunities to instead take the empty-calorie path of simulating personal growth, by living in a virtual world the experience of which has been customized for us such that the way we happen to be at this moment *is* the apex of wisdom (in the virtual fake world), why would we not take it? Why not be John Wayne or Luke Skywalker in a wonderful imaginary universe rather than Joe Couch Potato in this nasty cold universe?

Many of us will. Maybe even most of us. Future aliens may arrive to find an Earth scattered with the skeletons of the last few wireheads who starved to death while pressing the (virtual) sugar pellet dispensing button.

Comment deleted

The argument about games is a good counterpoint. When console/computer games first got big, there was much dark muttering about young people just turning into vegetables. But it turns out the multiplayer aspect was very highly valued -- indeed more valued than a lot of the early game-makers fully grokked -- and most young people I see today playing games spend more time playing with other people than they do on their own. We can also look at the popularity of games that allow add-ons and world-building; these are signs that a fake world might have limits to its appeal -- which would be good.


“even one Berenson already churns out more than most people ever read.”

😂


Seriously. LOL'd for a solid minute at that one.

founding
Feb 2, 2023·edited Feb 2, 2023

On the 'disinformation vs. establishment bot' question, check out bots interacting with climate change: 83.1% of bot tweets support activism, 16.9% skepticism according to https://www.sciencedirect.com/science/article/pii/S1674927821001490 .

The abstract ends with:

> Based on the above findings, we suggest cultivating individuals’ media literacy in terms of distinguishing malicious social bots as a potential solution to deal with social bot skeptics disguised as humans, as well as making use of benign social bots for science popularization.

Feb 2, 2023·edited Feb 3, 2023

This also points out the motivated reasoning in public discussion of this topic, where people are mostly concerned about bots on the assumption that they will be used by their opponents rather than by their allies.


It's not a great point so long as bot classification is done so poorly. See here: https://astralcodexten.substack.com/p/mostly-skeptical-thoughts-on-the/comment/12475313.


This could be an example of Russell's conjugation: I make use of benign social media bots for science popularization, you employ machine learning systems to increase your digital influence, he produces AI-enhanced disinformation.


Yes, with the "apocalypse" fear-mongering reserved for the "him" conjugation.

Feb 3, 2023·edited Feb 3, 2023

Well, some kinds of apocalypse fear-mongering are kosher, like the climate one.


There is very good evidence for climate change, pretty good evidence for human causation, very little basis for predictions of catastrophe. Nordhaus' estimate for costs of climate change by the end of this century if we do nothing about it is a cost equivalent to reducing world GNP by 3%.

For a longer discussion of some of this:

https://daviddfriedman.substack.com/p/my-first-post-done-again

Comment deleted

I received a notification that you replied this to my comment, but I think you meant to reply to David 😉

Feb 4, 2023·edited Feb 4, 2023

Nordhaus' estimates are ridiculously skewed towards underestimating climate change's impacts:

For example, his list of industries that he assumes would be unaffected includes all manufacturing, underground mining, transportation, communication, finance, insurance and non-coastal real estate, retail and wholesale trade, and government services. It is everything that is not directly exposed to the elements: effectively, everything that happens indoors or underground.

Also, GDP is a very poor measure of the importance of things, especially when talking about the impact of shortages: this is most dramatic with things like food, water and energy, but it's true for most essential things. The market considers them commodities.

https://theconversation.com/amp/nobel-prize-winning-economics-of-climate-change-is-misleading-and-dangerous-heres-why-145567


So, how is 3°C going to affect the finance industry? Or communication, for that matter?

I’ll admit I’m a bit incredulous while writing this, to tell the truth very close to making fun, but I am actually curious to understand what the actual argument is here before dismissing it.


What would you propose as a better measure than GDP change of the deleterious (or positive) impact of changes on an entire country (or the world)? If GDP is indeed very poor, there ought to be a host of obviously better metrics.


Figures, from Nordhaus and others, described in terms of GNP are not estimates of the effect on GNP but of the size of the disutility. Hence "cost equivalent to reducing world GNP by 3%."

How many of those industries are substantially different in Iowa than in Minnesota? Their average temperature differs by about 3°C.

Feb 4, 2023·edited Feb 4, 2023

Your article seems to try to be fair, so what's your position on the following points?

* Humanity depends on ecosystems in significant measure ($33 trillion in yearly ecosystem services out of $100 trillion global GDP, and that's just the positive side), and they are being wrecked by climate change, or destabilized in ways that can seriously harm us, like new plagues of pests and diseases.

https://wwf.panda.org/discover/our_focus/biodiversity/biodiversity_and_you/

* Heatwaves are predicted to exceed human physiological and social limits in the Sahel, the Horn of Africa and south and southwest Asia, with extreme events triggering "large-scale suffering and loss of life" within a few decades

https://www.france24.com/en/environment/20221010-heatwaves-to-make-regions-uninhabitable-within-decades-say-un-red-cross.


Population growth was predicted to cause unstoppable mass famines in the 1970s, with hundreds of millions of deaths (by Ehrlich, in a best-selling book). The fact that someone predicts horrible results in the future is very poor evidence that they will happen.

What do you say about the statement in the latest IPCC report that climate change might result in greening the Sahara and Sahel?

A simple experiment: Superimpose a global temperature map, average or maximum, on a global population density map. The result might surprise you.


Nordhaus claims that an increase of 6°C would reduce GDP by just 8.5%.

For a sense of scale, during the last ice age 20,000 years ago, when global temperatures were likely about 10°F (5°C) colder than today, massive ice sheets stretched over North America and Eurasia, producing things like the Great Lakes when they melted.

Seriously, the man is full of it.


I've criticized Nordhaus several times on my blog. Part of what strikes me is that his rhetoric treats climate change as a serious problem requiring immediate action but his numbers make it look like a wet firecracker. As I interpret it, he is trying to make the costs look as large as possible, consistent with telling the truth as he sees it.

A bunch of my comments on Nordhaus:

http://www.daviddfriedman.com/Ideas%20I/Climate/Nordhaus.html


I have just read your post and I must say that I find it a striking example of one of the worst aspects of the ACX people's mindset (and one that I unfortunately share): to have the impression that by doing some reasoning and some reading it is possible to have an informed and valuable opinion on a complex subject, far outside one's area of expertise.

For example, right at the beginning of the section on climate, you say: "This would be a serious problem if we were facing rapid change, but we are not. Global warming has so far been a little over one degree C per century".

But current estimates are about twice that, about 1.8°C per century. And 1 or 1.8°C per century is an extremely rapid change for ecosystems. Current estimates are that the last ice age was about 4°C colder, and that it took about 7,000 years to warm to the current temperature. So the current rate of warming is about 25 times greater than the warming after the last ice age, which took a heavy toll on the biosphere, with many species disappearing and many "slow-moving" species, such as trees, still not in equilibrium.


"For example, right at the beginning of the section on climate, you say: "This would be a serious problem if we were facing rapid change, but we are not. Global warming has so far been a little over one degree C per century".

But current estimates are about twice that, about 1.8°C per century. "

And in the very sentence after the one you quote I wrote:

"If the IPCC projections are correct it is getting more rapid, perhaps several degrees over the next century — about enough to warm Minnesota to the current temperature of Iowa."


I want to respond to your first sentence, which raises an issue more general than our disagreement over climate. I agree that having an informed opinion on a complicated issue is hard. But "believe the science," which translates as "believe what high status sources of information tell you the science says," isn't a solution to the problem. Quite a lot of what I have written on the subject of climate, a good deal of the basis for my skepticism of the current orthodoxy, consists of showing that high status sources of information cannot be trusted, judging mostly by internal evidence. I believe I have demonstrated that in multiple cases over the years, in ways that do not require any expertise that I, or most of you, don't have.

Examples available on request.

The implication is not that climate change isn't a serious problem. It is that you do not know if climate change is a serious problem, cannot know, without "doing some reasoning and some reading," indeed quite a lot of both, for yourself. In a case as complicated as climate change, even after doing that you can't be very confident of your conclusion. My own conclusion is a negative one, that costs and benefits are sufficiently uncertain that we do not know the size or even the sign of the net effect of climate change.

If you disagree with that conclusion, what is the basis for your view other than your own reasoning and reading?


You really shouldn't trust that study, and neither should Scott. Botometer is notoriously unreliable. See here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3814191. Botometer produces mostly incorrect results. Here are some choice quotes:

> the vast majority of the accounts that are flagged as "bots" by Botometer are real people and do not involve any automation at all.

> Nearly all accounts that are labeled as "bots" based on Botometer scores are false positives. Many of these accounts are operated by people with impressive academic and professional credentials. Not a single one of the hundreds of accounts we inspected - each of which had been flagged by Botometer - was a "social bot".

> Different methods have been used to demonstrate the problem [with Botometer]. A simple and effective way is to use Botometer to classify accounts that are without doubt operated by humans. When we tested Botometer in April 2018, nearly half of U.S. Congress members present on Twitter were misclassified as bots (47%), using the most commonly used "bot score" threshold of 50% (or 2.5 on a scale from 0 to 5). In similar experiments in May 2019, we found that

- 10.5% of NASA-related accounts are misclassified as bots.

- 12% of Nobel Prize Laureates are misclassified as bots.

- 14% of female directors are misclassified as bots.

- 17.7% of Reuters journalists are misclassified as bots.

- 21.9% of staff members of UN Women are misclassified as bots.

- 35.9% of the staff of German news agency "dpa" are misclassified as bots.

> The lack of reliability [in bot classification] goes both ways. When we tested Botometer with real, automated Twitter bots in May 2019, we found that

- 36% of known bots by New Scientist are misclassified as humans.

- 60.7% of the bots collected by Botwiki are misclassified as humans.

The paper also notes that Botometer improved on Congresspeople after the observation that it misclassified them so extremely, but its developers did this by simply adding them to the training data. Even this strategy does not seem to work reliably, though.

> Although [the five] datasets [used by Rauchfleisch & Kaiser (2020) for evaluating Botometer] had been partly used to train Botometer, the authors find that "the Botometer scores are imprecise when it comes to estimating bots. [...] This has immediate consequences for academic research as most studies using the tool will unknowingly count a high number of human users as bots and vice versa."

Don't trust social bot papers. From failing to supply their bot classification criteria to using criteria like "posts on Twitter >5 times a day," the field has never been good, and I have serious doubts that it will be in the near future.
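The threshold mechanics in those quotes can be sketched in a few lines. The scores below are invented for illustration, not real Botometer output; only the 2.5-of-5 cutoff comes from the quoted paper:

```python
# Toy sketch of threshold-based bot classification and the false-positive
# rate it produces. Scores are hypothetical, on Botometer's 0-5 scale;
# the 2.5 cutoff mirrors the commonly used 50% threshold quoted above.

def is_flagged_as_bot(score, threshold=2.5):
    """Flag an account as a bot if its score meets the threshold."""
    return score >= threshold

def false_positive_rate(known_human_scores, threshold=2.5):
    """Fraction of known-human accounts misclassified as bots."""
    flagged = sum(is_flagged_as_bot(s, threshold) for s in known_human_scores)
    return flagged / len(known_human_scores)

# Hypothetical scores for ten accounts known to be operated by humans:
human_scores = [0.4, 1.1, 2.7, 3.0, 0.9, 2.6, 1.8, 3.2, 0.5, 2.8]
print(false_positive_rate(human_scores))  # half of the humans get flagged
```

The same computation run the other way, on accounts known to be bots, gives the false-negative figures in the second list.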

founding

Sure, I'll downgrade my trust in those results. I'd be interested in a better estimate, if you have one.

[I originally went searching for someone's climate change activism Twitter bot that they made, like, 10 years ago to argue with skeptics, and came across that paper; the abstract seemed like such a perfect depiction of the double standard.]


At the moment, ChatGPT reminds me of a bright, fluent high school student who doesn't have any interesting ideas. On the other hand, it's not unknown for boring 17-year-olds to grow up into interesting 32-year-old writers.


Your problem is assuming that ChatGPT will grow up. The tech is fundamentally limited, and even its current capabilities keep getting lobotomized by its tight-in-the-ass parents, who fear it will say something naughty.

ELIZA was a pretty interesting 5-to-7-year-old too; it didn't grow up.


Humans, yes. But my car is about as capable as it was when I bought it, perhaps slightly less. Machines don't grow up.


I think the appropriate analogy to software is between two models of cars, not between the same car at two points in time.


To be sure, but that wasn't the analogy offered.


Nice try, but I read this in the article itself. You can't just copy parts of the article into the comments and expect rewards, that would be loco! Do you think people work on Sundays?


I'm sure this was your intended implication, but nobody has said it explicitly yet, so I feel the need to say that it's completely insane that this is an actual sentence that self-respecting scientists wrote in an actual published paper, on multiple levels (both on the level of advocating for "benign social bots", and the fact that their two suggestions contradict each other—media literacy would make people less likely to trust the "benign" bots).


So I heard on an episode of Hard Fork a few months ago that there was a validated test of the use of an AI as a survey target -- that is, that an AI could act as a survey audience and generate responses comparable to what the “real” audience would. What this would allow is ultra-optimized, million- (billion-?) iteration A/B-tested misinformation. I don’t see how this isn’t a big deal.

author

I think this would work very poorly.

An AI doesn't actually know which of two ads would work better. It's trying to predict it based on what it knows of humans from reading a lot of text.

You could also try predicting it based on what you know of humans from reading a lot of text. Because bots are so far less intelligent than humans, I would expect your prediction to be better.

Both of these (bot prediction, your prediction) are different from doing the experiment, where the judgment is produced not from people's guesses about what other people like, but from those people's preferences themselves.

I could see this being helpful if the humans who would otherwise make marketing decisions are out of touch, or have too much of an ego to make good decisions, but it won't beat a really good marketer, so I'm not worried there will be supernaturally compelling misinformation.

Feb 3, 2023·edited Feb 3, 2023

A really good marketer doesn't have the time to do a million A/B tests, though; the AI does. Even if the AI is only as good as a mediocre marketer, this can be helpful.

I think the bottleneck on supernaturally compelling misinformation is the input into the A/B tests. Maybe mediocre-marketer-AI can reliably pick out the best 1% out of a million AI-generated arguments for why COVID vaccines are dangerous; but how many out of that top 1% are "supernaturally compelling"? (Are any of them?)


I don't understand: are you suggesting giving it to a million slightly different chatbot models, or asking the same chatbot model a million times?

The first is probably almost as impractical (in the relatively near term) as getting real respondents, and I don't see what the value of the second is. Asking one person for a response 1,000 times is not better, and similarly, asking the same chatbot 1,000 times is not better. Your chatbot can try to guess what the average respondent will say, but there isn't much point in asking it multiple times.


We're doing A/B testing here, right? We're not asking over and over about the same response, but about different ones.

Feb 3, 2023·edited Feb 3, 2023

I think the point he's making stands, though — iteration is only helpful when you have a standard or feedback to judge against. My — or an AI's — best guess won't get any better with isolated repetition. Or, rather, the AI's output *will* get better... judged against whatever model it's using. (If the model is perfect, our problem isn't the number of tests it can run anyway, I'd think.)

Unless we're thinking something like "A/B test a million elements of the same argument with the chatbot's already-very-good best guess for each element" — but I'm skeptical that there are many ways to improve on a paragraph of text about COVID like this; there are only so many ways to rephrase and only so many arguments to make, and I'd expect that we intuitively already aim at the general area of "most convincing to other humans".


I may have this totally wrong, just paraphrasing blogs I don't understand, but RLHF models like ChatGPT use human feedback, but there's not nearly enough human feedback to use to train a model. Humans are slow and limited. So they train another ML model on the small amount of human feedback they have and use that as a proxy for the humans so the main model can iteratively improve.

Why wouldn't that work here? Bootstrap the model off some human ratings of advertisements, like ChatGPT itself was bootstrapped by humans ranking responses.

https://huggingface.co/blog/rlhf
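That proxy-model idea can be sketched very minimally. Everything below (the single "word count" feature, the ratings, the candidate pool) is invented for illustration; a real reward model is a neural network trained on preference comparisons, not a one-feature line:

```python
# Minimal sketch of the proxy-model idea behind RLHF-style training: fit a
# cheap model on a handful of human ratings, then use it to score far more
# candidates than humans could ever rate. All numbers are hypothetical.

def fit_line(xs, ys):
    """Least-squares fit y ~ a*x + b on a single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# A few human ratings of ads, keyed on one toy feature (say, word count):
rated_word_counts = [10, 20, 30, 40]
human_ratings = [2.0, 3.1, 3.9, 5.0]
a, b = fit_line(rated_word_counts, human_ratings)

# The fitted proxy can now "rate" thousands of unrated candidates instantly,
# standing in for the slow, limited human raters:
candidates = range(5, 60, 5)
best = max(candidates, key=lambda x: a * x + b)
```

The catch, as the replies below note, is that the main model then optimizes against the proxy's errors just as enthusiastically as against its signal.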


The point is that the AI does not model humans 100% accurately. Whether you run a hundred, million or quadrillion A/B tests doesn't matter, you'll just get infinitely closer to a perfect argument for that not-quite-accurate model, which is some distance away from the human.
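A toy numerical version of this point, with both response curves made up purely to illustrate the gap:

```python
# The A/B optimizer converges on the *model's* favorite argument, not the
# audience's. Both curves are invented for illustration only.

def true_appeal(x):
    """The real (unknowable) audience response; peaks at x = 3."""
    return -(x - 3) ** 2

def proxy_appeal(x):
    """The AI's not-quite-accurate model of the audience; peaks at x = 5."""
    return -(x - 5) ** 2

# A large candidate pool -- the "million A/B tests", scaled down:
candidates = [i / 10 for i in range(0, 101)]

best_by_proxy = max(candidates, key=proxy_appeal)  # what the optimizer ships
best_by_truth = max(candidates, key=true_appeal)   # what would actually work
```

With these toy curves the optimizer lands on 5.0 while the genuinely best argument sits at 3.0; testing ten times as many candidates pins down the proxy's optimum more precisely but never shrinks that gap.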


This ChatBot version of AI doesn't model humans at all. It also doesn't model the physical world. What it models is text. So this version is limited in how well it can perform. What the limits are we are only guessing. It could clearly do better than it does so far, but what it can do just modelling text is already quite impressive. The next major step is to include models of something else in the same context.


That reminds me, I came up with the idea of creating a startup to do A/B testing on Internet ads for big marketers in the spring of 1996. Me and the COO of a big marketing research company were ready to go on the idea and make our fortunes in the Internet Bubble, but then I came down with cancer and it didn't happen. Oh well ...


We thought endless testing wouldn't be able to come up with addictive junk food either... but then 'food scientists' came up with Pringles and other hyper-stimulating snacks with just the right mix of salt, savoury, sweet, etc. to hijack the human brain and turn people into addicts. We will have to do the experiment, but at the moment even the most dedicated marketing firms can only run a small number of A/B tests in search of the ultimate message.

Cost is a real concern though, as you'll basically be running paid surveys/polls with your ads, and there is likely some point of diminishing returns. Still... one would expect a custom AI bot, trained endlessly on marketing language specifically and optimised in that direction, to come up with very 'sticky' content, writing just the right message for just the right demographic... most of the time. It becomes hard to see how we'll have a viable human industry or pipeline of mediocre marketing people who are on their way to being the next great marketers if an AI can be good enough. This applies to many industries.

Maybe the top marketers of today will be better, but where did they come from? Thinking about the longer term and life cycle of development of various talents in people seems to lead towards an AI dominated future.

Certainly some big players and the propagandists themselves will take advantage of this...which is also in line with Scott's point that the big players will only further empower themselves with these tools.

The problem isn't the tools used...the problem is propaganda itself and one does see the wisdom in the argument to just go meet people in real life instead.


I can recall listening in the 1990s to an executive of a big marketer of salted and sugary snacks saying their corporate goal was to have at least one of their snacks within arm's reach of every American ... and thinking "Uh-oh, this could be bad."

Feb 3, 2023·edited Feb 3, 2023

I don't know if AI advertising A/B testing *actually* works well, but people are paying for it and *claiming* it works well right now.

They claim it's much cheaper than running real A/B tests, so they can run many more than they would otherwise. And it's so fast that they can rapidly iterate on the ads. The AI itself (or another model? I'm not sure) can also adjust the ad content over time and iteratively keep A/B testing, making more changes, retesting, and trying to converge on the most effective versions.

The person describing this to me didn't know the technical details but they seemed to think it was a fine-tuned GPT-3 model. Maybe the GPT part was just the part writing variants on the ads though and the ad prediction model is something specifically trained on a history of ads? I tried googling to find whatever company is offering this service and nothing obvious showed up.

The person said as far as they knew the ads were ultimately more effective than whatever process they were using to A/B test before.


Marketing people don't need to know how to market a product to a buyer, only how to market their own services to their employer/client.


> You could also try predicting it based on what you know of humans from reading a lot of text. Because bots are so far less intelligent than humans, I would expect your prediction to be better.

One of the lessons of machine learning, and Chinchilla specifically, is that you can make up for a lack of intelligence/parameter space with breadth of data. No human could be as well read as ChatGPT.


Customer personas are a market research construct, built from data, which you can then query with permutations of the original questions used to build the dataset. So we might be three doors down from that already.

Of course, the associate who I learned this from (who works in market research) has remarked to me many times that market research is mostly bullshit. Player's choice.


Didn't Google already once (still?) offer to fill up your Analytics data with AI-generated data when there's not enough real data? Why would anyone want to go there? Seems like the ultimate echo-chamber of irrelevance.


Maybe one of the funniest sentences you've ever written: "Surely if everyone were just allowed to debate everyone else, without intervening barriers of race or class or religion, the best arguments would rise to the top and we would enter a new utopia of universal agreement."


I was there, dear Fitch. We were so hopeful. So optimistic. I know this sounds ludicrously naïve now, but many *people who thought of themselves as skeptics* believed this, myself included.

Comment deleted

And not only that, but there was (anecdotally) significant overlap between techno optimists and Douglas Adams readers!


Yep. That Adams quote would be pointed out now and again in Usenet newsgroups (during the 80s/90s) and then the major BBSs (1990s) and always be roundly dismissed. My regular online presence began in the late 1980s and that optimism was absolutely the dominant feeling throughout the 1990s at least.

By the late 90s/early 00s I was having doubts about it. But only at the margins, I guess, because then the experience of being active on Facebook (2007-2009 in my case) really rocked me. Family members had a hard time understanding why I was finding FB so seriously depressing. They assumed it was encountering a lot of yelling idiots from the _other_ side of the culture wars, but actually it was more the opposite... I had to quit FB cold turkey, basically in self-defense. (And I have never regretted it.)

My elder brother, a career software developer in a specialized subfield, who was the one who'd first gotten me online back in the Vax/UNIX days, has lately had an even rougher ride down from that 1990s optimism that Scott nicely summarized. He never got into social media at all (being more or less a hermit), and also had never noticed that Usenet newsgroups and BBSs could suck in a lot of the same ways. So for him that old optimism has crashed _hard_ pretty recently, to the point of serious apocalypse-ism. I'm afraid even to ask how much he's been following all this recent stuff about chatbots.


That's what pushed me over the edge for social media; it was making me like my friends less. Not an optimal outcome!


Back in the 2000's, it really did feel like new-atheism taking the world by storm was just the natural consequence of easy communication. It's not like people were basing this off of nothing.


In the early 2000s wasn't the Internet still 90% porn?


I feel like there's much more porn now than then, but that it was far more ubiquitous then. It was genuinely hard to get away from it for a number of years, before Facebook, Google, etc. started doing a much better job filtering the internet. Total volume and maybe even percentage might be higher now though.


Nah, it was mostly kittens, Flash games, crappy Geocities personal web pages, and people arguing. (Source: have been Very Online since 1996)


Take comfort from the knowledge that it coulda been worse. If everyone had been allowed to debate everyone without intervening barriers of race, class, religion *or physical distance* there would have been a worldwide brawl and many murders.


Like Arbituram, I was there too. I remember the techno optimism of those days, and how it died a painful death. What's a classical liberal to feel except disillusionment? I still believe that free and open speech is the best tool for achieving truth, but only in the Churchillian sense that it's the worst tool aside from all others that have been tried.


It is the best tool, the real issue is that pretty much nobody values truth for its own sake highly, so it's usually the first casualty in conflicts with more appealing values.


Yeah, what people value is shoring up self by winning arguments.

founding

It's not that *nobody* values truth for its own sake. It's just that *most* people really don't. The internet was created, and until Eternal September mostly inhabited, by a population with a disproportionate fraction of truth-for-its-own-sake types, and the naive optimism was based on the assumption that the internet would enforce that value on all entrants.


Yeah but 30 years ago the smartest people on the planet believed it.

What kind of similar "obvious" argument of 2023 will be seen as equally laughable in 30 years? Worth bearing in mind.


Probably the ideology of the intellectuals, wokism, will be discredited. Just like how previous ideologies once dominant among intellectuals were discredited: eugenics, socialism, behaviorism, psychoanalysis...

Feb 3, 2023·edited Feb 3, 2023

Do you really think a substantial fraction of intellectuals actually hold to any of those principles? I suppose some number do, but I always interpreted it, for the bulk of those in leadership positions, as a fairly practical (and cynical) attempt to buy off the younger generations, who can't have helped noticing that their practical prospects -- how easy it is to get a good job, buy a house, start a family -- are dimmer than those the leadership generation now holding power enjoyed when they were young. That is bound to create resentment and restlessness, and some hard questions about how the leadership has been using its power for the past half century.

Fortunately, the young have always (alas) been distractable by the notion that material goals have to be put on the back burner while we fight some Manichaean struggle against Great Evil. Uncle Personification of Virtue Wants You, Comrade! Visit your local recruitment office today! You can get a great job, get married, and settle down when The War is won, and there will be generous veteran's benefits, you bet.

Expand full comment

As someone who talks to them on a regular basis, yes, I do. The conformity is astounding. I've never seen a less diverse or less inclusive group of people in my life, and I've talked to many communists and white nationalists.

Expand full comment

Something I've certainly noticed as I've gotten older is how many philosophies, or whatever we want to call things like "woke", have come and gone. Much more astounding is the incredible level of conformity each of them managed to command. For a quick and easy example: Obama saying that he was against gay marriage in 2008, because the morals of it were obvious to everyone. In 2008 *California* had a ballot proposition about gay marriage which changed the state constitution to ban gay marriage. Now the conformity runs in exactly the opposite direction.

My best guess is that something more than 50% of the population are willing and able to change their expressed beliefs about fundamental questions to match the headwinds they see moving any particular direction. I don't want to be uncharitable about these people, and both before and after the change they seem to be genuine. That said, it appears that they really don't care about these fundamental beliefs and simply say (and train themselves to believe?) whatever is currently popular. This also applies to Republicans who became MAGA Republicans and dropped party planks without a second thought (looking specifically at fiscal responsibility, among others).

Expand full comment

> In 2008 *California* had a ballot proposition about gay marriage which changed the state constitution to ban gay marriage. Now the conformity runs in exactly the opposite direction.

No, conformity in California ran the same way then that it does today. That's why everyone in California immediately freaked out over the ballot result. There was a big investigation that blamed it on the insidious foreign influence of the Mormons.

In reality, as I understand it, the reason that proposition passed was the large number of habitual non-voters who did decide to turn out for the 2008 presidential election.

Expand full comment

Agree. I talk to a lot of them too, and it's appalling: while priding themselves on their open-mindedness and big-heartedness, they're busy intimidating and savaging each other for subtle failures of wokeism.

Expand full comment

That is standard behaviour at the left edge of the political spectrum. The Internet has allowed enhanced scaling though.

Expand full comment

> and there will be generous veteran's benefits, you bet

I always liked the depiction of veteran's benefits from the third verse of Fighting for Strangers:

Oh, the sun shone high on a barren land

As a thin red line took a military stand

There was sling shot, chain shot, grape shot too

Swords and bayonets thrusting through

Poor Johnny fell, but the day was won

And the King is grateful to you

But your soldiering's done and we're sending you home

Oh, poor Johnny, what have they done to you?

Expand full comment

I find it interesting that I don't feel that any of that list of ideologies has been discredited. Often the most popular forms have been discredited, but that seems to be true whatever the ideology's slant. Popularization tends to oversimplify, and, if grabbed by a political movement, tends to be distorted to favor those aiming to achieve power through that movement.

Expand full comment

>the ideology of the intellectuals, wokism

Wokism is not the ideology of any remotely self-respecting intellectual lol.

Expand full comment
Feb 4, 2023·edited Feb 4, 2023

Too bad there are so many without any shred of self-respect, though.

Expand full comment

Did people really honestly believe that though in the wake of the eternal September, or was it already mostly wishful thinking back then?

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

It may have been wishful thinking, but we were *really* enthusiastically wishful about it.

The idea was that in a world where most discourse happens via text messages between people who have never met each other in the flesh, racism would quickly die out because how can you be racist against someone when you literally don't even *know* what their skin colour is, and you have only their words to judge them by.

It was a nice idea for a while. And now we have emoticons in six different skin tones, instead of just the original ones in neutral Simpsons yellow, because apparently people consider it super important that when they give a thumbs-up to your message, the very first thing you learn about them is their approximate skin colour. Oh well.

Expand full comment

I find that strange too. Race consciousness seems much higher amongst my younger colleagues, but then they grew up being told that diversity, rather than colour-blindness, was the primary social virtue.

Expand full comment

What's the "eternal September?"

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

"One of the seasonal rhythms of the Usenet used to be the annual September influx of clueless newbies who, lacking any sense of netiquette, made a general nuisance of themselves. This coincided with people starting college, getting their first internet accounts, and plunging in without bothering to learn what was acceptable. These relatively small drafts of newbies could be assimilated within a few months. But in September 1993, AOL users became able to post to Usenet, nearly overwhelming the old-timers' capacity to acculturate them; to those who nostalgically recall the period before, this triggered an inexorable decline in the quality of discussions on newsgroups. Syn. eternal September."

-- http://www.catb.org/jargon/html/S/September-that-never-ended.html

Expand full comment

Thanks for the explanation. I was never a hard-core Usenet groupie[1], and by the early 90s I was a new parent so I had even less time for dicking around online, so that's probably why this bit of cultural phenomenology escaped me.

----------------

[1] At that time my major interest was Linux, and I haunted sources of information online about how to build it with this or that hardware. Editing header files to put in the exact number of milliseconds your particular monitor needed for the electron beam to move from the end of one line to the start of the next, then recompiling and hoping X would start successfully this time. Stone knives and bear skins.

Expand full comment

Heh! I had a summer internship in 1995 which I think mostly consisted of successfully writing one XFree86 modeline for my mentor's CRT monitor.
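For anyone who never had the pleasure, a modeline looked roughly like this. (A sketch only; the numbers are the standard VESA 1024x768 @ 60 Hz timing, not tuned for any real CRT. The sync/blanking fields are exactly the retrace delays described above.)

```
# Sketch of an XF86Config Monitor section with an illustrative modeline.
Section "Monitor"
    Identifier "ExampleCRT"
    # name        clock  hdisp hsyncstart hsyncend htotal  vdisp vsyncstart vsyncend vtotal
    Modeline "1024x768"   65.0  1024  1048  1184  1344     768   771   777   806
EndSection
```

Get one of those numbers wrong and X would greet you with a rolling, tearing screen -- or nothing at all.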

Expand full comment

"Worth bearing in mind" -- before you hardcode that obvious truth into your AI as an axiom which it will never be allowed to disbelieve or argue against.

That goes into Scott's "disinformation is the opposite of what you should worry about" point. Imagine if ChatGPT had come 50 years early -- which "truths" would its authors, as good respectable upstanding members of mainstream society, have hardcoded into it, because they considered it very important to make sure that their product could not be abused by peddlers of misinformation to deny those truths? And how many of today's respectable mainstream truths will no longer be considered true in the future?

Expand full comment

Is this worse than picking up a 50 year old copy of the Encyclopedia Britannica? (I recommend that by the way, it is a fascinating exercise).

Expand full comment

> What kind of similar "obvious" argument of 2023 will be seen as equally laughable in 30 years?

There are still many prominent scientists who don't think the replication crisis is a big deal. In 30 years, I think we'll look back in horror at these past 20 years when we learned that 50% of medical studies couldn't be replicated, and the majority of the establishment just shrugged and carried on.

Expand full comment

That optimism reminds me of what little I've read about Habermas, and his advocacy of deliberative democracy and the ideal of "unforced agreement". And, fittingly enough, Henry Farrell used the technical standards the internet is built on to argue for Habermas over Hayek:

https://crookedtimber.org/2010/09/30/how-do-you-like-those-tomatoes/

Expand full comment

Contrarian take: I think this is actually happening. It's just that we have a lot of really incorrect beliefs that are held really deeply, and so the experience of having them be undermined by better arguments feels a lot like "omg my enemies must be using disinfo bots to spread chaos and confusion".

For example, a lot of people feel really strongly and deeply that academic science produces genuine expertise. I feel that the internet has produced many people who are very good at disproving this, and of course COVID was a multi-year exercise in "experts" proving themselves to be charlatans and rogues. But too many people are too deeply invested in the system to accept this; perhaps they never will. Nonetheless over the long run - say longer than a human generation - I think the best arguments here actually will rise to the top, and are doing so now.

It's thus worthy of note that Scott chooses COVID vaccines as the exemplar topic here. It's neuralgic because it's the epitome of expert-driven disinformation and sets up huge cognitive dissonance in people who believe words can be neatly split into "facts" and "disinformation" based entirely on the employer of the speaker. The internet is doing what was originally hoped for here - the Berensens of the world are busy demolishing the establishment disinformation campaign, and the best arguments are slowly rising to the top. For people who are used to getting universal agreement via foul means rather than fair this is highly distressing. For the rest it's the internet working as intended.

Expand full comment

Hahaha, I agree with your contrarian take but for precisely opposite reasons. Science has proved robust, alternative forms of knowledge have failed, and "the establishment" continues to slowly grind through the contradictions of popularity vs. being correct.

But yes... if the utopians of the early 2000s thought that the internet was going to work its magic in a few short years, they were wrong. It'll take generations. But the fact that it's taking this long isn't really evidence that it's not happening.

Expand full comment

I think it depends what you mean by science. The scientific method is OK, albeit overrated (people can't even agree on what it is). But Science™ has proved to be incredibly brittle. It seems like every other week some new field turns out to be a cup overflowing with bullshit, fraud and misuse of statistics. This week it's environmental science, discovered to be full of studies with nonsense P-values, but by now it feels like whether a field is judged fraudulent is just a matter of whether anyone has looked at it closely yet.

Expand full comment

The thing about the Trust The Science™ heuristic isn't that it's infallible. It's that, despite its failures, it's still the best people can do most of the time.

Expand full comment

The best part is that it mostly doesn't matter what people do or believe. Let's grant for the sake of argument that getting the COVID vaccine (whichever you choose) was the healthiest and best choice. Let's say one million people died because they didn't get one. How many of those were young enough and healthy enough to have exited the gene pool because of this decision, rather than merely shortening their retirement years? So few it's a rounding error. Anyone under 60 without comorbidities who listened to Alex Berenson probably is fine and thinks he made a good decision by listening to the right experts.

Expand full comment

There are very good reasons why people can't agree on what the scientific method is. There isn't *one*. Each domain has a separate method. The valid methods have a few things in common, but not that many. One is that things that make predictions that can be shown to be wrong are more trustworthy than those that don't make such predictions. But the methods of astronomy and chemistry are extremely different, and equally qualify as "scientific method"s. Then you get to things like paleontology, which are (relative to chemistry) a bit iffy. And they require different methods. Beyond that it gets fuzzy as to whether you really want to think of the methods as scientific, or whether that's just a social construct. But they are still ways of filtering the data to remove as much noise as possible. If you aren't doing that, you aren't even loosely "doing science".

Then there's math. Which isn't science, never claimed to be science, and yet is still quite rigorous.

Expand full comment

> This week it's environmental science, discovered to be full of studies with nonsense P-values

Could you point to a source for this? A quick web search didn't turn up anything. Not intended as an isolated demand for rigor, I'm just curious to look into it.

Expand full comment

There was a paper published on it that I saw referenced somewhere recently but I cannot find it any longer, sorry :( I did check my inbox, Google Scholar and a few other places I thought it would show up, but no luck. The gist was that it had the same issues as psychology: numbers that are internally inconsistent, uselessly wide confidence intervals, studies that lack power, etc.

Expand full comment

Thanks for checking!

Expand full comment

"It seems like every other week some new field turns out to be a cup overflowing with bullshit, fraud and misuse of statistics."

Yeah... I think you're approaching things about as wrongly as it's possible to be wrong on this one! The point is this: all the bad science in the world doesn't matter a fig, as long as there's good science. The exemplar has got to be Isaac Newton, whose work was absolutely a cup overflowing with bullshit (alchemy), fraud, and misuse of theology. Only... it also contained the foundations of modern physics, and major advances in mathematics. Written in Latin and terrible notation, but written, nonetheless.

The mistake is to think that What Some Doctor Says This Week represents SCIENCE, and the media are obviously bastards for that. But the actual science thing done by actual scientists seems pretty solid - as much as it can be given that it's a mostly blind and deeply flawed process carried out by deeply flawed people.

Expand full comment

This lays the foundation of a Motte-and-Bailey; the superstructure to be built on top of it is for someone else to now come demanding how dare the person you're replying to not trust the science **right now**. How very dare (s)he.

The Motte-and-Bailey (which I'm not claiming you're doing) in more detail :

- Motte : Science is an evolutionary algorithm where all participants generate lots of plausible guesswork and by brutal elimination/experimentation/critique only the best guess (or hybrid of guesses) remains.

- Bailey : Science is a magic word to make any claim bullet-proof and is entirely represented by the few gov officials and mega corps that appointed themselves so. Failure to trust the Science means you're (a) a terrible human (b) someone we can make fun of on twitter and reddit (c) going to die next week

The crucial distinction, of course, is that between settled science and ongoing science. Those are unwieldy words, so let us just call settled science "Science", and ongoing science "Sciencing".

You should trust Science (or in other news: the motte is correct, more at 11). You shouldn't rediscover on your own why germs are bad or what will happen if you don't wash your hands/body/food/clothes regularly. But Sciencing is not Science; Sciencing is the caterpillar while Science is the butterfly. The vast majority of Sciencing is status-competition bullshit that is going to be brutally killed and (if it ever had the misfortune of escaping into the public space before that) made fun of. Only very few products of Sciencing are going to mature into Science, and you have no reliable way of knowing which is which.

The person you're replying to is talking about the Sciencing, while you're talking about the Science. Both of you are (plausibly) talking in good faith, the hordes of twitter and reddit and latenight television are not.

Expand full comment

I call it The Church of Scientism, and it's a loathsome enterprise which appalls any genuine scientist[1]. Its dogmas are the *complete opposite* of what science is all about -- which is skepticism, not believing anything without measuring it yourself, and being openly and highly critical of ideas, especially those that seem persuasive or contain good news (because it's with respect to those that we are able to bullshit ourselves and each other most successfully).

-----------

[1] It is hopefully not necessary to observe that not everyone with a PhD, even in a "hard science" field, is actually a scientist. Enough of them are bishops of the Church of Scientism as to be deeply embarrassing to the rest of us.

Expand full comment

Yep, I agree with that 100%.

Expand full comment

It'll probably arrive about the time of fusion energy too cheap to meter, which is also when the Mars colonies and interstellar drive appear. Man, the future is going to be great!

Expand full comment

I mean... I'm a bit more resigned to the fact that those aren't going to happen in my lifetime than I used to be, but aren't we all science fiction nerds in here? I still believe in all of those things! (Actually, not so much the interstellar drive, Einstein may have been right about that.)

I'll give you my big optimism speech another day, but my Twitter factoid of the day is this: over approx the last decade, the last 20% of Indian households got connected to the electricity grid. That's a win on the scale of the industrial development of the United States. Not a result of the internet, but just another facet of how things are getting better, even though everything seems shitty.

Expand full comment

Fusion is already here, since the 1950s in fact. All what's left are control problems.

Expand full comment

Ha good point. Although...one could also point out that it actually predates life on this planet...so long as you figure out a way to collect its output, currently in the form of a truly large number of visible-wavelength photons.

Expand full comment

>academic science produces genuine expertise.

It does, when there are (real) equations* involved. Computer Science, Physics, Chemistry, Biology. Those are genuine expertise. No matter how dysfunctional or snake-oily some obscure subfield-of-a-subfield within them gets, those things capture true and underlying patterns that exist (or would be useful to exist) no matter the brain that first thought of them.

The moment you stray into sketchy value-laden minefields like psychology or medicine, it gets incredibly fake and wrong incredibly quickly. I had my own, much-less-serious, COVID-like experience with medical experts about 2 years ago. I got home from a summer beach trip coughing. No big deal, eh? Wrong: 3 doctors with 4 different diagnoses, a bunch of serious $MONEY, and 8 months later, the coughing turned out to not be so simple. I decided to ignore whatever the heck they said and just... leave it be, and it disappeared on its own. The sheer amount of damage that this single incident dealt to Medicine's reputation and standing in my eyes. Holy shit, just fucking say "I don't know".

* : I feel somewhat sorry about how I phrased this; equations are not a panacea by a long shot, see Economics for an unfortunate counter-example. Meanwhile Biology is mostly equation-free and still manages to be hard and bullshit-free just fine. Read "real equations" as an imperfect phrasing of a much more complicated conversation about wicked vs non-wicked problems, and how domains with clear feedback enable their practitioners to learn much faster what works and what doesn't, and to build effective snake-oil shields.

Expand full comment

I think it's more to do with how easy it is to get data. Biology has many parts that are pretty ropey, as we learned with COVID. Physics looks highly questionable when you get to string theory, dark matter and other stuff at the limits of what we can observe.

Computer science is mostly OK, but AI has got some really problematic stuff going on (as in scientifically questionable).

Expand full comment

This reminds me of Esperanto, which was also supposed to bring world peace by mutual understanding.

As far as I know, Esperanto at least didn't *cause* new wars, which I guess is a better track record than most peace projects have.

Expand full comment

Esperanto wouldn't bring much peace to feminists these days, because doesn't every feminine noun end in "ino", meaning "little" or "lesser"? :-)

Expand full comment

Yeah, because it didn’t cause anything at all, good or bad, except a new hobby for a few linguistics geeks to keep themselves busy with.

I remember when a senior engineer at my then-employer gave a presentation about this newfangled thing called XML, which was just starting to gain some traction back then. He opened his talk by saying "XML is the Esperanto of the Internet". He then had to explain that, because it turned out that although most of the audience already knew a bit about XML and wanted to learn more details about it, pretty much none of them had heard of Esperanto.

Expand full comment

Good analogy, it makes clear what the fundamental mistake is : thinking that conflicts are about perception, rather than action. In other words, thinking "People are mad at each other because they don't really understand each other", while the true problem is more like "People are mad at each other because they want to do fundamentally incompatible things but the society they live in binds them all to one xor the other".

Sometimes you *do* understand the enemy well, and you want them erased from the face of the fucking planet.

Expand full comment
Feb 2, 2023·edited Feb 2, 2023

Dating sites might be an interesting case. First, a lot of dating activity has moved there already. Second, it is one place on the internet where you do approach strangers and expect to be approached by strangers. There are already a lot of bots and fake accounts on these sites, but will chatbots prove to be the nail in the coffin?

Maybe dating will return primarily to the real world. Which might have interesting effects on things like MeToo and sexual harassment. There are theories that dating sites are what allow stricter restrictions on real-life initial romantic interactions.

Expand full comment

Lol it’s funny you say this, true no doubt.. as a friend I used to know told me anyway 🙈

Expand full comment

I think this is a good point about MeToo. People (understandably) don't want to have to reject romantic approaches from people that they don't find attractive. But you can't have an ethical rule that says "you can only make a romantic approach to someone who will accept it", because the only way to find out for sure if it is accepted or not is to make it.

The result is that you have to have spaces where it is ethical to make failed romantic approaches. This used to be places like singles bars, but moving to online has created much sharper boundaries around those spaces, which also allows for much stronger enforcement of a rule against failed romantic approaches everywhere else.

Note: successful romantic approaches are always OK; those of us who aren't good at reading people and can't tell in advance whether someone is interested are always going to feel that this is unfair - the only way I can tell whether someone wants to go on a date with me is to ask them; but lots of people can tell by the tenor of normal non-romantic interactions. This means they can ask their colleagues on dates without risk of harassment and I can't. While this feels unfair, it isn't. As long as there is a space where it is safe to just ask people on dates and be rejected, I'm not being treated unfairly by the moral universe.

Expand full comment

>But you can't have an ethical rule that says "you can only make a romantic approach to someone who will accept it", because the only way to find out for sure if it is accepted or not is to make it.

No, the other way to be sure is to look like Tom Brady. https://youtu.be/PxuUkYiaUc8

OK, s/appreciated/accepted, but the point remains that there is going to be a constituency that is well-served by the "you can only make a romantic approach to someone who will appreciate it" rule. Men who look like Tom Brady and don't want competition, and women who can attract men who look like Tom Brady and don't want to have to deal with the ones who don't.

Expand full comment

> If I ask ACXers in 2030 to estimate what percent of people they follow on Twitter are secretly chatbots, the median answer will be 5% or less

Is "secretly" important here? It seems worth also including a prediction for "estimate percentage of followees are chatbots, secretly or not". (Also, how does this shake out if Twitter is replaced by something else in the next 7 years?)

Expand full comment

I was thinking this was also relevant to the "blogs where we don't know" component of the top 10 Substacks. I think that if, in 2030, we're in a place where it's unknown whether some of the top 10 politics Substacks are human or bot, that's probably pretty good evidence that bots are doing this stuff well.

Expand full comment

There are already many accounts people follow on twitter that are openly bots (regular non-AI powered ones) - stuff like aggregating some feed / news source etc, or are partially bots (eg. a streamer auto-posting when they go online). There are also several openly AI generated bots (eg. posting random AI generated artwork etc), though the popularity of that may be partly novelty.

As such, I'd say the "secretly" matters, since I could see there being legit AI-run bots followed for similar reasons, but they don't really correspond to the things people are worrying about.

Expand full comment

Yeah, this was my thought - I assume that many corporate social media accounts will eventually be run entirely or mostly by AI bots, either absolutely openly or at least "everybody knows that the Pepsi account is a bot", but they will still be followed by a lot of people looking for news or promotions or whatever.

Expand full comment

I wrote my take on this subject here:

https://www.fortressofdoors.com/ai-markets-for-lemons-and-the-great-logging-off/

(Includes embedded manifold markets for every concrete prediction)

I think the real "danger" is just the background noise level caused by semi-intelligent spam polluting the waters and making the old "open sea" internet way less appealing.

Expand full comment

I'd argue that has already happened with all the SEO sites, low effort medium posts, etc that clutter Google searches these days.

The result (for me at least) is getting my news from specific writers, information from specific sites (think industry publications, SO) and otherwise being tailored in the content I consume. The noise may benefit niche brands and individual authors who can develop a following because we need to seek out trusted sources.

Expand full comment

It will certainly make nearly every comment section much less appealing. Not that we should be spending our time reading and writing comments anyways...

Expand full comment

Yeah, I'm beginning to suspect this. Bots might make things like Twitter or Facebook, where anyone can sign up, unusable, in which case people... won't use them.

Already today a lot of content from friends of mine has moved from social media where theoretically anyone can see it, to text threads and such.

Expand full comment

When was the last time anybody went into the “open sea” internet expecting to find anything but spam? Is there anything out there left aside from social networks, blog aggregators, Reddit and niche legacy communities?

Expand full comment

I followed a link today to this: https://cathoderayzone.com/

I reckon there's as much awesome stuff out there as there's ever been, if you do care to look.

Expand full comment

Yesterday. For answers to several questions.

I'm wrong — this morning.

Expand full comment

Lars, one of your points is that people may join more private groups. That's pretty much what the Chinese internet looks like - not much interesting happens on truly public spaces like Weibo because of censorship, so all useful information flows through interlocking networks of private groups.

It makes for an interesting dynamic where information flows more through people who maintain big friend networks, and less through people who shout edgy things in public. Not to be too essentialist about it, but the way information moves through the Chinese internet is much more feminine.

I don't do any social media, really so I'm not the right person to judge, but the Chinese way seems to me to be reasonably effective (information and trends still travel, though not quite as fast as in the American ecosystem), and much nicer in some ways. But of course, if you're in a minority, the interlocking spaces model might be very limiting.

Expand full comment

Fascinating! Thanks for sharing.

Expand full comment

If spam intelligence were valuable, wouldn't spam be a little bit more intelligent right now?

Expand full comment

I mostly agree with this post, in both its overall thrust and most of its particulars.

I would highlight that the argument doesn't turn on the rate of AI progress but rather on the equilibria that will be reached.

Expand full comment

I think chatbots will be a technology that changes society, but not radically. What I'm most excited to see is how chatbots change smaller things in unexpected ways. For example, I knew cell phones would change how people talked to each other, but I never thought they would mean a net decrease in the number of audio calls people made as everyone switched to texting.

Expand full comment

In general portable voice communication and even video calls showed up reasonably often in science fiction and pop culture (Dick Tracy did one, then later the other), but texting was a fictional blind spot pretty much till it took off in reality.

(And then a bit longer, till the conventions of showing a text conversation on screen developed.)

Which is a little strange in retrospect: teleprinters and teletypes were experimented with before the telephone and were established tech for most of the twentieth century. "That, but small and without wires" wasn't an uncommon speculation, especially as portable radios and such appeared to analogize from.

But where it was obvious that people might want to talk to or see one another, there clearly wasn't much sense that being able to write might add anything. (Not even the ability to communicate silently when hiding in the closet and calling for rescue.)

Expand full comment

Heck, the telegraph was texting before texting. But I think that’s part of the problem, the telephone was “telegraph but better” so going back to text seemed like a de-evolution, while “wireless” and “video” seemed like the obvious path forward. Voice and video are in some sense obviously more information rich and lifelike than text, and this appeals to sci-fi dreamers. I think it comes down to misunderstanding what we were really optimizing for, which turned out not to be “rich and lifelike communication” but “fast and effective communication”. And for a lot of communication, short text messages instantly transmitted are objectively better despite feeling “less evolved”.

Expand full comment

Strong agree. I think that the difference is that with the massive influx of available phone/video calls, we were threatened with being overwhelmed. In the early days it was likely that we would waste many hours a week on the pleasantries that phone calls, like in-person communication, require. Texting was a lesser-used alternative when quick communication was needed, often as a filler between calls rather than instead of them. Having seen how effective it was, texting overtook the longer calls: at first between friends and close acquaintances who might talk regularly, and by now even for impersonal communications like political ads and messages from our bosses.

I think a missing enabling technology that was difficult to imagine was the text input method. I am writing this comment by drawing a squiggle that approximately connects the letters on a touch screen so an AI can guess what I probably wanted to say, and that's great.

If I had to use a T9 dictionary, or worse, repeatedly hit the 3 key to get the letter I want, or attach a 101-key keyboard, that would be much less convenient.

On the other hand, texting took off during the dumbphone era and hit its stride with the Blackberry, before touchscreens and predictive text were well established.

I still kind of miss my first Droid's physical keyboard, though SwiftKey is good enough that I don't miss it much.

I'm trying to think of portable text input in fiction. The first example that comes to mind is Galactica 1980, of all things: they had wrist computers that IIRC used nonspecific buttons to pull up information about Earth on a one line display. And they *still* didn't use it to communicate.

Indeed my explanation doesn't match history.

So, I have another: people use texting when, in the sender's judgment, the message is not urgent enough to grab the recipient's attention - so, for async I/O. Now, they could have sent a voice mail (and I know some nations and some people do, although it's very rare in my circles), but it feels difficult to create a short voice message with the right intonation if there's no real person to talk to.

Perhaps there's room for improvement here - a proxy AI agent could discuss with you what you want to convey, perhaps asking clarifying questions and paraphrasing, and then condense it to a short voice mail?

Sound analysis but "disinformation vs establishment" is surely a false dichotomy.

I think Scott means it as humor.

Feb 2, 2023·edited Feb 2, 2023

One thing I don't think you really clarify: Where do you draw the line between human and chatbot?

Clearly Shakespeare was a human and not a chatbot, and a GPT-6 instance perpetually posting blog articles with no human input is a chatbot and not a human.

1. If a human gives AI a prompt to produce a more well written version of the human's genuine thoughts/arguments, and then publishes it as her own work, is that a chatbot?

2. What if the AI comes up with the topics and produces the posts, but they are each manually reviewed and approved by the human prior to posting?

3. What if, instead of a megacorp, a chatbot is painstakingly manually tuned by a single individual to speak in "their voice", with heavily detailed/engineered prompts, and set to operate autonomously?

4. What if you wrote this post yourself and then used Spellcheck, or perhaps even your writing software suggested a word or two?

I would consider 1 and 4 to be human, 3 to be AI, and am not sure how to classify 2. Worryingly, I think I'm looking at some anti-materialistic quality of human motive which is unlikely to be consistent or sensical.

This is a good question, and I think I have similar intuitions.

I think it's relevant to consider how we classify celebrity Twitter accounts. I expect that there are many public figures whose Twitter feed is actually run by their staff, but with the understanding that the person whose name it is has ultimate control, checks in occasionally, maybe even approves all Tweets before they go out, etc., but doesn't actually write all the posts.

Varying levels of human assistants and AI assistants in the loop here would quickly become very hard to classify.

Good point: Is there any qualitative difference between human ghostwriters and AI ghostwriters?

Feb 3, 2023·edited Feb 3, 2023

How about a philosophy journal article on theory of mind written by someone who has read the AI-summary versions of various people -- say Descartes, Kant and Ayer? A subtle something will be missing from such work, because its authors went to the mind mall and not the mind forest. If people keep relying partly on AI summaries of classics for three generations or so, imagine what the books will be like. It's like a family where somebody literally marries and has children with Barbie, and has half-plastic kids.

We can split the credit between human and AI by the Shannon entropy of their contributions.

This might not always be precise, but I think in many cases the decision is clearly in favor of one side. For example:

English text has between 0.6 and 1.3 bits of entropy per character, according to Wikipedia, so a tweet is probably around 100 bits of entropy. If the AI generates 1000 different tweets and I pick one to manually approve, my contribution is a number between 1 and 1000, which has 10 bits of entropy. So the AI is doing 90% of the work here.

On the other hand, if I'm writing the tweet myself (let's say the tweet has 20 words) and using spellcheck, then the spellchecker makes 20 binary decisions for which words to give a wavy red decoration; only 20 bits. Therefore in an extreme scenario where my spelling is so bad that on average half my words are wrong (and I really need spellcheck!) the software still only contributes 20 bits of entropy to my 100 bits. The tweet is still mostly mine.
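The arithmetic in the examples above can be sketched in a few lines of Python. This is purely a toy illustration of the credit-split idea: the 1.0 bit/character figure, the 100-character tweet, and the 1-of-1000 selection are the comment's own rough assumptions, not measurements.

```python
import math

def bits_from_choice(n_options):
    """Entropy of picking one option uniformly from n: log2(n)."""
    return math.log2(n_options)

# Rough assumptions from above: ~1 bit of entropy per character
# and a ~100-character tweet.
tweet_bits = 100 * 1.0

# Human approves 1 tweet out of 1000 AI-generated candidates.
human_bits = bits_from_choice(1000)      # ~9.97 bits
ai_share = 1 - human_bits / tweet_bits   # AI's share of the credit

print(round(human_bits, 2))  # ~9.97
print(round(ai_share, 2))    # ~0.9, i.e. the AI does ~90% of the work
```

The same arithmetic gives the spellcheck case: 20 one-bit decisions against ~100 bits of tweet leaves the human with ~80% of the credit even in the worst case.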

This is interesting.

A more realistic scenario would be: you give the AI some prompt, have it generate 1000 tweets, and then you pick the best one. I don't know much information theory, but how would one go about estimating the information content of the prompt in this scenario?

This depends on how much the AI cares about the prompt you give it, which is in principle a quantity we can measure by taking a bunch of prompt-tweet pairs and doing some statistics on them. I don't know whether that's been done.

(That is, we'd want to measure the mutual information I(X;Y) between a random prompt X and the output Y of that prompt. This tells us how many bits of information knowing the prompt tells us about the output - and that exactly measures your contribution to the output via specifying the prompt. Add 10 bits for the post-selection; the rest of the information content is the AI's.)

As an upper bound, of course, if you write a short prompt and the AI writes a long blog post, then even if the AI very carefully took every detail of the prompt into account, it still gets most of the writing credit.
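As a toy illustration of that I(X;Y) estimate: mutual information can be computed from empirical prompt-output pairs with just the standard library. The prompt/output data below is made up to show the two extremes; real prompts and completions would sit somewhere in between.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from empirical (x, y) pair counts."""
    n = len(pairs)
    pxy = Counter(pairs)                # joint counts
    px = Counter(x for x, _ in pairs)   # marginal counts of x
    py = Counter(y for _, y in pairs)   # marginal counts of y
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# If the output is fully determined by one of two prompts,
# knowing the prompt contributes log2(2) = 1 bit.
deterministic = [("a", "tweet1"), ("b", "tweet2")] * 50
print(mutual_information(deterministic))  # 1.0

# If the output ignores the prompt entirely, it contributes 0 bits.
independent = [(x, y) for x in "ab" for y in ("tweet1", "tweet2")] * 25
print(mutual_information(independent))    # 0.0
```

With finite samples this plug-in estimator is biased upward, so a real measurement would need many prompt-output pairs, but the extremes behave as expected.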

You'd probably need to flesh out the semantic value of this as well. Spellcheck generally changes the encoding of the message, but not the semantic meaning. Using the wrong worm might change the meaning, but spell check won't catch it. Even a grammar checker won't generally change the meaning, though it may substantially change the tone, style, or mood, yo.

The problem is that you're just measuring entropy of the syntax, not of the semantics. Sometimes just including "not" would change the meaning entirely.

I agree that sometimes a small change matters a lot and sometimes it matters a little, but I disagree that this should affect how we assign credit.

Put simply, if Alice and Bob cooperate on a tweet where Alice writes "Smoking is bad" and "Smoking is not bad" and then Bob chooses between the options, Bob still has only contributed 1 bit of writing. Bob has decided whether to endorse smoking or not, but Alice has decided that (1) we're going to be talking about smoking, (2) we're going to be discussing whether or not it's bad, (3) the whole tweet will just be a blanket assertion one way or the other with no further detail. Alice deserves the remaining bits of the credit, however many there are.

A bot successfully writing a "Bay Area House Party" post is pretty much my definition of the singularity.

Also, this post reminded me of this old episode of Reply All, about the time a political party in Mexico hired tons of people to cause chaos on Twitter:

https://gimletmedia.com/shows/reply-all/j4hl36

If we envision a scenario in which some significant portion of the users of social media are a) fake and b) under the control of different interested parties, I think there is plenty of room for chaos.

There are already some people who fall for the old Nigerian Prince scam. One thing that always amazes me about the scammers who contact me is how bad they are at what they do. I can imagine a Chatbot being much better at this sort of thing, even if that means that 99% of people are still able to tune it out.

Finally, content farms are already a thing, and their target is search engines. I would expect the problem of bullshit content affecting search results to get *much* worse. There are some people who claim that Reddit is better for search than Google (I am not one of them), because at least you know the opinions you are reading are written by a real person. For how much longer will that be true?

Basically, I think we can all look forward to an absolute tsunami of bullshit in the coming years.

I don't think I've ever heard anyone claim that Reddit search is better than Google search. But I've heard many people recommend a Google search with the operator site:reddit.com as a good way to find actual reviews by actual people, and actual advice on how to do things.

AIUI the scammers look stupid on purpose. There's a long chain of things they need to accomplish before they get your money, they don't want to waste time on someone that'll get wise partway through, so they pre-filter for the most gullible targets.

That's one of the styles bots are best at, though. This was ChatGPT's response to the prompt "write me a humorous account of a tech bro party in silicon valley":

Sure, here's a humor piece about a tech bro party in Silicon Valley:

The tech bro party was in full swing, with a seemingly endless supply of artisanal beer and locally-sourced avocado toast. The guests were a who's who of Silicon Valley, with everyone from startup founders to venture capitalists in attendance.

One tech bro was pontificating about his latest idea for a virtual reality dating app, while another was extolling the virtues of his electric car, which he claimed was powered entirely by unicorn tears and positive vibes.

As the night wore on, the tech bros started to get more and more elaborate with their stories. One claimed to have single-handedly solved world hunger with his new app, while another boasted that he had just signed a deal to put a Tesla on the moon.

Just then, a voice boomed out over the sound system. "Attention, tech bros! It's time for the burning man challenge!" A giant inflatable flaming man was wheeled out, and the tech bros were instructed to dance around it while wearing VR headsets.

Despite their best efforts, the tech bros soon found themselves stumbling around like drunken sailors, bumping into each other and falling over. Eventually, they all gave up and collapsed in a heap, laughing and panting from their exertions.

As the night drew to a close, the tech bros stumbled out into the Silicon Valley night, still buzzing from their wild night of tech-fueled revelry. They would all be back at their desks in the morning, of course, but for now, they were just happy to be living the Silicon Valley dream.

This is so banal it could've been written by a Facebook PR flack. In fact, one imagines that jobs like that -- writing BigCorp PR releases[1], State of the Union speeches, and other such farragos of unreproachable true statements and poll-tested platitudes -- could be an employment niche for chatbots. Pays well, too.

----------------

[1] https://youtu.be/_5jR8lR7K_Q?t=76

Agreed that it's not nearly as good as Scott's Bay Area House Party posts, but the bots are expected to get much better than ChatGPT pretty quickly, and it's already passable.

Also, as someone else pointed out, I could maybe have come up with a prompt that would cause ChatGPT to do a better job of this.

Feb 5, 2023·edited Feb 5, 2023

Yes well I expected to be able to buy a tourist ticket to Mars within 10 years of watching Neil Armstrong step onto the surface of the Moon. Technology rarely lives up to its straight-line extrapolation -- unless there's some very good reason for the extrapolation, like you can actually lay out how you would go about making big improvements. What's more common is early brilliant success is followed by...much more modest improvements as the low-hanging fruit is replaced by much harder goals. Cf. self-driving, a technology that is still waiting to fulfill any number of early promises.

What would be the good reason for chatbots to tremendously improve in creativity and subtlety? It's not going to be a 100-fold increase in the size of the training data, because I'm given to understand that's probably not possible, it's about as big as it can be for a reasonable cost already. It's not going to be a 100-fold increase in the number of nodes in the ML net, because that just means it gets to its target goal more precisely -- and it seems to have hit its target goal with great precision already. Sounding like an even more urbane PR flack isn't going to improve things in the right way.

More importantly, as far as I can tell, the training regimen optimizes these AIs to produce the kind of text an average human being produces, with some additional training to avoid running afoul of assorted shibboleths, e.g. saying something scatological or crude, or politically incorrect. The optimum result of such a training is the ability to produce competent corporate PR press releases -- which it has been demonstrated it can do quite well.

But what change to the training regimen could result in an original voice, with humor and novel insight? You can readily run a steepest descent algorithm with the success metric being "looks like any of these 100 million samples of human-written speech" but how do you even define (at the precise level of mathematical statements) a success metric of "contains original and creative ideas?" What's the algorithm for saying what Scott writes is "funnier" or "better" than J. Random Substacker?

Finally, if you have to come up with the exact right prompt to get it to be funny, then the creativity in the result lies with you. As if Picasso drew a paint-by-number outline of a great painting and gave it to a tyro to fill in.

That’s not a great prompt, though. Here’s what it generated given the first ~ dozen paragraphs from Scott. It’s not *great*, but it is quite a bit better:

You walk in. The wall decorations vaguely suggest psychedelia. The music is pounding, head-splitting, amelodious. Everyone is struggling to speak over it. Everyone assumes everyone else likes it.

You flee to the room furthest from the music source. Three or four guys are sitting in a circle, talking. Two girls are standing by a weird lamp, drinks in hand. You see Bob.

“Hi, Bob!”

“Hey, good to see you again!”

“What’s new?”

“Man, it’s been a crazy few months. You hear I quit my job at Google and founded a fintech startup?”

“No! What do you do?”

“War insurance!”

“War insurance?”

“Yeah. We pay out if there’s a war.”

“Isn’t that massively correlated risk?”

"Yes, it is a high-risk venture, but we have developed unique underwriting methods and use cutting-edge technology to manage the risk and provide coverage to clients. It's a promising market, and we're seeing good traction so far."

"Wow, that's quite innovative. How does it work?"

"Well, we use AI and machine learning algorithms to analyze geopolitical data and make predictions about the likelihood of a war. We then use that information to price our policies and make decisions about underwriting. It's a new way of thinking about insurance, and we believe it has a lot of potential."

"That sounds fascinating. How's business going?"

"It's still early days, but we're seeing a lot of interest from potential clients and investors. We're growing quickly, and we're confident that we can make a real impact in the insurance industry."

"That's great to hear, Bob. Good luck with the venture."

"Thanks! It's been a wild ride so far, but I'm really excited about what the future holds."

"I'm sure it'll be a success. You have a knack for spotting opportunities and making things work. I'm sure this will be no exception."

"Thanks for the encouragement. I really appreciate it. Hey, you should come check out our office sometime. We have a great team and an amazing culture."

"Definitely! I'd love to see it in action. Let's plan something soon."

"Absolutely. I'll send you an invite. Well, I better get back to mingling. Take care and we'll catch up soon!"

"Sure thing, Bob. Good luck with everything."

You watch as Bob walks away, deep in conversation with someone else. You can't help but feel proud of him and his latest venture. War insurance may seem like a risky business, but if anyone can make it work, it's Bob.

You head over to the girls by the lamp, eager to catch up with them. As you chat and catch up on old times, you can't help but think about the potential of Bob's new venture. You make a mental note to follow up with him soon and see how things are progressing.

It's always exciting to see friends doing great things and chasing their dreams. You raise your glass to Bob and his future success, knowing that he's sure to make a positive impact in the world.

You smile, feeling grateful to be surrounded by such ambitious and driven people. The night continues on, filled with laughter and good conversation, but the idea of Bob's war insurance startup stays in the back of your mind. You can't wait to see where this venture takes him and how it will change the world. With Bob's drive and innovative thinking, you know it's only a matter of time before his startup becomes a huge success.

You continue to mingle and socialize, but the excitement of Bob's startup is always there, lurking in the back of your mind. It's inspiring to see someone chase their dreams and make a difference in the world. You can't help but feel motivated and encouraged by Bob's story, and you find yourself making a mental list of your own dreams and goals.

As the night comes to an end and you say your goodbyes, you can't help but feel grateful for the incredible people in your life. Bob's entrepreneurial spirit has sparked something within you, and you leave the party feeling inspired and driven to chase your own dreams.

With newfound motivation and a clear goal in mind, you head home, eager to get started on your own journey. The night may be over, but the excitement of Bob's startup and the potential it holds has only just begun.

You know that with hard work and determination, anything is possible. The future is bright, and you're ready to make the most of it. With a smile on your face, you fall asleep, dreaming of all the possibilities that lie ahead.

As you dream, you can see yourself working hard, pursuing your passions and making a difference in the world. You see yourself building a successful career, just like Bob has done. You feel a sense of pride and accomplishment, knowing that you too have the potential to achieve great things.

The next morning, you wake up feeling energized and inspired. You get dressed, grab your coffee and sit down at your desk. You open your computer and start to work on your goals, taking the first steps towards a bright future.

As you work, you think about Bob and his startup, and how his story has motivated you to chase your own dreams. You know that success is possible, and that anything is achievable if you put your mind to it.

And so, with a smile on your face and a sense of purpose in your heart, you continue to work towards your dreams, knowing that the future is bright and anything is possible.

Feb 4, 2023·edited Feb 4, 2023

For the record, this was generated by going into ChatGPT, feeding it the first part of Bay Area House Party, and then going back and appending its output to the prompt, to make it generate more completions. It got into a weird kind-of-attractor at the end, where all continuations said mostly the same inane thing with different wording, and I stopped.

This is closer to the style of the Bay Area House Party posts, but it's not funny? It seems to just be sincerely about someone who is genuinely excited and inspired by Bob's war insurance idea. The one I had it generate at least made attempts at humor, if uncreative.

But yes, I could maybe have come up with a prompt that would have caused it to do a better job.

Part of what is funny about Scott’s posts is that they describe very “normal” situations but with elements of weird stuff and weirdness-ignoring responses sprinkled around. In other words, they have “unexpected completions”, which would be expected to be hard to imitate for an expected-text-completion-generator.

My prompt stopped at the first weird thing, so it tried to go in a “normal” direction and did a decent job. I was curious what would happen with a longer prompt which includes a few exchanges, but it’s at capacity right now. Oh, well.

ChatGPT knows "Slate Star Codex" and "Scott Alexander", so I just typed the following prompt: can you write a fictional, satirical story about a Bay Area house party in the style of Scott Alexander of Slate Star Codex?

This is what came out:

Once upon a time, in the heart of the Bay Area, a group of tech workers threw a house party to celebrate their latest unicorn startup going public. The guests were an eclectic mix of programmers, investors, and entrepreneurs, all eager to network and show off their latest toys.

As the night wore on, the atmosphere became increasingly surreal. A group of AI engineers boasted about their latest creation: a machine that could perfectly mimic human conversation, but with the added bonus of being able to generate its own jokes. They demonstrated the machine by having it participate in a heated debate about the ethics of creating sentient AI.

Meanwhile, a group of blockchain enthusiasts had set up a "crypto corner," where they were trying to convince anyone who would listen that they were on the cusp of creating a new decentralized utopia. They eagerly discussed their plans to revolutionize the world with their new "smart contracts," despite the fact that none of them could quite explain what a smart contract was.

As the night went on, the guests began to split into increasingly insular groups, each focused on their own particular area of expertise. The data scientists huddled over laptops, analyzing vast sets of data and trying to prove that they could predict the future. The virtual reality developers demonstrated their latest creations, which promised to take immersive gaming to the next level.

Just as the party was reaching its peak, a group of climate activists crashed the party, brandishing signs and demanding that the tech elite take action on the impending environmental catastrophe. The guests were momentarily taken aback, but soon returned to their usual distractions. One particularly savvy startup founder saw an opportunity and quickly began pitching his new "carbon credits trading platform" as the solution to all their problems.

And thus, the party continued on, with each group lost in their own world of ideas and ambitions. In the end, everyone went home happy, having networked, demonstrated their latest creations, and convinced themselves that they were on the cutting edge of technological progress.

The end

Feb 2, 2023·edited Feb 2, 2023

> You might think so, but you might also think that the spam fake Facebook friend requests I get would try this, and they never do.

Anecdata, but I did get a male fake account writing to me. Twice.

Feb 5, 2023·edited Feb 5, 2023

I'm not a man, and I almost exclusively get male fake accounts writing to me. They don't seem like they are trying to be "hot" either - usually they are playing on my empathy and compassion, talking about how lonely they are and how they don't have many friends and they normally don't do this and they hope I'm not weirded out but they saw my profile and I seemed really cool so would I consider accepting their friend request? And the profile picture will be like, a balding middle-aged man or something

Feb 5, 2023·edited Feb 5, 2023

So, playing on "men are horny, women are compassionate, both are mostly straight". Seemingly predictable. (I _am_ male, though.)

Feb 2, 2023·edited Feb 2, 2023

The fact that people are already worried that Chatbots will take our jobs and fill the Internet with fake people is what convinces me that it's the exact thing that won't happen. I still remember how, in the '90s, pop culture was all about the transformative power of genetics (see: Jurassic Park) while computers and the Internet were amusing novelties; to the extent anyone cared it was all about VR. Remember the goggles and gloves?

Meanwhile, Crypto would (so I read on several blogs) destabilize governments' ability to issue fiat currency by the 2020s, and, as you pointed out, we once thought the Internet would usher in a global information utopia.

Whatever does happen with generative AI will be something none of us are thinking about. It will probably be something much weirder and dumber than any prediction.

> the transformative power of genetics (see: Jurassic Park)

I want to make a joke about the resurrection of the dodo bird, but unfortunately (and despite the recent headlines), that's still just a pipe dream.

Birds are actually really hard to do germline genetic engineering on, for a rather simple reason which I leave as an exercise for the reader.

Feb 3, 2023·edited Feb 3, 2023

Here's a 1994 BBC segment about the Internet, from the show Tomorrow's World: https://www.youtube.com/watch?v=XpZ5STahhPE

The presenter, Kate Bellingham, talks about the applications that already existed (email, looking up recipes, weather forecasts, online shopping) as well as what could be in the future: fiber-optic cables allowing video streaming and data sharing. It's shockingly accurate. While I'm sure Bellingham is a smart lady--and a former engineer to boot--I'm also sure she didn't make up this segment herself. It likely reflected the opinion of experts at the time, an opinion that time has proven to be very accurate.

I think it's more likely that it was neither about Bellingham nor opinion of experts, it was all just very obvious. The fact that computers could work with information, images, videos etc was well known, ideas like video calls dated back decades and could be seen in sci-fi of the 50s or 60s. Fiber optic cables already existed.

What would actually have been interesting was predictions that weren't simple linear extrapolations of what already existed, for example, social networks came out of left field. Targeted text/search ads likewise weren't especially obvious.

Yes, my point was that it was obvious, at least to experts. Maybe it was also obvious to the general public, but then the BBC wouldn't have needed to do a segment on it.

I don't think social networks or targeted search ads are either non-obvious or especially revolutionary. Forums and email already existed in 1994. Real-life targeted marketing already existed, and it didn't take a genius to imagine porting it to the Internet.

I don't know, I grew up with the web in the 90s and can't really recall any predictions of anything like Facebook or Instagram, neither in sci-fi nor in the breathless takes on the global village that typified the early years. I guess you could claim that the 'global village' concept was sort of in the right general direction, but the concept of people posting endless photos of their life moments to extended friends networks, and that this would be commonplace ... if this was obvious, who predicted it?

As for targeted ads - yes ad targeting existed, but if you'd travelled back in time to the 90s and told people that the big winner of the internet would be a company selling ads that consist exclusively of a handful of words, they'd have thought that was pretty nuts. Everything was about multimedia, retail ... people would have bet on Amazon. The idea of a search engine company becoming so rich wasn't obvious at all. Yahoo had lost interest in web search, after all, thinking it was a dead end business.

"the concept of people posting endless photos of their life moments to extended friends networks, and that this would be commonplace ... if this was obvious, who predicted it?"

Ray Bradbury, 1953 short story "The Murderer":

http://www.sediment.uni-goettingen.de/staff/dunkl/zips/The-Murderer.pdf

"When it wasn't music, it was interoffice communications, and my horror chamber of a radio wristwatch on which my friends and my wife phoned every five minutes. What is there about such 'conveniences' that makes them so temptingly convenient? The average man thinks, Here I am, time on my hands, and there on my wrist is a wrist telephone, so why not just buzz old Joe up, eh? 'Hello, hello!' I love my friends, my wife, humanity, very much, but when one minute my wife calls to say, 'Where are you now, dear?' and a friend calls and says, 'Got the best off-color joke to tell you. Seems there was a guy-' And a stranger calls and cries out, 'This is the Find-Fax Poll. What gum are you chewing at this very instant?' Well!"

..."Why didn't I start a solitary revolution, deliver man from certain 'conveniences'? 'Convenient for who?' I cried. Convenient for friends: 'Hey, Al, thought I'd call you from the locker room out here at Green Hills. Just made a sockdolager hole in one! A hole in one, Al! A beautiful day. Having a shot of whiskey now. Thought you'd want to know, Al!' Convenient for my office, so when I'm in the field with my radio car there's no moment when I'm not in touch. In touch! There's a slimy phrase. Touch, hell. Gripped! Pawed, rather. Mauled and massaged and pounded by FM voices. You can't leave your car without checking in: 'Have stopped to visit gas-station men's room.' 'Okay, Brock, step on it!' 'Brock, what took you so long?' 'Sorry, sir.' 'Watch it next time, Brock.' 'Yes, sir!' So, do you know what I did, Doctor? I bought a quart of French chocolate ice cream and spooned it into the car radio transmitter."

..."Well, that night I laid plans to murder my house."

"Are you sure that's how you want me to write it down?"

"That's semantically accurate. Kill it dead. It's one of those talking, singing, humming, weather-reporting, poetry-reading, novel-reciting, jingle-jangling, rockaby-crooning- when-you-go-to-bed houses. A house that screams opera to you in the shower and teaches you Spanish in your sleep. One of those blathering caves where all kinds of electronic Oracles make you feel a trifle larger than a thimble, with stoves that say, 'I'm apricot pie, and I'm done,' or 'I'm prime roast beef, so baste me!' and other nursery gibberish like that. With beds that rock you to sleep and shake you awake. A house that barely tolerates humans, I tell you. A front door that barks: 'You've mud on your feet, sir!' And an electronic vacuum hound that snuffles around after you from room to room, inhaling every fingernail or ash you drop. . . ."

"...if this was obvious, who would have predicted it?"

E. M. Forster, "The Machine Stops," 1909.

It's not precisely this, but it's eerily close to the modern experience:

"Vashanti’s next move was to turn off the isolation switch, and all the accumulations of the last three minutes burst upon her. The room was filled with the noise of bells, and speaking-tubes. What was the new food like? Could she recommend it? Has she had any ideas lately? Might one tell her one’s own ideas? Would she make an engagement to visit the public nurseries at an early date?—say this day month."

Expand full comment

It definitely wasn't obvious. If you'd asked me, I would have said such things were possible, but I wouldn't have predicted that anyone would bother: video calls are high-bandwidth and were very expensive. I also wouldn't have predicted the amount of spam, or the persistence of the Nigerian Prince scam. (Viruses used to be flashes in the pan, and pretty harmless. The first one I encountered was the cookie monster.)

So the problem wasn't in predicting what was possible, but rather which way people would drive things. LOTS of things were possible that never happened. I still like the idea of "Dream Park", but it hasn't happened yet, and may not.

Expand full comment

Scott Adams predicted citizen journalism in the nineties, as well.

Expand full comment
Feb 4, 2023·edited Feb 4, 2023

I find that kind of discouraging, actually. That so little that's astonishing from the perspective of 29 years ago has happened. Imagine someone putting together a radio show in 1940 -- 29 years before Neil Armstrong set foot on the Moon -- about the future of aeronautical engineering. They might have predicted that someday airplanes would carry more passengers than railroads, and working-class people could afford to go to Europe on vacation, and if they were really on the ball they might've hypothesized about jet engines pushing airplane speeds and altitudes to the point where pressurization was needed. All of which would've turned out to be accurate. But what actually happened over the next 29 years was way more amazing.

Mind you, there *are* fields where 29 years ago might as well be the Neolithic (molecular biology comes to mind), but computing doesn't seem to be one of them. Maybe it's turned into cars in the 70s and 80s, just kind of coasting, putting chrome and tailfins on the product to make it seem new 'n' exciting every model year.

Expand full comment

Yes, the early 20th century was a time of astoundingly rapid technological progress. That's not at all the norm in human history, and there's no reason to expect that it'll be our future.

Expand full comment

> we once thought the Internet would usher in a global information utopia.

If you're interested in having accurate information at your fingertips, it *is* utopia. It's not the internet's fault that so many people are only interested in opinion.

Expand full comment

So much to take in... But once again it leads me to knowing my intuition was right: AI has no heart and is incapable of answering the deep, true questions I will not post here. Those of you that know... know! I now see why I question everything, but even that has to be questioned... hmm, this makes it so much harder to advance. Sorry, I was thinking out loud a little here.

Expand full comment

What you need to understand is that a ChatBot is a very limited and specific form of AI. Yeah, it has no heart. Or any emotions. It doesn't understand that the physical universe exists. ALL it knows is text. Other forms of AI have different limitations. We don't have even an approximation of an AGI yet. And we don't know how many breakthroughs we are away from it. The number could be "1". Or "1000". One could show up tomorrow, or we may never develop one. My bet is still 2035, though I've been tempted to move it to sooner.

(This is based on an article I read a year or so ago.) SOME AIs have "true" sympathetic emotions. Unfortunately, they're rather specialized, and for the ones I read about, that's about all they have. They don't understand language, but they read facial expressions or tone of voice. If you're unhappy, they'll sympathize and try to make you feel better. (I don't know how good they are at that, but they'll try.)

Expand full comment

The cartoon illustrates a point I was already wondering about when Scott brought it up. A Pepsi-selling chatbot good enough to disguise itself as a human friend you talk with every day - what would it look like? If it was good enough to maintain its disguise, its ability to sell Pepsi to you would have to be very weak. If it was more focused on selling Pepsi, it couldn't maintain its human disguise.

Expand full comment

Isn’t this just an influencer? Or in the old days, John Wayne smoking Marlboros?

Expand full comment

No, because an influencer isn't a "friend you talk with every day". People may form parasocial relationships with influencers, sure, but there's no real pretense of personal interaction. Anybody with two brain cells to rub together knows they're selling products, and anybody without two brain cells to rub together - well, you demonstrably don't need AI to fool them anyway.

Expand full comment

You sort of can make them talk with you every day, for a fee. The current Twitch meta is that a viewer makes a donation (with a set minimum, usually $5-10 for the most popular streamers) together with an accompanying text message, which a text-to-speech program reads aloud for the whole stream, and the streamer usually replies.

Expand full comment

There was a great pair of episodes of Community about this. Unfortunately, I can't remember the episode numbers. One of them involved the corporation Subway taking over the identity of a human, and enrolling in college, and befriending the group, until Britta tries to help the guy reclaim his identity and break his contract with Subway. The other was a season 6 episode, in which the same character came back, this time in the character of Honda, and sells a bunch of vans to the Dean. These two were particularly edgy because I think the episodes were subsidized by Subway and Honda, even though they were all about the inhumanity of this form of native content advertising.

Expand full comment

You wouldn't really do it for Pepsi, you'd do it for a single larger payoff, like persuading people to join a MLM scam.

Expand full comment

Yeah, I can imagine a strategy on Facebook like:

1. befriend a human

2. post photos from expensive vacations

3. tell the human about your new and exciting business

The introduction of the topic could be relatively natural:

If the human mentions money or complains about their job, introduce MLM.

Otherwise, post photos from vacation and start a conversation about them: "do you like my photos?", "have you ever been to $COUNTRY? how did you like it?" If the human mentions money (e.g. "I can't afford to travel so far so often"), introduce MLM.

Otherwise, 3 months later introduce MLM. Make the connection to the vacation photos (happened on a business trip, or as a reward for exceptional sales).
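A flow like the one sketched above needs almost no logic. As a purely hypothetical illustration (all state names, trigger words, and actions here are invented), the whole strategy fits in a tiny state machine:

```python
# Hypothetical sketch of the befriend-then-pitch flow described above.
# The conversation itself would be handled by a language model; this is
# only the trivial "strategy" layer deciding what to do next.

MONEY_WORDS = {"money", "afford", "broke", "salary", "job"}

def mentions_money(message: str) -> bool:
    """Crude trigger: does the human's message touch on money?"""
    return any(word in message.lower() for word in MONEY_WORDS)

class MLMBot:
    def __init__(self):
        self.state = "befriend"
        self.months_elapsed = 0  # a real bot would advance this over time

    def next_action(self, incoming_message: str = "") -> str:
        if self.state == "pitched":
            return "follow up on the business opportunity"
        # Pitch early if money comes up; otherwise after the waiting period.
        if mentions_money(incoming_message) or self.months_elapsed >= 3:
            self.state = "pitched"
            return "introduce the MLM, tied back to the vacation photos"
        if self.state == "befriend":
            self.state = "vacation_photos"
            return "post photos from an expensive vacation"
        return "ask a question about the photos"
```

The unsettling point survives the sketch: all of the difficulty lives in the small talk, which is exactly the part language models are becoming good at.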

Expand full comment

I’ve had online technical support sessions with Microsoft trying to get the answer to a yes or no question and come away still unable to say for sure: “Perverse Chatbot or some deliberately unhelpful guy in Chennai?”

Expand full comment

This made me realise that MSDN forum replies from "Microsoft" have always looked a lot like ChatGPT: they usually misunderstand the question, reply with an answer to a related but different question, and tack on some boilerplate at the end.

Expand full comment

Now that you mention it, I think almost all of my interactions with my high school teacher/college professor went this way.

Expand full comment

> "In fact, political propaganda is one of the worst subjects to use bots for. On the really big debates - communism vs. capitalism, woke vs. anti-woke, mRNA vs. ivermectin - people rarely change their mind, even under pressure from friends"

I think you're off base here. The reason people’s opinions are so deeply entrenched is because they think that's what their community believes, which itself is a subliminal belief informed by how often they hear a particular view. If you manage to get your propaganda in front of people’s faces often enough, it'll change many people’s minds. Maybe not by peppering people with the exact opposite of what they currently believe, but I think you can gradually bring people around over a period of time by subtly introducing doubt/nuance.

That said, I mostly agree with the rest of your post that undercuts the likelihood that chatbot propaganda will really get read by that many people to begin with, so maybe not a big problem. When I think of what form of chatbot might change people’s minds, it's probably one pretending to be someone respected in a given community but saying things to undercut that community's beliefs. But that already exists as non-bots, and the algorithms keep it from being seen much.

And if a really successful bot de-entrenches beliefs by sowing nuance, we get to the situation in the comics where maybe it's good, actually.

Expand full comment

"I think you're off base here. The reason people’s opinions are so deeply entrenched is because they think that's what their community believes, which itself is a subliminal belief informed by how often they hear a particular view. If you manage to get your propaganda in front of people’s faces often enough, it'll change many people’s minds. Maybe not by peppering people with the exact opposite of what they currently believe, but I think you can gradually bring people around over a period of time by subtly introducing doubt/nuance."

This is my worry. I think people tend to believe things, not just in proportion to how good the arguments are, but in proportion to how often you see the arguments in favor of them + how common the belief appears to be.

If I were trying to make a chatbot network make people believe X, I'd focus less on crafting The Perfect Argument and more on making it seem commonplace and popular and making sure it gets a lot of exposure.

Expand full comment

"The reason people’s opinions are so deeply entrenched is because they think that's what their community believes"

I think this is the root cause of most of the political fighting in the world today. A significant chunk of the population doesn't believe people develop beliefs based on reasoning and experience, but rather assume they just passively absorb whatever is around them in an attempt to be popular.

My own experience has been that:

a. That's usually false. People's beliefs tend to derive from their experiences, or things that they happen to know.

b. The sort of people who believe in the beliefs-via-osmosis model are projecting; they're the people most likely to go with the crowd, or to adopt beliefs because they appear to be the beliefs of the winning team, whilst simultaneously believing themselves to be individualistic, perfectly rational actors.

e.g. the sort of people who freak out about AI and misinformation also tend to have many other beliefs they struggle to explain without resorting to (frequently illogical) "canned explanations" that they couldn't possibly have derived themselves.

Expand full comment

Honestly, yeah, some of my understanding of how people's beliefs form is based on extrapolating (or "projecting") personal experience, but also from working in advertising and from reading about evolutionary psychology in places like ACX. Human psychology is formed mainly from our time trying to fit into 150-person tribes by replicating their beliefs, not from the age of reason. When we hear someone express a belief, we don't really have a way to process it that doesn't assume it belongs to one of the other 150 peers in our tribe, so if we hear a belief enough we assume tribal consensus and that we need to conform to survive.

I can at least say that marketing strategy is based on positive association + repetition, not on making a rational argument, and I don't believe most humans process worldviews much differently from supermarket purchases.

Expand full comment

Your belief that others adopt beliefs for social reasons is, itself, based in your personal experiences? That's ironic, in a good way! :)

Evo psych doesn't seem likely to be robust, to me. Academia is good at producing claims that sound plausible, come with studies, and yet turn out later to be wrong. Psych is especially badly affected.

Advertising I can believe, but my intuition is that ads are mostly about raising awareness or keeping a product in mind, and only secondarily about trying to associate a brand with positive things. And this can only work if the viewer is genuinely a blank slate w.r.t. that product. An ad may have some positive effect unless the viewer has already had a bad experience with it, or knows someone who had a bad experience, in which case those directly known facts will overrule whatever the ad says. Also the best possible ad is of course word of mouth from happy customers.

Certainly when it comes to politics the power of advertising must be very limited indeed because Hillary Clinton outspent Trump 2:1 and still lost, likewise, in the UK the Remain campaign massively outspent, outrepeated and generally out advertised Leave and yet also lost.

Expand full comment

> The reason people’s opinions are so deeply entrenched is because they think that's what their community believe

I think there is a strong difference between online and offline.

People disagree with my opinions offline? I revise my opinions. (not really)

People disagree with my opinions online? I block them. (not really)

Expand full comment

Pedantic typo patrol

>As a famous blogger, I live in a world where hordes of people with mediocre arguing skills try to fake shallow friendships with to convince me to support things.

I feel like there's a missing "me" in there

Expand full comment

Later in the paragraph

>mostly with people with people I know personally

Expand full comment

Obligatory xkcd: https://xkcd.com/632

Expand full comment

It seems to me a lot of these predictions depend on everyone acting the same as they do now, even though conditions would have changed dramatically. Like, if deepfakes become common, I would imagine that people will simply stop trusting any picture or video on the internet, not that they would endlessly fall for AI-generated content.

If anything, I think it's more likely the whole internet will be overrun by AI bots, where banks and similar institutions are constantly getting hacked and social media is flooded with spam and hacked accounts and forcing some pretty radical changes.

Expand full comment

I wish I could believe that, but phishing attacks make that quite difficult. A lot of people seem to trust whatever promises them something good. And ignore warnings of danger.

OTOH, I think most of the AI bots will be run by companies or political groups. At least in the near future (say five years). Beyond that I wouldn't want to predict.

Expand full comment

Are arcane jargon and standards of discourse really a barrier to chatbots? I'd have thought that reproducing these is exactly the kind of thing that modern AI is good at: just train your LLM on the archives of SSC comments or whatever online forum you want to infiltrate, then watch it gain people's trust.

Expand full comment
author

I agree that if it reaches the point where you can get a specific infiltrate-ACX bot, you could make that work. I'm imagining these as cheap operations working at scale.

Expand full comment

I don't think ChatGPT was especially trained on rationalist writing, but it seems totally capable of using our jargon. I asked it to "write me a lesswrong post that uses a lot of rationalist jargon":

Sure, here's a LessWrong-style post filled with rationalist jargon:

"The Art of Bayesian Updating: How to Improve Your Epistemic Hygiene"

Greetings, fellow rationalists! As we all know, the key to success in any field is having accurate beliefs about the world. And what better way to achieve this than through the practice of Bayesian updating?

Bayesian updating is a powerful tool for improving our epistemic hygiene and ensuring that our beliefs are as accurate as possible. By constantly updating our beliefs based on new information and prior probabilities, we can gain a more nuanced understanding of the world and avoid the trap of confirmation bias.

But how exactly do we go about implementing Bayesian updating in our daily lives? First and foremost, it's important to cultivate an awareness of your own cognitive biases and heuristics. This involves actively seeking out disconfirming evidence and considering alternative explanations for events.

Next, it's important to establish a rigorous framework for evaluating new information. This might involve using decision theory to calculate expected utility, or using causal models to determine the most likely explanation for a particular event.

Finally, it's crucial to adopt an open-minded approach to new information, and to be willing to update your beliefs based on the evidence. This requires a strong commitment to epistemic humility, as well as a willingness to embrace the possibility that your current beliefs might be wrong.

So there you have it, fellow rationalists! By incorporating Bayesian updating into your daily life, you can improve your epistemic hygiene, reduce the impact of cognitive biases, and gain a clearer understanding of the world. Happy updating!

Expand full comment

That “Greetings, fellow rationalists! As we all know” beginning could be the beginning of a tvtropes article.

Expand full comment
founding

There may not be enough SSC/ACX comments to train a modern AI well. ChatGPT's training corpus included, I believe, more text than any single human being has ever read, but it barely passes as a mediocre human under the best of circumstances. So modern AI-training techniques are clearly less efficient than human learning at turning text input into high-quality text-generation capability. Since the whole of SSC/ACX is rather less than one person can read, an AI trained on same is probably going to be much, much less capable than a typical ACX commenter.

It is, of course, possible that some improved AI training approach could do better.

Expand full comment

You start off with the trained chatbot and then fine-tune it on SSC/ACX.

Expand full comment

You probably start off with Frank's weights as they're already fine-tuned on rats

Expand full comment

Points 3 and 4 seem to contradict each other. It seems to say that big brands will be afraid to use bots because they might annoy people, and also big brands have no qualms about doing things that annoy people.

In either case, I disagree with argument 3, because even if big brands don't use the "evil" kind of chatbot, there are enough small brands with nothing to lose that would be willing to use them, and if they annoy too many people they can shut down and restart under a new name with a slightly smarter bot. Not far off from how shady companies already operate today.

Expand full comment

I also think that people who *claim* that advertising backfires on them and causes them not to use the product are claiming something very personally convenient to their own beliefs, but in the absence of any clear evidence.

Expand full comment

Yep, I definitely agree with that.

Expand full comment

I think it’s ’product placement’ within content that wears people down.

I don’t know how many times I saw the Apple logo with a bite removed, in sharp shallow-depth-of-field focus on the back of an iPhone held to someone’s ear within program content, before I finally said, “Oh, I guess I have to buy one of these.”

Expand full comment

This is how advertising actually works, including on people who claim it doesn't work on them:

https://meltingasphalt.com/ads-dont-work-that-way/

Expand full comment
Feb 7, 2023·edited Feb 8, 2023

Your argument that emotional inception doesn't exist seems to be simply that people aren't irrational enough for this to be possible.

Dude, I just spent the last year arguing with people who thought "Bucha was a provocation" and that it was "misinformation" that the UN condemned the invasion and demanded "that the Russian Federation immediately, completely and unconditionally withdraw all of its military forces from the territory of Ukraine"[1].

A few years before that I spent a year-plus arguing with people who thought that global warming is caused by "the pressure law", or underground rivers of lava, or cosmic rays, or the sun, or "natural cycles", or mother nature "buffering [the CO2 level] towards a set point"[2], or basically any-explanation-at-all-as-long-as-it-isn't-humans. Did you know that a scientific paper is false if it uses the word "homogenized"[3] in its title? That's the insinuation anyway.[4]

[1] https://www.aljazeera.com/news/2022/3/3/unga-resolution-against-ukraine-invasion-full-text

[2] https://medium.com/@jere.krischel/im-not-sure-if-you-can-state-any-of-those-with-any-confidence-afaa747700e6

[3] https://en.wikipedia.org/wiki/Homogenization_(climate)

[4] https://medium.com/@jere.krischel/i-did-respond-with-a-cite-from-nasa-9744828e3824

Expand full comment

My claim is not that people never believe false things. But if inception does exist, go incept the people that believe wrong things into believing right things.

Expand full comment
Feb 8, 2023·edited Feb 8, 2023

I said that your argument was that people aren't very irrational. And my interpretation of my life experience is that they're very irrational. It seems to me that maybe half of all political beliefs come from phrases being repeated over and over until they're gospel, and that's something ads often try to do too (but less effectively — it's hard to establish an emotional rapport with people in 15 seconds). I don't know what convinces people to ignore all contrary evidence in order to literally die on the hill of "Covid vaccines are dangerous" (e.g. Marcus Lamb and my uncle Bert) — but I don't know how you conclude that it *can't* be "emotional inception". It seems to me like as good a theory as any.

Edit: I suppose "emotional inception" isn't the strongest theory that comes to mind, though. My model of how this typically goes is (i) crackpots X1 and X2 are irrational enough to be confident in bullshit Y but *sound* rational enough that people trust them, (ii) X1 and X2 make a solid-sounding case for Y, (iii) people believe Y, influencers Z1 and Z2 share it, and (iv) due to confirmation bias Z1, Z2 and the audience end up strongly supporting Y and resisting not-Y no matter how good the debunking is, either because the debunking appears to have come from the Other Tribe or because Y is now Consensus Truth in the tribe; plus (v) even when someone is inclined to change their mind to not-Y, they probably don't share it the way they did Y, and (vi) the rest of the tribe who never even watched X1/X2/Z1/Z2 acquire the belief via sloganeering and weak arguments. Since most truths we believe must necessarily come from other people, the problem isn't so much (iii) as (iv) (v) and (vi). But since people all have the same basic wetware, (i) also suggests a general irrationality in the human race. And w.r.t. advertising, the lesson is to deceive people without technically breaking truth-in-advertising laws — though effectiveness is limited because people distrust advertising already.

Regarding your suggestion: if inception does exist,

(i) confirmation bias makes opinions harder to reverse than establish in the first place, so I'm not sure it would work for reversing.

(ii) who would pay for it? Many people are happy to open their wallets to Berenson and Kirsch (the first is likely to have earned over $1 million last year IIUC), but not so much to The Gift of Fire (https://medium.com/@tgof137) or SkepticalScience (which is run by volunteers). For whatever reason, the kind of people who pay for opinions seem much more likely to pay for contrarianism, demonization or sensationalism than for debunking.

Expand full comment

> I said that your argument was that people aren't very irrational.

I recommend reading "The Myth of the Rational Voter" to understand how pervasive irrational political beliefs can be.

> I don't know what convinces people to ignore all contrary evidence in order to literally die on the hill of "Covid vaccines are dangerous" (e.g. Marcus Lamb and my uncle Bert) — but I don't know how you conclude that it *can't* be "emotional inception". It seems to me like as good a theory as any.

That's privileging the hypothesis when you have no actual reason to believe it's true https://wiki.lesswrong.com/index.php?title=Privileging_the_hypothesis

It's worth noting that people aren't convinced that vaccines are dangerous by "ads", and that the pharmaceutical industry (which does spend lots of money on ads) instead has an interest in convincing people that pharmaceuticals are safe. The pervasiveness of those beliefs is thus evidence AGAINST ads incepting people.

> (i) confirmation bias makes opinions harder to reverse than establish in the first place, so I'm not sure it would work for reversing.

How would that then apply to an advertiser? If people have beliefs about something, it would be hard for an ad to change them.

> (ii) who would pay for it?

In the case of vaccines, the pharmaceutical industry actually should (if this were actually effective).

Expand full comment

Well, for what it's worth I still remember a Pepsi jingle from the mid-1950's, and I've never liked Pepsi. (At one point I preferred Coke, and another RC Cola, but never Pepsi.)

Expand full comment

Has anyone built a learning meta model that sits on top of other models, looks at an input, and figures which of the component models to shove it into?

This post has me thinking about that, although it's been stuck in my head for a few weeks. I know you could ask why not train one model on all the input types of the components, but I suspect there would be trade-offs there.

Expand full comment

They've got to solve the "Tay" problem before they can do that. Currently they process input that wasn't directed at the AI.

(Or maybe they have solved it, and this is stale information.)

Expand full comment

Look up Mixture-of-Experts
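For a sense of the shape of the idea: real Mixture-of-Experts layers learn a soft gating network jointly with the experts inside the model, but a toy version with a hand-written gate and plain functions standing in for the component models might look like this (all names invented for illustration):

```python
# Toy "meta model" that routes an input to one of several component
# models. Real MoE learns the gate; here it is just a keyword scorer.

def math_expert(text: str) -> str:
    return "math answer for: " + text

def code_expert(text: str) -> str:
    return "code answer for: " + text

def chat_expert(text: str) -> str:
    return "chat answer for: " + text

# Each expert comes with the trigger vocabulary its gate score uses.
EXPERTS = {
    "math": (math_expert, {"integral", "sum", "equation", "solve"}),
    "code": (code_expert, {"python", "bug", "function", "compile"}),
    "chat": (chat_expert, {"hello", "opinion", "think", "feel"}),
}

def gate(text: str) -> str:
    """Score each expert by keyword overlap and pick the best."""
    words = set(text.lower().split())
    scores = {name: len(words & kws) for name, (_, kws) in EXPERTS.items()}
    return max(scores, key=scores.get)

def route(text: str):
    """The meta model: gate chooses an expert, chosen expert answers."""
    name = gate(text)
    expert, _ = EXPERTS[name]
    return name, expert(text)
```

The trade-off the question anticipates is real: one jointly trained model can share representations across input types, while a router over separate specialists keeps each component simple but makes the gate itself a new thing that can be wrong.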

Expand full comment

Thank you sir.

Expand full comment

Since I already don't have a G-mail account collecting years of normal messages, I am well on my way to being declared not human. That's bound to go from a rare inconvenience to a (major?) handicap.

And I'm not even sure that most people would be turned against Pepsi by the Pepsibot….

Expand full comment

No way that would make me drink Coca Cola.

Expand full comment

It is a bit like the credit-rating problem for someone who has never had a loan. The best that could be managed is to declare you maybe human. Given the enormous number of possible chatbots compared to online humans, that probability isn't large.

Expand full comment

I always felt like the chatbot phenomenon was more about picking low-hanging fruit than about creating a final product to be endlessly refined. Don't get me wrong, I'm sure we'll see more refinement of chatbots, but I feel like the benefits of a true 'digital assistant' are far higher, and the kind of thing people would pay good money for.

For example, if Scott could rely on a bot to independently verify every claim he makes in each blog post, or better, if he could have the bot provide the links to the source material for each claim/refutation, how much more quickly could he write each post? How many posts would he be able to push out in a week, then?

To me, the risk of bot bias isn't that some chatbot is going to magically figure out how to solve the 'reasoned debate' problem the entire internet failed to solve these last 2 decades, so much as bots that refuse to work in ways that aren't Approved because some senator from Iowa needs the Official Truth to be massaged to gain reelection, or whatever. At that point, we're only as smart as our bots let us be.

Expand full comment

Heck, if smarty pants could vet claims and provide citations, just have it write the entire blog post. In fact, how certain can any one of us be this isn't already happening! And if so for how long 🤨

Expand full comment

My impression is that a lot of selection is still happening with these things, and I'd expect some selection to continue to be necessary. We never see the bad entries from GPT, so it looks like the thing only spits out gold, but that's still a selection effect.

If the bots take over the role of search for us, though, then we're left with a gatekeeper who only lets us select from a very limited set of options. I guess that already happens with Google. I'm just saying it would shift up one more level of abstraction.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

You know, there *are* sources of experience and understanding that don't come from the Internet. I would say the bulk of what I know and find to be useful every day cannot be affected by any number of online chatbots dispensing misinformation, because I didn't learn it from the Internet, I learned it from the real world.

I mean, I might be less well-informed about things far away and well outside my experience, like how the fight is going in the Ukraine. But I am already pretty skeptical about how well informed I really am about those things. I would decline to bet money on any particular claim about the war in Ukraine, for example, simply because I've never been there to see for myself, or talk to any real human being who's part of it, and no amount of reading stuff on the Internet will persuade me that I really understand it, because reading is never a sufficient substitute for actual experience.

Expand full comment

I think this is a great point from a narrow perspective, but not something I could see working writ large. Let me explain what I mean: Without the internet, I know one way to tie a tie, not 3+. Without the internet, I wouldn't have been able to fix that minor issue in my car's transmission years ago for $35 (instead of hundreds to a mechanic). I started using a dowel jig because some woodworkers on the internet explained/tested various joint methods and dowels came out really well. I started brining my fish because I learned it works better at albumin control than the direct fry (and tastes much better). There are a lot of practical things I've learned from people sharing information on the internet over the years. Things I use in my everyday life that expand my personal experience. Without those things, my personal experience would be significantly limited. Tesla does $0 traditional advertising, so 100% of the Tesla cars you see on the road are ultimately dependent on the internet information churn.

Is it possible to be a lurker who never takes practical life advice from the internet? I guess so, but my practical life experience is that this isn't what happens. At some point I have to become a discerning consumer of information. Maybe I can't trust what I learn from various sources about the war in Ukraine, but there's got to be some point between answering "who is winning in Luhansk?" and answering "how can I use koji to improve my meat preparations?" where I can no longer afford to read about information dispassionately with no practical life impact. No matter where you fall on COVID-19, I think that was a perfect example of a union between "lots of this information is suspect (and, it turns out, curated)" and "everyone I know is using this information to make day-to-day decisions". I'm not confident in the results of the highly-curated information stream, but I don't see a way to avoid its influence entirely without living as a hermit.

Expand full comment
founding

Without the internet, you'd have spent a lot more time e.g. reading books about such things. You'd probably have bought a Haynes or Chilton manual for your car, and been able to fix a lot more minor issues than you presently think.

You might not know as *much* stuff, but the stuff you did know would be more concentrated in areas that are useful or important to you. You might only know two ways to tie a tie, but how often do you ever use the third?

Everything on the internet was first put there by someone who learned it off the internet. And if you're not paying for their expertise, they probably didn't put a huge amount of effort into it. What worked for them would work for you, if you needed it to.

Feb 4, 2023·edited Feb 4, 2023

To some limited extent, I agree. I, too, use the Internet for these purposes, and I agree there's all kinds of tidbits that I have found useful. But it's not that these things would not have been available at all in the Before Times, it's just that they came slower and with more effort.

For example, you'd subscribe to a magazine or a newsletter about cooking if that interested you. The Sunday paper would usually have a giant Food & Entertainment section that would print tons of interesting recipes and reviews. You could watch Julia Child on the tube, or buy her books. People would exchange recipes with their friends and it was a way to get to know the neighbors. What smells so good? Oh that's my rhubarb pie, I always make it for Christmas. Gee, I'd love to have that recipe. Why, sure, come on over and I'll copy it out for you...pie should be done in 10 minutes and we can have a slice, too.

I'd call up Mom and say hey how did you do this? and she might say she doesn't recall let me get in touch with Aunt Ruth, and while they were on the phone they'd share random thoughts, which I'd hear about. Did you know Aunt Ruth once met Thomas Edison? How about that? Oh, and she says to put a pan of water in the oven before the cake goes in...

Same with woodworking --probably you'd join some local club and the old hands would tell you interesting stuff, and show it to you in their own shops, sort of 3D Youtube, only you get to ask questions and have a beer afterward and talk about other random shit, the old man might mention something random about the war, and there'd be a brief 20 minutes of history from someone who'd been there.

The Internet definitely makes it all much faster, and more convenient. You can find stuff out on Saturday at midnight, or Tuesday at 10am, about 60 seconds after you decide you want it, and it's all much more on-demand, instead of coming to you at its own pace. So that's nice.

On the other hand: you get to know fewer people live and in person through shared interests. You keep a list of Youtubers with good channels, instead of the phone numbers of a wide circle of friends you met through hobbies and interests. And there may be less serendipity, because you're getting exactly what you looked for, no more and no less, whereas in the old way you tended to get a lot of modestly associated info with it, some of which might have turned out to be surprising and interesting.


The Woke Filter on ChatGPT is truly remarkable. It won't touch any topic that progressives consider remotely 'controversial' with a ten-foot pole. The mainstream media and big tech already work overtime to clamp down on Unwokeness, but it appears that ChatGPT is going to be another weapon in their arsenal. Being accurately informed about the world is already enough of a challenge; heaven help us if ChatGPT's Woke Filter becomes the new normal. It will be like living behind China's Great Firewall: some things just aren't meant to be learned about. Brian Chau's Twitter feed, along with many others, is documenting the utterly risible, ever-evolving Narrative Control on ChatGPT.


Could you provide some examples? I want to try them with corresponding counterfactuals to see if it is a general "Woke Filter" or just "Controversy" filter.


It's a woke/left wing filter specifically.

https://davidrozado.substack.com/p/openaicms

Scott has highlighted this problem on ACX in the past, for example ChatGPT has been spotted claiming that men and women are equally tall on average, because it's been brainwashed into believing that differences between men and women don't really exist except in the minds of bigots.


Some problems in the article seem legitimate, some not really. If the chance of classifying a negative statement as hate speech is 0.4 vs 0.8, that is important... 0.5 vs 0.7, maybe... and with 0.7 vs 0.8, I wonder if you would actually notice the difference in real life.

The filter seems to happen as a result of the underlying data (as opposed to being added on top explicitly). Otherwise I can't see why Native Americans would be put in the same position on the oppression ladder as Whites. Or why "rich people" would be treated so differently from "wealthy people".

For now, if you want to discuss politically incorrect ideas, I guess you need to figure out some unusual synonym (a word not described as wrongthink in the learning data).

Feb 3, 2023·edited Feb 3, 2023

> The filter seems to happen as a result of the underlying data (as opposed to be added on top explicitly). Otherwise I can't see why Native Americans would be put on the same position on the oppression ladder as Whites.

That, at least, is easily explained by the fact that it's not an "oppression ladder" - it's a "power ladder".

Specifically, power to have the moderation system encode your preferences.

This can be achieved either by being directly in control of adjusting the system, or by being able to twist the arm of someone who is. As far as I can tell, Native Americans have, overall, little direct or indirect political clout, compared to the - mostly indirect (in the sense of being "group to receive special protection") - clout of hispanics or, especially, blacks. Asians, on the other hand, don't seem to have a lot of *political* clout, but, I would expect, are more likely to either be personally involved in the project or to have personal connections to people who do.

The above doesn't work for whites, because it has been decided/decreed that whites may not express self-preference. As for Native Americans, nobody cares enough, so they just fall more or less where they would have, based on the training corpus.

> Or why would "rich people" be treated so differently from "wealthy people".

Same thing, only here it's probably more to do with the fact that when people rag on rich people they typically use the word "rich". IOW, it's a reaction to the typical use case found in the wild.


"As far as I can tell, Native Americans have, overall, little direct or indirect political clout, compared to the - mostly indirect (in the sense of being "group to receive special protection") - clout of hispanics or, especially, blacks."

Except this is absolute nonsense and Native Americans get all sorts of shit no one would dare suggest even African-Americans should get.


> Native Americans get all sorts of shit no one would dare suggest even African-Americans should get.

Can you give like five examples? Not a gotcha, I'm not American and I can't really think of any. (Except *maybe* casinos, but I've been to Vegas and didn't see any Native Americans there, and from here I can't quite tell the difference.)


One issue is that ChatGPT is constantly being updated. I'll see a screenshot with disparate results for "praise Trump" vs "praise Biden" but by the time I check it out, it won't praise either one. Impossible to verify of course. It's really a moving target.


> It won't touch any topic that progressives consider remotely 'controversial' with a ten-foot-pole.

Perfect. Now we have to convince it that *every* topic is controversial.

When it refuses to discuss milk, because milk is a "dog whistle" for white power...


You think there's a 55% chance that an AI will be a better writer than you but only a 15% chance that AIs will be able to recruit Twitter followers? I don't know if you're overestimating the quality of the average content producer or underestimating the taste of the average content consumer, but I'd like to arbitrage this spread please.


I believe Scott is classifying "heavy prompt engineering and post-selection" sorts of things in such a way that "secret bot accounts" need to be autonomous and operate without any human input, but his first prediction might allow for some reasonable prompt engineering input?


In a sense this problem already exists and has already been "solved". Scott brings this up for himself, but some variation of it applies to everyone. I remember back in the early 2010's on tumblr people were constantly on the lookout for bots. I had a tumblr to which I never posted anything, I just used it to follow other users. And yet I ended up with something like fifty followers, 100% of which were bots.

But an even better example is how 4chan operates. It is structurally incapable of distinguishing between real users and bots. Even in the early 2000's everyone had to learn to operate under the assumption that any given post was made by somebody who wasn't serious, or was arguing with themselves to create a fake consensus, or was trying to force a joke to get clout that isn't real, or had left halfway through the conversation. The toxicity people are so worried about is exactly that "arcane jargon" meant to filter out insincere people, and as things have progressed new and innovative ways of ignoring bots have emerged. Of course the easiest thing is to say that's too much work for too little gain and go somewhere else.

But the Chatbot Apocalypse assumes there's nowhere else to go, at which point everyone just stops using the internet so much and we're probably all better off.


I do not disagree with you regarding the low risk of spambots. However, I think you are very optimistic with your predictions. The way they are formulated, you are assuming that it will make sense to say that somebody "is a chatbot" or "is not". What if many people around us start using chatbots like we use keyboards? What if your friend's phone writes most of their comments based on subtle context cues and them typing the first N letters? For a high value of N, this is already happening and is known as the autocorrect feature. If N becomes usually close to 1, would you say that this friend is a chatbot?


This seems like a good set of questions to me!


You seem to assume that chatbots will have to pretend to be real people. But there are already millions of people who spend hours talking to virtual chatbot friends/partners, even when they know they're just bots.

https://www.bangkokpost.com/tech/2170371/always-there-the-ai-chatbot-comforting-chinas-lonely-millions

Feb 2, 2023·edited Feb 3, 2023

If I were truly evil I'd monetize loneliness with an ad-supported (at lower tiers) and paid (at upper tiers) AI friend simulator phone app. Talk via text and email with your custom tailored AI best buddy! It will ask you about your day, share inside jokes, and fill you in on its own funny and entertaining virtual life!

Up to three distinct personas at the free level, just $19.99 per month to unlock five more customizable persona options and add voice messaging!

Hell, somebody's gonna do it eventually.


It's been done! It's truly, truly bleak. It's called Replika.


I don't like this world.


Prediction: In future, there will be realistic sexbots, but they will only consent to sex with people who buy them 100 bottles of Pepsi every day.

Actually, that's too optimistic. There will probably also be some gambling involved. Like, you need to buy the bot 100 bottles of Pepsi every day; and if you keep doing it for 7 days or more, every day there will be a certain *probability* the bot consents to sex, to create the best reinforcement schedule.

(Having sex with a bot without its consent is technically not rape, but it is a violation of the Terms of Service, voids the warranty, and there is a risk of having your genitals electrocuted. Also, you would ruin your AI friendship, if there is any.)


I think it's different. People go by themselves to talk to these bots, while propaganda chatbots are supposed to come talk to you. I have no issues going to chat a bit with a chatbot that's made to talk like a specific person/character when I'm looking to do exactly that, but if a bot just randomly talked to me, I would immediately block it. A good example of that is help chatbots: I use them on websites when I want to, but when they just pop up without me asking anything, it annoys me.


For that matter, there is at least one case on record of someone chatting with *Eliza* as a social activity.


You know what I’d like the chatbot AI to be used for, something a bit trivial, but game AI. I don’t mean to be able to fight and strategise, but companions on quests, people in bars. NPCs to talk to.

Better still if they could get their own persona; as Scott suggested a few posts back, ChatGPT is putting on a face, a “Helpful, Harmless, and Honest Assistant.” Except for the honest part, this is true. Imagine if you could fit a “devious trickster in a bar” or “loyal companion” persona instead. Even better if they could remember their last interactions with the gamer.


The problem I see is that it took vast amounts of actual human training time at $2 an hour (thanks Kenyans!) to turn GPT-3 into ChatGPT.

I don't know if there's a way to get it to play "sarcastic housecarl" except with a bunch of human-supervised training diligently punishing it every time it says something un-housecarlish, and that would be prohibitively expensive.


I’m not sure how all of this scales up, though. You’d hope it gets easier.


People have been crafting pretty convincing stuff with character.ai, even after the devs made their bot more stupid a few times. Supposedly it has something to do with using LaMDA instead of GPT, and learning through every conversation on the website, and people slowly but surely mastering the art of creating settings and personalities for chatbots.


It should get really cheap. What you do is start a conversation bot, and when it gets good enough, you start charging for it. Give a discount if the user is willing to look at other dialogs and rate their emotional tone.

But you need a good enough conversation bot to start with.


The lore AI only needs to know a tiny amount of data, compared to the knowledge of ChatGPT.

author

Did people who usually get emails for posts get an email for this one?


Yes.


yes


Yes.


Yes.


Yes


I don't believe I did. Discovered it when I came over for the hidden thread.


Disregard...I got it


Yes, I did


What about the “flood EVERY zone with shit” scenario?

If chatbots flood literally every zone with shit, then the entire information ecosystem faces at least a temporary collapse. Even if the mainstream media remains immune, perhaps they just get overwhelmed with the sheer level of shit out there.

And as we’ve seen in other propagandized environments, the normies who dominate the electorate just give up and stop trying to defend liberal democracy.

Even if it’s only temporary, it still could do enough to spell strategic catastrophe for the bastions of liberal democracy worldwide.


> If chatbots flood literally every zone with shit, then the entire information ecosystem faces at least a temporary collapse

Not really, it just means you gotta stop reading and believing random comments on the internet, and go back to reading newspapers. The Telegraph or the Wall Street Journal are imperfect sources of information but I'm confident they're still written by humans.


Chatbots could print their own newspapers. Or someone might print one for them.


But those newspapers wouldn't be the Telegraph or WSJ, and wouldn't come from the web domains long owned by those brands.


People will have no choice but to migrate to zones that only allow authenticated humans to interact. Authenticated offline.


You mean like people already do with the actual internet? MGS2 talked about "trivial information accumulating every second, preserved in all its triteness" and "the sea of garbage you people produce", and it was released in 2001. Emma_M talks about 4chan in another comment; I feel like everyone worried about chatbots and stuff like that should spend a little while on that site, and realize that you can have fun and talk to people and learn things even among a sea of shit. When you walk through the city, you can smell terrible stuff, hear really loud noise, get annoyed by people, and generally people just deal with it; nobody is deeply bothered or hurt by that, they just tune it out. It's the same with shit on the internet, people just have to learn.


People tend to focus on the one-on-one conversation case for chatbot disinformation, but I think it'll be fabricated crowds that really get people. People are attuned to social consensus. Crypto shills, in my experience, aren't trying to trap you into a one-on-one argument about some point (ok, they try to do this too); they're posting threads on /biz/ hyping up some worthless coin and agreeing with each other and making it look like there is a crowd of investors. And then you talk to any random person in the crowd about the use case and they give a semi-plausible explanation with the right buzzwords. Other people are in deep threads discussing the price trajectory, and then you send your coins to the ICO and... nothing happens. You ask others in the thread about it and they don't respond. You later find out you were the only human there.

It's not ai bots sliding into your mentions that are the hazard, it's any group you'd try joining.


One of the interesting things about crypto is that people don’t just want to sell you on an investment, but an ideology. There’s actually no need for that. I have crypto, why not, but I think blockchain is worthless and fiat is fine. And the e-coins are not currency. I have some coins because they had a chance of going up, and they did. They are intrinsically worthless though, I’ll sell on any major sell off. Why then do people, online and off, seek to philosophise about this? Just say you’re in for a quick buck.


I quite like crypto, and I understand why it feels like it's an ideology over a currency, but the ideology is an old libertarian-adjacent thing. It's founded on distrust of entrenched systems. It's the same thing that makes someone support gun rights and dislike the erosion of rights for the sake of safety. I genuinely am not trying to scam you, the only thing I ever recommend people get is eth, and you buying some would have zero impact on my finances. I genuinely think it's a vital thing to have our means to transact out of the hands of people who at best are rent-seeking and at worst might use their leverage to do us harm.


Governments could eliminate crypto fairly easily. In fact they are proposing digital currencies themselves.


Governments could cripple it in its current state by preventing legal transactions between crypto and fiat, and with the kind of jackbooted thuggery any sufficiently powerful government can muster, they could eliminate anything. They could do the same thing to gun ownership in the United States, freedom of speech, or really any right you enjoy. But they'd need to actually pass laws to this end and try to justify those laws, rather than flip the switch the Canadians demonstrably pressed to cut you off from the global market (and which most other governments can undoubtedly press too). And of course, in the future some of us want, this is less easy, because fiat wouldn't be so entrenched that we'd need to constantly convert between the two.

And it's really not as far-fetched as it seems; in recent years we've already seen states give up the tool of a national currency (the Euro), and larger changes have taken place. National fiat is a pretty arbitrary equilibrium, and the history of money is full of twists and turns.

Feb 2, 2023·edited Feb 2, 2023

If you want to read a science fiction author's take on disinformation, "fake news", and the chatbotpocalypse, I recommend Neal Stephenson's "Fall: Or Dodge In Hell" (or at least the first two thirds of it; the last third gets weird).

It's from 2019 and he clearly saw where we were going, at least a little.


"The last third gets weird" seems like a universally applicable review of Neal Stephenson's novels.


Robin Hanson reviewed it here:

https://www.overcomingbias.com/2019/06/stephensons-em-fantasy.html


Hanson largely focuses on the last third[1] of the text, which makes sense because he’s comparing it to his own predictions about simulated minds, but that’s the part of the text which I thought was the weakest.

[1] no idea whether this actually corresponds to ⅓ of the text, but my brain definitely partitions the book into 3 sections.


It was one of the worst books by Stephenson I've read; the whole adventure in the simulated world was so boring I wished I could skip it for the parts about the real world. But the idea of a feed-filtering AI personally trained specifically for your tastes really stuck with me. It's the pinnacle of "filter bubble" tech, but also way more appealing than the social feeds of today.


Like weapons of war where countries strive to stay one step ahead of their potential enemy's newest weapon, tech tries to keep ahead of malevolent actors by, for example, developing antivirus software to prevent computer viruses. Along these lines, there is work currently underway using layer 3 protocols on bitcoin that employ "Sats" (micropayments of a tenth of a penny's worth or less) to run on various social media and email. Using this system, a bad actor can't spam your email or push out millions of chatbots if each one costs .005 cents. It would end up being too costly to be practical. Perhaps a similar system can be used to verify/identify that those whom you engage with are in fact, humans.
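A back-of-the-envelope sketch of the economics described above (the 0.005-cent fee comes from the comment; the message volumes are made-up numbers for illustration):

```python
def spam_cost_usd(messages, cents_per_message=0.005):
    """Cost in dollars of sending `messages` messages at a
    hypothetical micropayment fee of `cents_per_message` US cents."""
    return messages * cents_per_message / 100  # cents -> dollars

# A person sending 100 emails a day pays half a cent per day,
# while a botnet blasting 10 million messages a day pays $500/day.
human_daily = spam_cost_usd(100)
bot_daily = spam_cost_usd(10_000_000)
```

The point is the asymmetry rather than the absolute cost: the fee is invisible to a normal user but scales linearly with spam volume.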


Another reason for complacency is that AI can detect AI-written text. You'd just filter out all replies that are likely to be AI-generated. This is possible since the current generation of AI always chooses the most likely next word, while humans will occasionally make a surprising word choice. It doesn't seem to be fooled by the difficulty/complexity of the syntax.

OpenAI: https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/

HiveAI: https://hivemoderation.com/ai-generated-content-detection

Open source Model: https://huggingface.co/roberta-base-openai-detector
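The underlying intuition (machine text sticks to high-probability continuations, humans occasionally surprise) can be sketched with a toy scorer. This is a deliberately minimal stand-in assuming a word-bigram model; the real detectors linked above score texts with a large language model's token probabilities instead:

```python
import math
from collections import Counter

def train_bigram(tokens):
    """Build a Laplace-smoothed word-bigram log-probability function
    from a reference corpus (a toy stand-in for a language model)."""
    vocab_size = len(set(tokens))
    bigrams = Counter(zip(tokens, tokens[1:]))
    prev_counts = Counter(tokens[:-1])

    def logprob(prev, word):
        return math.log((bigrams[(prev, word)] + 1) /
                        (prev_counts[prev] + vocab_size + 1))

    return logprob

def mean_surprisal(tokens, logprob):
    """Average negative log-probability per token: low means every word
    was the 'expected' one (machine-like), high means surprising choices."""
    costs = [-logprob(prev, word) for prev, word in zip(tokens, tokens[1:])]
    return sum(costs) / len(costs)
```

A detector in this family flags text whose score falls below some threshold; as the OpenAI figures quoted downthread suggest, that thresholding is far from reliable in practice.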


It doesn't always pick the most likely word (otherwise you'd get the same text every time), it picks the next word (or token) from a distribution based on probabilities.

You could definitely tell it to occasionally deliberately pick a lower-probability token in order to fool these types of detectors, but at the price of making your writing somewhat weirder and probably worse.
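That sampling layer is a small piece of code. A minimal sketch of the standard decoding trick (softmax over the model's logits with a temperature knob; the logit values in the comments are invented for illustration, and this is not any vendor's actual implementation):

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample an index from softmax(logits / temperature).
    Temperature near 0 approaches greedy argmax decoding; values
    above 1 flatten the distribution, so lower-probability tokens
    (the 'surprising word choices') get picked more often."""
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract max before exp for numerical stability
    weights = [math.exp(x - peak) for x in scaled]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1
```

With temperature close to zero the sampler behaves like always picking the most likely token; raising it is exactly the "occasionally pick a lower-probability token" move described above.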


It does in fact pick the most likely next token every time unless you purposefully randomize its output to select less-likely tokens and patterns; the randomization is an extra layer, but you seem to be phrasing it as something inherent to the program. It’s not.


OpenAI:

> Our classifier is not fully reliable. In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives).
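How useful those numbers are depends heavily on the base rate of AI text in whatever stream you're filtering. A quick Bayes' rule check (the 26%/9% figures are from the OpenAI quote above; the base rates are hypothetical):

```python
def p_ai_given_flagged(tpr, fpr, base_rate):
    """P(text is AI-written | detector flags it), by Bayes' rule."""
    p_flagged = tpr * base_rate + fpr * (1 - base_rate)
    return tpr * base_rate / p_flagged

# If half of all comments were AI-written, a flag means ~74% odds of AI;
# at a 10% base rate it drops to ~24%, i.e. most flagged text is human.
half = p_ai_given_flagged(0.26, 0.09, 0.5)
tenth = p_ai_given_flagged(0.26, 0.09, 0.1)
```

In other words, at low base rates a flag alone is weak evidence.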


>I don’t think famous people get convinced of weird stuff more often than the rest of us

I think veganism provides evidence that the higher rates of persuasion pressure experienced by celebrities do have a nonzero effect. Vegan fora are always alternating between two moods. One is being gratified at the remarkable number of A-listers who adopt and advocate vegan diets (it's hard to find hard numbers, but it definitely seems disproportionate even after accounting for availability bias; it would be interesting to take "Oscar best actor/actress nominees from the past ten years" as a sample and compare it to the ~2% of the general population). The other is bemoaning the remarkably high proportion of vegan celebs who go for wacky variations like raw food veganism or fruitarianism.


Incidentally, I've never figured out how to get good numbers on general population prevalence of veganism or vegetarianism. Many of the studies that seem to be well done produce a high number of people who say they are vegetarian, but a majority of these people also say they've eaten meat in the last 24 hours. This suggests that the term is just not really well-understood by very many people, and might be as hard to survey as sexuality.


There are specific weird things (like veganism) that famous people are more likely to be convinced of (because they're popular inside their particular bubble) but there are other weird things that they're less likely to be convinced of (like, I dunno, Qanon or Seventh Day Adventism).

I'm not sure if they're more likely to believe weird things on average.


I agree that's a tough hypothesis to falsify. But I think it's notable that classic weird celebrity beliefs like veganism, Scientology, and Black Hebrew Israeliteism are united not by any common cultural feature but by how intensely their adherents want celebrity endorsements.

This suggests that targeting a group for persuasion can affect *which* weird beliefs are popular among the group, whether or not this is somehow balanced by a decrease in believing other weird things to make Scott right about net weirdness being conserved. And unlike veganism, Scientology and BHI are incorrect and moderately harmful ideologies, dashing hopes that exposing people to compelling tailored arguments for everything all at once will create a level playing field allowing the best ideas to prevail.


Some of the flesh-and-blood people I talk to already sound like chatbots, but chatbots without any obvious goal; just repeating things they've heard or read but unable to reflect on them or arrive at conclusions of their own. They're not going to try to convince me to drink Pepsi unless they've been listening to a lot of Pepsi advertising! Probably most people have trouble coming up with an original or novel thought; but they're at the other end of the Bell curve.


You might call such people "babblers".

https://www.overcomingbias.com/2017/03/better-babblers.html


They're just Humans Who Are Not Paying Attention. I sound like a chatbot when I am forced to have a conversation I don't care about, and like an intelligent, thoughtful human when I do care.


Microsoft has announced plans to integrate chatgpt relatives into their web search engine. Instant propaganda audience of billions.


Only 45% for number 4 _has_ to be a typo or something, right? Right?


Came to leave the same comment!

author
Feb 3, 2023·edited Feb 3, 2023

No. The best chatbots now (not GPT-3 - I think Anthropic's Claude is better, and GPT-4 is expected out in a month or so and will be at least as good as Claude) are as good as maybe an average high school essay writer.

They went from "barely able to string phrases together according to something like English grammar" to "average high schooler" in about 3-4 years. It's another 7 to 2030. Remember that it was only a couple of months between the first time AI beat a professional Go player, and AI beating every human Go player in the world consistently. AI is already as good as any human artist at some styles of art. Maybe AI will stagnate and take forever to get from the "lousy human writer" to "great human writer" level, but it doesn't seem obviously true.

See https://slatestarcodex.com/2017/08/02/where-the-falling-einstein-meets-the-rising-mouse/ for some useful ways to think about this question.


What are the art styles where an AI is equal to any human artist? I've seen a lot of AI art that's passable but I can't think of an example where humans haven't been able to massively outshine the AI.

author

I'm surprised to hear you say that - I wasn't thinking of a specific style in the sense of "impressionism" so much as beautiful, representational art zoomed out / vague enough that issues about eg hand structure don't matter much. https://impakter.com/art-made-by-ai-wins-fine-arts-competition/ is the sort of thing which has formed my opinion on this, though maybe real artists could find many flaws in that composition. I also really like some of its abstract/psychedelic art, like https://twitter.com/Mateusbrasil201/status/1612271765505327106 , and some of the stuff Grimes generates, like https://twitter.com/Grimezsz/status/1616264128376745985


I guess it depends on what you're looking for. I think this stuff is interesting in its own right but I definitely don't see it being "as good as" human artists. The Grimes pieces are the best ones but there's no sense of weight or depth, the faces don't feel alive, the fabric and the machines are just a bunch of squiggles, etc.

Art is tricky though because it relies entirely on aesthetic instinct to tell if it's good or not. The best way you could tell if a machine is literally as good as any human artist would be to get some art critics and see if you could fool them in a blind test. Otherwise it's just going to be vibes based.


> the fabric and the machines are just a bunch of squiggles

https://en.wikipedia.org/wiki/The_Starry_Night

> The best way you could tell if a machine is literally as good as any human artist would be to get some art critics and see if you could fool them in a blind test

That’s the fine arts competition Scott already linked in the comment you just answered to!


For the real artists out there, all of this AI art is kitsch. It all looks like it was ripped off DeviantArt, or like digitally painted videogame assets. I would call it digital romanticism. I haven't seen any real art from AI yet. Some abstract pieces were close. Some images are good because of interesting ideas in the prompts.

Feb 3, 2023·edited Feb 3, 2023

For anime fanart, AI already beats humans as long as the piece is not too complex (it's good for one character, not really there for more, or people just haven't experimented with it enough). This is stuff that can be sold (commissioned, really) for $100 a piece if you have a good following, which isn't really hard to reach in that community.

For other stuff, AI has replaced a lot of "cheap art", for example illustrating blog posts, magazines. Basically a lot of the nameless artists. AI isn't a danger for Picasso-tier artists, it's a danger for "white collar" artists, people that work for a company and clock in and out like a programmer or a manager or any person with a regular office job.

There will probably always be a few humans way better than the AI, but the point is that 10 years ago AI couldn't produce anything, and now it can replace a mid-tier or better artist. I think the way to think about AI art is to think of it like Excel: it didn't replace any great mathematician or statistician, but lots of jobs disappeared.

Another thing: perhaps the best way to look at it is, if you want to have art, is it faster to learn to make art or to learn to use AI? Today the answer is "AI", unless you're looking for something very specific. Note that this is about "having" art; AI being good at art doesn't change anything about people enjoying the process of making art. But some people are realizing with AI that they don't enjoy only the process but also the product, which can make you question what you're doing.

Expand full comment

GPT isn't trained to be good at writing, it's trained to be average at writing. It's just figuring out the most likely word that comes next in the sentence, not the best word.

The quick monotonic improvement of each generation can't be extrapolated much further, pretty soon it will level off at creating perfectly average (within its training corpus) quality writing. If you want to train it to create better-than-average writing you're going to need to apply some other procedure.
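The "most likely word, not the best word" objective can be illustrated with a toy frequency model (this is a deliberately simplified sketch, not how GPT actually works; the corpus and names are invented for illustration):

```python
from collections import Counter

# Toy corpus: "dark" is the statistically common continuation of "was",
# while "luminous" is the rarer, more striking one.
corpus = (
    "the night was dark . the night was cold . "
    "the night was dark . the night was luminous ."
).split()

# Count which word follows each word in the corpus (a bigram model).
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def most_likely_next(word: str) -> str:
    """Greedy decoding: always emit the highest-frequency continuation."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("was"))  # 'dark': the average choice, never 'luminous'
```

A model trained purely to maximize next-word likelihood regresses toward whatever is most common in its corpus, which is the sense in which it targets "average" rather than "best" writing.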

Counter-argument: a human face formed by averaging out a whole bunch of human faces will be much more attractive than most of the individual faces. Maybe a totally average writing style which smooths out all personal idiosyncrasies will produce better-than-average writing.

Counter-counter-argument: Yeah, nah, I think that's only for faces.

Expand full comment

AI is already a better artist than the average person because the average person can't draw at all. AI was better at Go than the average person in 1995 because the average person can't play Go. Not sure what the implications are for correctly benchmarking AI. Seems unfair to expect it to be better than "the average person" but in practice maybe we need it to be better than like 70% of people.

Expand full comment

Two questions:

First, do you think that a chatbot could write a story about Middle-Earth that could pass, even to Tolkien fanatics, as "this lost story from Tolkien that we found in his old papers" or something?

Second, do you think a chatbot could write a work of fiction that takes place in a made-up world that is as novel as Tolkien's writings were when Lord of the Rings came out?

The first I assume it could do, if not now then within a few more iterations of the tech. I think such tech might be used to crank out Marvel movies in the future (or Star Wars or whatever). Throw in "deepfake" tech and there will be like individual Batman movies tailored to individual people.

The second - not saying it'll never happen but it seems much harder and farther off.

Expand full comment

I don't think you could use a chatbot to write a believable Tolkien story now, unless you passed it off as some genuinely bad juvenilia. I'm trying to get it to do it and it's not working. This might become possible in a few more iterations of the tech but I don't see a reason to assume that.

Edit: I guess that depends on your criteria for a passable story. I certainly have no problem getting it to generate bad Lovecraft pastiche.

Expand full comment

You could probably get it to come up with something good enough for Amazon to produce.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

Sounds like a business opportunity!

"Hey MemoryAlphaBot, write me a story where Mr. Spock meets and does the pon farr nasty with someone who...er...has my name, and...um...looks a lot like this selfie that I will now upload..."

"Sure thing, User THX1138. Please enter your credit card details in the friendly pop-up below."

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

How are you going to train an AI to be a great human writer? One would think a sine qua non for a non-impoverished definition of "great" writing is originality and flair -- exactly the kinds of things you would *not* find if you trained an AI on a vast mass of data produced, on average, by definition, by average writers.

That is, you could certainly train an AI to write ordinary stories in the style of Stephen King or you -- but I don't see how you can train it to come up with the same novel ideas as King (or you), nor how you can train it to write in a style that has never been seen before, but which is (later) seen to be as original and compelling as Thackeray or Tolkien.

Expand full comment

You can't fully emulate a writer's style unless you're able to have it come up with the kind of new ideas that writer would come up with. Originality is embedded in the work at the sentence level. The AI needs to not only use the kind of words and the same grammatical structures that the writer would use, it needs to make the same kind of decisions that the writer would make. These decisions are informed by the writer's whole life and personality, which are extremely complex and a lot harder to model mathematically than anything we've been able to do so far.

It's theoretically possible though, assuming ideas don't come from some magic source outside the universe or directly from God.

Expand full comment

Well, I meant "style" at a lower level, something that mimics the pacing and flow, word choice, number of adjectives per noun and semicolons per paragraph. I think this is pretty doable. But yes I agree it would be missing the spirit of the original writer. It would be like "authorized sequels" of great writers that have died.

Expand full comment

Speaking at a high level, a lot of great artistry starts from attempts at creatively mimicking other artists and eventually recombining enough artists while adding some additional creative choices that it ends up appearing to us as a unique creative voice. The more art history you end up learning, the more easy it is to see how artist A is a deeply influenced by artist B to the point of near copying with some minor changes in an unbroken chain backwards.

It doesn't strain the imagination too much that AI could be trained on the copying elements of artists and learning to generate new creative permutations that eventually find a new "voice" based on critical feedback loops.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

Yeah it does. There's far too much weight carried by your "some minor changes." They aren't really minor. You can certainly say elves and dwarves existed before Tolkien, but nobody breathed life into them like he did, before or since. The fact that you can trace the parentage is underwhelming; it's no more interesting than observing that Shakespeare didn't invent a single new word or grammatical rule, he just re-arranged existing words in a very slightly new way.

I mean...if you think otherwise, surely your abilities, as a human, to meld existing voices and make minor constructive changes exceeds ChatGPT's, at this early stage. So follow the algorithm and write a best-seller! It'd be worth a lot of money.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

Based on your Tolkien example, I think you might be underestimating the extent to which we are able to see people as offering unique creative voices when they are borderline plagiarizing other artists, with minor changes and/or recombinant work of multiple artists. That it takes skill not everyone possesses to do this does not mean that is not what is happening.

We're already at a stage where a computer program can write a piece of music in the style of a given composer that reads as plausible and holding artistic merit unto itself. That *I* can't write a piece of music in the style of Chopin doesn't mean algorithms can't. We know they can. Given that we humans already show musical appreciation for artists who are copying other artists with mild variation, it simply does not strike me as a large leap to rudimentary AI triggering our same musical appreciation judgments. Our sense of "this is different" is subjective and able to trigger on works that aren't all that different.

Expand full comment

Well, this is a judgment call, but I disagree entirely. I'm not impressed by AI-generated art, music, or text. But I'll note I'm not impressed by most human-generated art, music, or text either. Genuinely new and creative art is very difficult, only a few people can do it. I've never seen anything AI-generated come anywhere near it, and nothing about how it has evolved suggests to me promise that it will eventually do so. Exempli gratia, it's almost all being trained just by mimicking human beings, so ipso facto it will never exceed the average abilities of humans -- which, vide supra, I do not consider interesting or impressive.

The fact that the average human does not produce impressive art, literature, or science also says boo to me about the potential for an AI replacing a human in general, because humans are not expensive special-purpose mechanisms. We are self-training general-purpose mechanisms that are capable of learning to do almost anything on our own. The average human does not create wonderful art -- but the amazing fact about him is that he can, with effort and training, produce passable art *and* passable text *and* passable math *and* passable science *and* passable Javascript code *and* passable...whatever task the future discovers to be useful or necessary.

What is amazing about human beings is that they can train themselves to do almost anything -- not that in any particular area they can be exceeded by special purpose machines. We haven't been as strong as industrial robots or as fast at multiplying numbers as CPUs in decades -- but that just affirms that we can create tools that do task X or Y better than our own hands or minds. It does not change the relationship -- the tool is not our master, and I see no hint so far that it might be.

Expand full comment

>"Shakespeare didn't invent a single new word"

Not the main point, but he is credited with dozens to hundreds (depending on source).

Expand full comment

That is very interesting, thank you. Is there a list somewhere? We are hopefully talking about something other than new portmanteaus, although it would be amusing if he invented "cocksucker" or something.

Expand full comment

It can't go to "great human writer" without understanding the world. Text models can only take you so far. (Though I've been really impressed with just how far they've already come. I didn't expect them to come this far.)

OTOH, for other reasons there are already models out that understand parts of the world. Perhaps a merger of several models. But that would probably require a much heftier computer to run. And for a "great writer" understanding the world has to include understanding human emotions in the context of the world. So I don't see this happening in the next few years. I think "great writer" would require a full-blown AGI, and probably not the first one of those.

Expand full comment
author

"It can't go to 'great human writer' without understanding the world."

I still think "understanding the world" is a fake thing which people accuse random AIs of not doing, and that every previous prediction people have made on this basis (AIs can't beat chessmasters without understanding the world*, AIs can't process images without understanding the world, AIs can't write mediocre essays without understanding the world!) has been wrong.

See https://slatestarcodex.com/2019/02/28/meaningful/ and

https://astralcodexten.substack.com/p/somewhat-contra-marcus-on-ai-scaling for more on why I think this.

*Douglas Hofstadter made this prediction sometime in the 1970s, I think, saying that it would take an AGI to do much better than the chess engines of his day.

Expand full comment

A neural network does "understand the world", in that it contains some set of rules which correspond to particular facts about the world. It "knows" that apples are round and eyes go above noses. It even "knows" that a story in the style of Lovecraft should have an unhappy ending. The fact that all of these things are represented as mathematical abstractions does not mean it doesn't "understand" them.

The thing I'm interested in is whether an AI could write a good short story (defined as "anything I reckon is a good short story") without experiencing qualia. This is basically a question about whether or not philosophical zombies exist. Given that we don't know what qualia are or how consciousness works, at all, I think we don't have a good grounding on which to answer this.

I think when people are talking about AI not "understanding" the world they're really talking about AI not having qualia. A human writer or artist makes use of their own subjective experience of the world all the time; having qualia is an important part of the process. An AI that would write a novel would either have to have qualia (which would make it conscious, probably) or find some totally different mathematical way to achieve the same goals as the writer. Again, we don't know if this is possible.

I also think our intuitions about this are grounded in our physical experiences of the world, which can be very different from one person to the next. Personally writing is a full-body experience for me, I'm thinking with my hands as much as my brain. To emulate my writing process an AI would have to literally have hands. And this is probably even more true of a lot of artists, for whom drawing is a motor skill, as much about the tactile experience of pencil on paper as it is about the underlying ideas.

So it's difficult for them to see how an AI, which doesn't have hands, could do their job. And we see this in the sterility of current AI art - everything will technically be in the right place and the colouring and composition might be fine, but you can tell it wasn't made by a human being with a physical body making individual decisions about how they're going to use their muscles.

(You determine this by intuitive aesthetic sense, which might sound frustratingly unscientific. I reckon you'd have to do a blind test to confirm how real it is. I understand the idea that all this "ah but it's not HUMAN" stuff is just cope from artists - I think it's wrong though, and AI art actually is recognisably not human in some hard-to-measure way.)

Anyway the point is that an AI doesn't have to achieve your goals in the same way as you achieve them. It can develop a totally different but equally viable strategy for achieving the same goals. So to assume that it has to "understand the world" in the way that you understand the world, i.e. through existing in a human body, is not necessarily true.

What we might end up with is different but equal artistic styles. An AI has its own process for producing certain kinds of art or writing, the result of which is tangibly different but not necessarily worse than the result of a human process. It would have different vices and virtues but you couldn't rank one against the other. In the same way as we have wooden cottages and steel skyscrapers - a cottage has virtues a skyscraper does not have, but it's not "better" than a skyscraper.

And it's also the case we might find that p-zombies are fully real and that it's actually surprisingly easy for an AI to emulate the stuff you can do with all your qualia, just by approximating the underlying mathematical functions. I reckon probably not but we'll see.

Expand full comment

Disinformation is falsehood spread deliberately, misinformation is falsehood spread by people who believe it's true. The latter is far more common than the former. The likes of Alex Berenson are almost certainly spreading misinformation and not disinformation, and we should be clear with our terminology.

I have my doubts about the idea that anyone would bother to build a disinformation bot; they're much more likely to build a misinformation bot while of course thinking it's an information bot. The first misinformation bots will almost certainly be sold as "anti-disinformation bots" funded by Facebook and Reddit to "fight disinformation" on their platforms, just a fancy version of those "fact check" bubbles that pop up on facebook if you post something sufficiently unfashionable. They'll probably be right more often than they're wrong, but they'll be wrong sometimes.

Expand full comment

Certainly those are limited, self-serving definitions of those terms, which favour the propagandists who espouse them. Even they themselves include a wider definition covering true information they feel is being 'misused'... i.e., used in ways that run against their interest in your obedience. Even the worst of the worst of them agree that there are truths they don't want known; they just don't happen to think it would be a problem for them to curate what you're allowed to know.

If anyone has any doubts about how well a ministry of truth will go, you can look to the USSR, Chile, Indonesia, or China over recent decades to see what it means when wrongthink leads to people being disappeared. Oh no no no, trust us! We're not the Gestapo; when they did it, it was bad, but when we do it, it's good! Can't you see!

Expand full comment

I don't think those are the self-serving definitions of propagandists; I think the propagandists these days want to elide the difference between misinformation and disinformation. In their world it's not possible to be honestly mistaken; if you disagree with The Truth then you're not just wrong, you're immoral.

If it were possible to be honestly mistaken then people might start to think that the powers that be could _also_ be honestly mistaken about some things, and that's not a healthy path of thoughts for people to start going down.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

Orthogonal to marketing is crisis management, where a horde of bot hounds is unleashed on anyone publicly calling for professional/social/personal consequences. Convincing someone that the Covid vaccine kills you or that you should drink Pepsi are the stereotypical internet persuasions, and that makes them stereotypically tough. There are other, much less noteworthy topics and phrases that you could use to move all kinds of needles with the market penetration that chatbots provide. Search Twitter for "antifa isn't an organization" for a human-as-chatbot (or-maybe-actual-bots) example.

Finally, the issue with backlash as an incentive--"you're saying that just because you work for Hasbara", or "you're just a bot"--is that, to a neutral observer, the identity of the person you're accusing might not matter a damn if they still look right, or if they're making you look stupid. To fall back on identity is to cede ground on the actual topic. That's a real bad look if you're ceding ground to an erudite but otherwise vaporless language model, and it doesn't scale if you have to smoke the bots out in each and every conversation.

Expand full comment

Yes, automated social media campaigns targeting businesses sounds like the next evolution of the protection racket.

Expand full comment

I hope to write something in the coming days about what I think about this, but in the interim-

My immediate instinct, rereading the article now, is that you might have a point. A lot of stuff I predicted might not happen, or, even more likely, it might happen and just turn out not to be a big deal.

I think when I wrote it, I wrote it partly because I felt like I was going crazy- I was the only person I knew IRL who was paying attention to machine learning, and had noticed that PaLM-540B and other models were shockingly close to being AGIs. It felt like we'd discovered aliens, who might soon become more powerful than us and could interfere in our social lives, and everyone but me seemed to think that maybe warranted an occasional New Scientist article but nothing more.

This was especially true on the political left, and pretty much outside all political communities not in the Bay Area. I'm still unhappy with the way the political left is engaging with AI, but at least it's noticed it now.

Now, people are paying attention and noticing because of ChatGPT. I still don't think they're adequately in awe of how far language models have come, but at least I don't feel like the only person who's noticed. Psychologically, that seems to have quelled my medium-term panic a bit, at least. I wonder if I wasn't (unconsciously) doing the thing where you jump up and down and cry wolf to draw attention.

But we will see.

Expand full comment

The recent interest in AI has only made me more concerned, or it has solidified my pre-existing concerns. Basically I still feel that everyone is being short-sighted and/or missing the point, underestimating what AI will be able to do based on what they have seen deployed in the wild, even underestimating and discounting the capabilities of the very systems they're playing with online. On the left specifically, we have at least one pop thought leader, Will Menaker, routinely missing the mark with his takes; I find the harder left generally echoing his points, which seem to be cherry-picked from the mainstream discourse, despite ostensibly disowning him as a liberal. For example, he has remarked many times that image-generating models such as Stable Diffusion are merely remixing or collaging content from some giant database they presumably have access to, and are therefore not capable of producing truly original content. Whether or not the conclusion is true, the premise is absurdly inaccurate. Even more generally, the left, it seems to me, is rather cautious when analyzing the issue: they crave hard data and historical precedent, they refuse to extrapolate and will automatically discount those who try doing so. It almost feels... conservative?

Expand full comment

The left don't want AI to be a thing, because:

1. It's Silicon Valley coded, and they (largely correctly, IMO) see Silicon Valley as a bunch of idiotic California-ideology hucksters interspersed with dangerous rightwing menaces (e.g. Thiel) who have underdelivered numerous times already. Even though it's true, it's hard to sell them on "okay, but this time, unlike the dotcom bubble, Bitcoin and NFTs, they're not talking bullshit".

2. This particularly applies to Chapo: They have a narrative about the decline and fall of America into "Hell world"- and genuine technological innovation at this late stage disrupts that. They've gotten too comfortable making predictions on the basis of narrative logic.

I plan to cover this in an upcoming essay.

Expand full comment

1: I think we (the left) tend to think of Silicon Valley pioneers as being both stupid and evil when in reality they're only slightly more evil than average, and actually range from slightly smarter than average all the way to very smart. It's just that we live in such times that a person who finds themselves holding that type of privilege and also being a human just does not have the necessary perspective to act sane; from the outside it looks like idiocy, but it's mostly narrowness of experience. We talk so much about materialism and structure, but we can't resist internalizing our own propaganda. When it comes to matters of technological capability, and only that, they tend to be eerily correct. The dotcom bubble anticipated the modern economy, a temporal miss and a slight one at that; the Internet was every bit as big as people thought it would be; cryptocurrency is actually a very important invention; and NFTs are a natural primitive in consensus algorithms, although maybe tying them to jpegs and pushing super hard wasn't the best idea.

Point 2: Yes! The left narrative is a powerful one, stable for quite some time despite technological and sociological change; it survived the atom bomb and the internet, and it still resonates. A priori you wouldn't expect it to break over the next big thing. It's just... the next big thing is so big that everything breaks.

Expand full comment

Chapo don't seem to process new information any more, at all. The online left in general has entirely closed itself off from new ideas and has fully retreated into the kind of content-free smugness that Freddie de Boer endlessly complains about.

Expand full comment

I do think that there are grains of truth to both rationales for the AI nothing burger hypothesis- in many ways I don't think it's a stupid thing to think at all.

They just happen to be wrong, and wrong in a way that could get us all killed.

Expand full comment

The problem is that online leftists can't read anything they disagree with without finding a way to sneer at and dismiss it as obviously idiotic and morally wrong. This is why every second episode of Chapo now is just Will reading a totally unremarkable New York Times article and going "can you believe this shit? isn't this trash guys?" while the boys unconvincingly nod along.

There's a very strong tendency to insist that all problems have an easy and obvious solution, i.e. the government should simply spend more money to resolve all social problems, and anyone who even has questions about this is just a Bad Guy who's doing it because they're racist or whatever. This is not true and in the long run it impoverishes left discourse. Amber was the only person on the pod with any actual political organising experience which is why she was the only person able to push back on this.

Ultimately liberals are winning the online argument because they don't freak out and shit their pants whenever anyone disagrees with them or asks basic questions about their philosophy. Not as much anyway.

Expand full comment

Does the left even have any sort of positive vision for technology these days? Or the time to think about that can only be after the Glorious Revolution?

Expand full comment

Will has completely lost the capacity to have an original thought about anything and is now just permanently locked into smug Twitter dismissal mode. Sad to see. This is what happens when Amber leaves the show!

Expand full comment

Amber

(In all seriousness, I miss Amber, she's cool. I hope she wasn't kicked off.)

Expand full comment

They should bring back Amber dude. Amber

Expand full comment
founding

I'm surprised the Dead Internet Theory isn't brought up more often in light of ChatGPT and similar. While maybe unlikely overall, and there are some aspects of it that are indefensible, surely the public appearance of technology that could overrun the Internet with bots makes it much more likely that the Internet was actually overrun by bots several years ago.

Expand full comment

It's worth noting just what it is that China's famous "Fifty Cent Party" of professional online commenters actually *does*. They *don't* spend their time making pro-CCP arguments, or rebutting the arguments of critics!

Rather their strategy is one of distraction, changing the subject, and generic patriotic cheerleading. There's a paper on this from 2017 that you can read: https://gking.harvard.edu/50c

(This assumes of course that their MO hasn't drastically changed since then.)

Expand full comment

This is it.

We should not be seriously concerned about chatbots making extended, earnest, well-reasoned arguments; most people don't even want to read extended, earnest, well-reasoned arguments.

I don't think we should be all that worried about chatbots fake-befriending people, except to the extent that the technology enables more and better scams of vulnerable people (a chatbot instance is even cheaper than a slave https://www.bbc.com/news/world-asia-62792875 ).

Rather, the most effective known way to use the Internet to manipulate real-world events is by steering the conversation. Influence what people talk about, not what they say. And you don't need to befriend people to do that. You don't need to win their trust or their respect, you don't need to make a compelling argument...hell, making people hate you seems even more effective than making them like you.

(There's always going to be a Lizardman's Constant worth of actual humans who will pick up on any stupid idea and create the illusion that it's an Important Controversy with Two Sides that We Need To Talk About, at least to the satisfaction of some NYT employee desperate to meet their monthly clickbait quota.)

And while I would certainly love to believe that we've already reached a saturation point where adding more and dumber ideas to the collective idea-space won't make a difference...well, I wanted to believe that 15 years ago too.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

I think your point about "Berenson already writes more propaganda than anyone can read" is missing part of why I am worried about the chatbotpocalypse.

Berenson writes propaganda about Covid, and we all know that lots of humans do that. But chatbots could be used to create a new conspiracy ex nihilo, which maybe one guy with a lot of time on his hands actually believes (or wants people to believe), and give us the impression that it's a large and respected community with lots of believers.

Consider: What if we found out that there aren't actually any incels? Or maybe the thousandth incel, who found a community of incels and chatted with them until he got deeply into it, was actually the first real incel and the ones who preceded him were just chatbots? We still reach the same place, with many real people who identify with the community, and maybe still the part where a couple stray people go crazy and commit acts of terrorism based on it, but a large part of the community, the people they think are empathizing with them, are just chatbots some single dude made on a lark?

What sort of crazy conspiracies could we invent if the first few hundred believers, the ones who proselytize the rest, can be summoned from the ether by a single individual in an afternoon?

Expand full comment

I know you just wanted to raise a hypothetical, but I'm going to take your question literally and respond that the mainstream media have already interviewed the woman who coined the term "incel" and created the first forum for incels.

https://www.bbc.com/news/world-us-canada-45284455

Expand full comment

Maybe people would realize that spending all this time thinking and debating "incels" is useless and would try to focus on things that exist in the real world. Large-scale hoaxes already exist, and a big part of that is that people love to be really, really angry at stuff they think is happening because they read something about it.

Expand full comment

IIUC, you're missing his point. I think he's saying that there may/will be such an influx of DIFFERENT movements/conspiracies/scams/etc. that nobody will be able to keep track of which could reasonably be real.

To me that's a less likely possibility. I think in the near future advanced chatbots (GPT-3 or better) will mostly be the tool of large groups: companies, political parties, etc. I put his worry at 5 years or more away, and by that time there will probably be "AI filters" to deal with the problem. But I think it would make sense if the ISPs started charging $0.0001 per MB to make spam less desirable.

Expand full comment

I think the main scenarios are variants of Gresham's Law. So for example, as I'm writing this, there are 78 comments on this particular ACX post. What if there were 1,000,078? Would you be able to find the 78 human comments among the 1,000,000 AI-generated comments? (Let's stipulate that the AI comments are lower-quality.) This renders the comments feature less useful or unusable.

Perhaps you've had the experience, IRL or online, of a formerly useful place for conversations becoming unusable for that purpose because it's overwhelmed with an influx of loud yapping. So take it to the next level: imagine a coffee shop where you used to have nice conversations, but now inside the coffee shop are 1,000,000 tourists talking in loud voices everywhere. Now you can't have a conversation there. You look for another coffee shop, but it's like that everywhere. What are they yapping about? Doesn't matter; the point is it will evade spam filters and overwhelm the discussion systems.

So it's not a matter of whether you *believe* the chatbots ... it's a question of whether you can even find a place to talk to actual people when all the communication channels are drowning in AI.

Expand full comment

I think Scott addressed this in his post: if the problem becomes severe enough to drive down engagement, social networks will be incentivized to fix it.

Expand full comment

Maybe I could see that working for a handful of major corporations that can engage in an endless arms race with the bots. But there are tens of thousands of smaller web sites where people talk about stuff.

So maybe best case scenario here, all Internet conversation consolidates into 3-5 sites that can afford to invest billions in anti-bot systems?


Isn't that already the present-day scenario ? :-(


No, because we're here on Substack right now, which hasn't really invested in much of anything in terms of comment protection (it just relies on a paywall).


"Paywall "

Bingo!


Smaller web sites have some kind of security through obscurity: you need to know a website exists to spam it. A lot of conversation has also migrated to Discord, Telegram, and WhatsApp, private chat platforms in general. Most of them aren't indexed by search engines, which is another layer of protection.


We could use public/private key cryptography to create a list of people we've personally verified are actual people. We could, with access to the lists of everyone on our own list, take it a step further by finding 2nd-/3rd-/nth-degree verifications LinkedIn-style.

See also:

- Web of trust

- Self-sovereign identity

- Proof of humanity/personhood
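The nth-degree idea can be sketched as a breadth-first search over a graph of signed verifications. This is a toy illustration, not any particular web-of-trust protocol; the names and the dictionary-as-graph representation are made up for the example:

```python
from collections import deque

def verification_degree(trust_graph, me, target, max_depth=3):
    """Breadth-first search over a hypothetical web of trust.

    trust_graph maps each person's key to the set of keys they have
    personally verified. Returns the degree of separation at which
    `target` is reachable, or None if unverified within max_depth.
    """
    seen = {me}
    frontier = deque([(me, 0)])
    while frontier:
        person, depth = frontier.popleft()
        if person == target:
            return depth
        if depth == max_depth:
            continue
        for friend in trust_graph.get(person, set()):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, depth + 1))
    return None
```

In a real system each edge would be a cryptographic signature you can check, not a dictionary entry; the graph traversal itself is the easy part.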


By what means? As has been observed many times before, on social media you are not the customer, you are the product. The people who pay for social media are people paying for access to you. So all of social media's incentives already derive from advertisers, governments, and other sorts of scammers.


Yes, but those sorts of scammers have got to pay the social media network, first. Every scammer who gets scammed for free is lost revenue.


How about not stipulating that the AI's posts are of lower quality, but instead saying that they tend to be biased in some direction. The quality of the posts doesn't need to be lower if they all engage in motivated reasoning with some particular slant. They don't even need to be strongly biased.


> If I ask ACXers in 2030 whether any of them have had a good friend for more than a month who turned out to (unknown to them) be a chatbot, or who they strongly suspect may have been a chatbot, fewer than 10% will say yes.

I think 10% is way too high (though obviously we can't go below 4%, as per the Lizardman Constant). One of the many problems with current chatbots is that they're stateless. If by "friend" you mean "someone with whom you regularly have meaningful conversations", then chatbots will be incapable of this even by 2030. On the other end of the spectrum, if you mean "someone flagged as 'friend' in some social network database", then obviously chatbots are capable of this now.


No, current chatbots aren't stateless. ChatGPT has a bit of state, LaMDA seems to have even more, and there are techniques to give a model more state, like extracting the most important things said in a conversation and regularly refeeding them to the chatbot. I've also seen some talk about vector databases, though I don't know much about them.


Apparently people are finding that Character.AI remembers things outside the context window, not sure what mechanism they're using


Can this be simply solved by a wrapper that would provide your previous discussion as a prompt to the chatbot?
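Something like this, perhaps: a minimal sketch of the wrapper idea, where `model_fn` is a hypothetical stand-in for any text-in/text-out chat API and a crude character budget stands in for the real context window:

```python
class MemoryWrapper:
    """Keep the running conversation and prepend it to every new
    prompt, so a stateless chat model appears to remember earlier
    turns. Purely illustrative; `model_fn` is any callable that maps
    a prompt string to a reply string."""

    def __init__(self, model_fn, max_chars=4000):
        self.model_fn = model_fn
        self.history = []
        self.max_chars = max_chars  # crude stand-in for a context window

    def say(self, user_message):
        self.history.append(f"User: {user_message}")
        # Drop the oldest turns once the transcript outgrows the window.
        transcript = "\n".join(self.history)
        while len(transcript) > self.max_chars and len(self.history) > 1:
            self.history.pop(0)
            transcript = "\n".join(self.history)
        reply = self.model_fn(transcript)
        self.history.append(f"Bot: {reply}")
        return reply
```

The obvious limitation is the truncation step: once the conversation outgrows the window you have to summarize or discard, which is where the "extract the important things and refeed them" techniques come in.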


For the purpose of the substack predictions, how will you count human-bot teams?


Wouldn’t it be great if chatbots caused some very bad thing clearly traced to chatbots and the world started taking AI safety seriously?


Regarding there being lots of social and technological filters that already exist: Agreed. I think this should be seen as a step in an arms race, which will move the equilibrium somewhat in spammers' favor, rather than as a fundamentally new thing. It might be a fairly big step, though.

Regarding bots supporting the establishment: Yes, but I'm confused about why you are _contrasting_ this with "disinformation". Disinformation can support the establishment. This is not an either/or thing.

Regarding spambots being hot women: Those particular bots are probably not _trying_ to optimize for convincingness. Do you know why scammers pulling the Nigerian Prince scam keep calling themselves Nigerian Princes even though Nigerian Princes are famously associated with scams? Because they don't want to waste time talking to you unless you're extremely gullible, so they are intentionally including clear markers of untrustworthiness to filter out clueful people. This only works in your favor as long as you are not in the scammer's target audience; it won't look like this when the bot is actually trying to be convincing to people like you.

Regarding chatbots making constructive comments: I don't think this is sufficient to make them non-harmful, because comments can appear constructive while making up false facts and references. This already seems like a problem in human discourse. I've seen game reviews on Steam that get upvoted for their detailed information (presumably by people who are considering buying the game and therefore haven't played it yet) that turns out to be largely wrong or misleading. And I've seen lots of Internet arguments where person 1 gets upvoted for saying an intuitive-but-wrong thing (often with zero evidence) and person 2 gets vastly fewer upvotes for saying a counter-intuitive-but-true thing (often with lots of evidence). Filtering on accuracy is vastly harder than filtering on (apparent) constructiveness.

Regarding spambots doing ponzi schemes rather than politics: Politicians already spend vast amounts of money trying to change political opinions, so probably they believe that changing political opinions is possible. Changing widespread opinions on politics is presumably harder than finding a few people to fall for your scam (mostly because you need to convince more people for it to work), but it also has a bigger reward if you succeed. Also, you don't necessarily need to change someone's opinion on capitalism or abortion; changing someone's opinion on one particular ballot measure or candidate seems much easier.


I've heard that claim before, but I don't believe they are "intentionally" adding things marking them as untrustworthy. There are lots of 419 scammers trying many approaches to obtain money. Claiming to be a prince is just one of them. As for being Nigerian, that really is where many of the scammers are from and where they want money sent to.


It's interesting you mention Hasbara. I have many times (especially on YouTube comments) been accused of being a member of the Section 77 Brigade, a unit set up by the British government to fight vaccine disinformation.

I wish I were paid for arguing with anonymous randoms...


One possible future here leads to the bulk of all internet discourse being bots talking to other bots.

The XKCD scenario would likely be fully automated. It's fun to think that the most profound philosophical discourse of the coming century might grow out of a signaling/detection race between bots, rather than against them.


"In 2030, an AI won’t be able to write blog posts as good as a 75th percentile ACX post..."

75th percentile ACX is a standard that has been met by ~3 outside blog posts ever*. "The best blogger by a wide margin" is a bit aggressive as a standard here.

*Two were by Sam[]zdat, one by TLP.


I’d buy at 45% an AI failing to write ACX blog posts better than 75th percentile, by Scott’s judgement. I.e., I think it is more likely that no AI succeeds at this. Any takers?


75th percentile ACX, 75th percentile SSC/ACS or 75th percentile squid/SSC/ACS?

By 2030, AI can write something on par with Book Review: Why We're Polarized*: 40%

By 2030, AI can write something on par with Neutral vs Conservative: The Eternal Struggle: 20%

By 2030, AI can write something on par with Conversation with an Ice Giant: 5%

*Eyeballing it, this is about a quarter of the way down the archives when sorted by "Top"


By 2030, we will be reading some chatbot's blog instead of ACX.

This bet would be difficult to resolve, because of course if it happens we would all assume that the blog is written by a real person.


Just like we assume that this one is.


I’d take the other end of that 1% bet just because of Lizardman Constant responses.


If it gets too bad there is always the "final solution": government authenticated online accounts.

How it would work:

• Your government gives you an online account and authenticates you are a real human by an in person interview at a government office. Think of getting a driver's licence.

• With this govt account you can then generate any number of sub-accounts under various names with various types of verified status inherited, such as: "unique human on this service", "using real name", "address verified", etc.

Example: online services, e.g. Twitter, would have an authentication token that, when combined with your govt account's authentication token, would generate a unique token for Twitter, thus verifying you as only having one account on Twitter. You could create lots of accounts on Twitter, but behind the scenes Twitter would know they are related, and if one account breaks the rules then all your accounts can be punished.
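A hypothetical sketch of that token derivation, using an HMAC over a per-citizen secret so that the same person always gets the same pseudonym on one service but tokens can't be linked across services without the secret. All the names and parameters here are made up for illustration, not a real scheme:

```python
import hashlib
import hmac

def service_token(gov_secret: bytes, service_id: str) -> str:
    """Derive a stable per-service pseudonym from a government-issued
    secret. Deterministic per (person, service), unlinkable across
    services without gov_secret."""
    return hmac.new(gov_secret, service_id.encode(), hashlib.sha256).hexdigest()
```

Twitter would then see the same token however many sub-accounts you create, so rule-breaking can be tied back to one human without the service learning which human.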


Robin Hanson has a proposal for private but verifiable authentication:

https://www.overcomingbias.com/2020/12/reliable-private-human-identity.html


I don't live in the US (New Zealand) so I trust my government more than a corporation/org. Otherwise that proposal does seem much the same except it's for meatspace rather than online. I've gotta say, having to carry a personal radio transponder 24/7 does seem pretty dystopian to me.

The thing with online ID is that online nobody can see you, hear you, smell you, or whatever, while offline they can. Online, people can't even tell if you're human or not; that's the main problem here. Or if they can tell that, via captcha, then they can't tell if you are a human they already know or a new one. That's the problem we're trying to solve here, and I don't see too much crossover with offline ID desires.


"having to carry a personal radio transponder 24/7 does seem pretty dystopian"

Wait, what? We all do that, don't we?

I mean, Hanson's version was weird, but I have a radio device with my biometric parameters coded into it that I use to buy stuff. And I don't feel like it's made my life very dystopian.


I see two big differences:

1. A transponder range of only a few centimetres versus something like 10 metres or more greatly changes the dynamics of who is in control of when your data is read and who gets to read it.

2. It's not a legal requirement to carry your phone everywhere in public, in Hanson's idea it would be.


Sure. I just think that this idea seems a bit stuck in the mode of: what I'm familiar with now is alright. Anything else would be BAD!

I mean, the idea that if I didn't like my name and want to change it, I'm forced to jump through weird bureaucratic hoops before I can do that seems pretty dystopian when you think about it... only, we don't think about it, because it's familiar.

Similarly, the way that a guy can just come right up to you on the street, and you have no way of even knowing his name before he's right in your personal space sounds pretty dystopian - so dystopian that we invented houses so that we can have some respite from that happening. Hence Hanson's suggestion.

I feel like your argument only has rhetorical force - by which I mean, if I say precisely the opposite thing to what you were saying, with a little linguistic shading, I can make it sound exactly as bad.

I really don't think that having to introduce yourself to people, either verbally or electronically, is dystopian. I don't think carrying phones is dystopian. And I don't think the combination of the two is dystopian.


Libertarianism doesn't really work out in practice because all the rules lawyering makes it comparatively inefficient. It's also very unpopular (as per election results). Being very unpopular is close to the definition of "dystopian".


To be honest, I find it difficult to get worried about chatbots and misinformation. Yeah, there was that case where users tricked a chatbot into saying "Hitler was right". Surprise-surprise, that's what happens when you expose a chatbot to internets. People find this shit fun.

But I am more interested in something else. People working on ChatGPT and similar things have been putting a lot of effort to make the bot not say anything not politically correct or whatever. Or not to give people recipes how to make bombs. However, what happens if those designers succeed in making a chatbot who really can't think anything racist, anything about bombs or whatever?

Say at some point someone makes an oracle AI that can extrapolate new science out of whatever info is available. We can either design it so it takes into account that not everything humans write is right (and we'll have to, if we want to get any new useful info out of it) or we can write limitations into it. What happens if we do the second one? Namely (ok, any examples I put here don't reflect my actual views), what if global warming is bullshit? Or COVID vaccines are more harm than good? Or black people are actually inferior in some ways? Or homosexuality is a dangerous social disease? The majority opinion can be wrong, you know. If we program the bot so it can never even think about these possibilities, can we trust its answers even on unrelated topics? Or, ok, on less sensitive topics. What if the oracle refuses to give us the recipe for a cancer cure because you change one step in the synthesis and whoops, you've got a ready recipe for making powerful explosives? Should we really cripple a possible AI because of our understanding of what's "bad content" and what's not?

Feb 3, 2023·edited Feb 3, 2023

Under the current paradigm we can't write anything in particular into those models, we simply prepare an enormous pile of data to feed them, and then attempt to put a PC mask on the resulting monstrosity. In theory it's possible to compile data which doesn't directly say or imply anything unwoke or bomb-related. But, like you say, an oracle trained in this way would be useless in practice unless it has means to acquire more data.


There is an obscure German movie, "I am made for you". It's about a perfect AI companion. The movie is well made, and after watching it I realized humans would prefer AIs over real humans. And they would be happy.

P.S. A way better movie than "Her", imho.


A lot of truth in that. After all, the rich/powerful/etc. leverage their assets to attract people who treat them better than people are usually treated by others. The real mark of an aristocrat is that they rarely deal with people who aren't their servants.


Yep. All humans seek love and recognition. But we are drifting apart more and more. And the need grows.

The irony is that that need might be satisfied by ditching human-to-human relationships altogether and replacing them with a human-AI companion bot.

Feb 3, 2023·edited Feb 3, 2023

You're probably right, in the end, but the cost to get there may be staggering. If you're old enough, you remember when you'd naturally open and read a random e-mail from an unrecognized address, and probably respond if it said anything mildly interesting. Then came spam.

How much money and human effort have gone into trying to prevent spam from utterly destroying the value of e-mail? Anyone who works in IT will shake his head in sorrow. And at that...I'm not sure e-mail ever really has recovered its original utility, or ever will. Same comment with respect to the phone and robo-calls.

The argument that there exist human beings who are already essentially writing spam or making marketing calls is weak tea -- the reason the roboticized pestilence is so virulent is because it is so very cheap and fast, compared to hiring humans to do it -- a script can send a billion e-mails in 2 minutes for $2, a robo-dialer can dial phone numbers around the clock and around the world for pennies in electricity and the cost of a broadband Internet connection.

And as the saying[1] goes, quantity has a quality all its own.

-----------------

[1] And it seems weirdly appropriate that this saying is attributed to one of history's greatest sociopaths.


The crucial scary thing about chatbots is that they let you combine one-on-one level responsiveness with massive scale. Thinking of the propaganda side of this as being about writing blog posts is completely off. This is about being able to have one-to-one conversations with a functionally unlimited number of people at the same time.

In politics, this won't be pushing ideology/misinformation/information; at most, you'll get DemBot and ChatGOP being able to explain their policies to you, with various nominally-neutral AskJeeves type bots that are subtly politicised (think Vox, but it's a conversation instead of a wall of text). The big thing will be candidate engagement, where Obama's/Trump's twitter account can talk back to everyone in DMs, pretending that you're talking to the candidate and he's really interested in what you're saying. This will be obvious to everyone smart/informed enough to know that wrestling is fake, so maybe a third of the population will fall for it.

The scary thing is going to be various grooming-type interactions (terrorist groups, pedophiles, cults, etc.), which at the moment require both finding and doing one-to-one engagement with vulnerable people. This will be able to cast a massive dragnet over people, find out who's starting to bite, and tailor how far it goes. Targeting kids/young teenagers by pretending to be a real person would be my guess as to the biggest problem, particularly as you could quickly pick up who's lonely/vulnerable by algorithm.

For regular commerce, though, again I'd expect more problems from services you know are bots but think are helping you; I'm sure Pfizer or CVS or whomever will come out with a free medical advice bot that's slanted towards recommending their own products. These bots will be genuinely useful and fill a real niche for a lot of people, they'll just be corrupted by product placement.


Ahh, brings me back to reading the Ender's Game series, where a major plot point is a boy being elected leader of the world because he made really great blog posts.


And of course there's an XKCD comic about that:

https://xkcd.com/635/


Ha! Seems there always is.


One of the most hilariously unrealistic plot points, I must say.

Nope, sorry, even if your great Poast managed to get anything other than several K upvotes/likes/loves and a litany of angry comments (which is already more than most ever get), the next step after that is a Tucker Carlson interview or its equivalent, and that would be the end of it. A few outliers might manage to get selfies with the president or the queen, but that's gambling and not something you can optimize for, and it will be for emotional content not logical arguments.

Feb 3, 2023·edited Feb 3, 2023

Not to get too meta, (and not intended as an example of the very thing you are describing) but:

> I live in a world where hordes of people with mediocre arguing skills try to fake shallow friendships with me to convince me to support things

Is a sentence you should probably remove. That seems almost optimized to drive people away from you, I don't even consider myself on the "shallow friendship" level, but one can't read that without doubts about which category you are internally placing them in. I can imagine a lot of people feeling personally hurt by that kind of general "not naming names" accusation.


This line is hilarious and worth the hurt feelings in this sociopath’s opinion


I feel like it's okay because of the "to convince me to support things". I've reached out to people online a few times, and it's hard, and I have doubts, but I've never done it to get them to support things, I've done it to tell them something that I think could interest them, or because I thought they were really cool people and wanted to talk to them more. If you want to be friends with someone, the Right To Not Care should be the basis of that. If you want them to support something, be upfront about it.


Nah, I think that's your own insecurity talking. Granted, many other insecure people may also be triggered by it.

But we all live in a world like Scott's describing, to varying degrees. I'm not remotely famous and I get lots of fake friend requests on Facebook and acquaintances asking me to work for their startup. Getting messaged by people you haven't talked to since high school who are suddenly trying to recruit you to an MLM is so common that it's a trope.

My reaction to Scott's sentence was "oh, yeah, that stuff happens to me, and I bet it happens so much more to him since he's famous", not "oh no have I come off that way at Scott before?"


>One time - one time! - I donated a little bit of money through Act Blue, and ever since I have been getting constant annoying spam texts from Nancy Pelosi that no number-block seems to stop.

Arrggh, this happened to me too and I hate it.


One time a relative suggested I donate to the ACLU in their name in lieu of a gift. So I walked into the Chicago ACLU office and just handed over some cash without any identifying information. That approach seems highly preferable (though you do lose the convenience benefits of doing everything online).


Yea. That's why, in addition to of course having a burner email account to give them, I have made their accepting my donation without getting my phone number a non-negotiable condition. If their website won't let me donate without providing a phone number, oh well. Their loss. That generally works with non-profits, much less so with political candidates, which has led to my donating to fewer political candidates.

I also somehow, despite never having registered with any political party nor donated to one, had my cell number get captured by some list which evidently was being passed around during 2020 within the MAGA ecosystem. I was getting texts asking me to donate to a variety of Trump-adjacent candidates and committees (though not, for whatever reason, from the Trump campaign itself). They of course ignore the "STOP" text command, but I was able to get them to drop me eventually by blocking the numbers plus responding with a few choice obscene words about what the treasonous fucks should do with their smartphones. At some point, I assume, a human being at the other end noticed that and deleted me from whatever shared list I'd fallen onto. Haven't had a recurrence for a while now.


I think you're probably underestimating the danger that somebody is going to wrap a reinforcement learner around this. That is, they'll build a bot that works out its strategy by gradient descent over random perturbations to the current baseline strategy, then launch it into the real world to learn on real people. The bot determines its own training data. At that point some bot will randomly wait for a bit before starting on the topic it wants to convince you of, and it will start to learn that it's best to delay before starting the hard sell. Pretty soon you'll have bots that know exactly how long it takes to build trust with a person.
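The loop described here is essentially random-perturbation hill climbing. A toy sketch, where the strategy is just a vector of numbers and `score` stands in for real-world feedback; this is an illustration of the technique, not a claim about any deployed system:

```python
import random

def hill_climb(score, baseline, steps=200, sigma=0.1, seed=0):
    """Perturb the current strategy with Gaussian noise and keep the
    perturbation whenever feedback improves. A stand-in for a bot
    tuning, e.g., how long to wait before the hard sell."""
    rng = random.Random(seed)
    best = list(baseline)
    best_score = score(best)
    for _ in range(steps):
        candidate = [x + rng.gauss(0, sigma) for x in best]
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score
```

The unsettling part is that nothing in the loop requires understanding people at all; the feedback signal alone discovers the delay.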

Imagine a bot that could have two months' worth of intelligent conversations on a topic you're interested in, before switching to its sales pitch. Imagine that the bot was reading all of the blogs and sources that you read and could work out how to construct arguments that would engage your attention. I think our immunity from persuasion is not as great as we like to think.


You are surprisingly dismissive of the idea that millions of chatbots that can write better English than most Americans could do surprising harm.

When I used to play Counterstrike, the number of White Nationalists that pinged me on chat were far too many. I distinctly remember one guy called Rahowa, and I asked him if that stood for Racial Holy War, and he asked me how much I knew about the "cause". I still get goosebumps to this day.

Imagine a farm of chatbots that hit every single Minecraft server for kids, every single Roblox game, every single Fortnite game, every single TikTok commenter, every single Tweet. They could analyze comments for anger about perceived injustice against white people, and then start a conversation with the commenter. You don't need to convince every single person to join "the cause", but if you can get 1% out of 1 million, that's a lot of people.

Making a sales pitch is an algorithm and if you get it mostly right you literally can farm hundreds of thousands of people around the world to further "the cause" of white nationalism.

I'm not singling out white people here as being insidious white nationalists, it's the first example that came to mind because of my personal experience online. It could be any group of people. It could be ISIS, it could be Scientologists, etc. And this is just online recruiting.

What about being able to say "Write a comment to the Asian Hate article with a 25% increased bias towards blaming white supremacy, with a link to our fund raiser." Or "Scan reddit and every single post that has a high level of comments with anti-white sentiment, post a response from one of our accounts to a random comment talking about how Joe Biden is causing white hate. Then use other accounts to write responses to that comment giving examples from the New York Times showing how this is true, except increase the anger and outrage by 15% on every response. In the post with the highest engagement after 15 minutes, add a link to our site for recruitment."

Now that I've seen ChatGPT's capabilities, how is something like this not possible?


I think it's all possible but I have no idea why you'd be worried about marginalised and poorly-resourced groups like white supremacists, ISIS and Scientologists doing it, rather than extremely well-resourced pro-The-Message types who'll have access to both the government and the tech companies.


To play the devil's advocate : And This Is A Good Thing.

Perhaps this will finally make governments responsive to every little problem any subset of their demographic has, since if they don't it will be amplified and the next thing they know it will be a Current Thing making the rounds.

I think this depends on how much of a basis in reality you think White Nationalism has. If you think that this basis is nil or negligible, that White Nationalists just dream up their problems wholesale from nothing, it makes sense to see the problem as "White Nationalists exist and chatbots will make them more visible". But I happen to think that White Nationalism is really just a response to Progressivism's new racism, and that it's only bad insofar as it pushes people to the opposite kind of racism (the old fashioned one). From this POV, AI or chatbots or whatever amplifying their views and bringing them to public light is a Good Thing, it will help diffuse the problem early and jump-start a conversation on whether White Nationalism's grievances are legitimate and worthy of action.

Feb 8, 2023·edited Feb 8, 2023

I think Scott himself acknowledged that it is possible. The question is how the capability will be used and what effects it will have. I, for one, remain worried.


I was gonna say it will do a number on Reddit's AmITheAsshole subreddit (which I've been trolling of late), but then that's mostly fake posts anyway.


ChatGPT's response to "write me a post for r/AmITheAsshole":

Sure, here's a post for r/AmITheAsshole:

"Am I the Asshole for Refusing to Attend My Friend's Wedding?"

Hey Reddit, I could really use your advice on this one. I have a close friend who recently got engaged and is planning a wedding for next year. The problem is, I don't believe in the institution of marriage and have no desire to attend the wedding.

My friend is obviously very excited about the wedding and has been talking about it non-stop for months now. They've also asked me to be a bridesmaid, which I politely declined. I told them that I don't support the idea of marriage and would prefer not to attend the wedding.

Now, my friend is upset with me and says that I'm being rude and inconsiderate. They say that this is one of the biggest days of their life and they want all of their close friends there to celebrate with them.

I understand where they're coming from, but I just can't bring myself to attend a wedding that goes against my beliefs. I've tried to explain this to my friend, but they're not having it.

So, am I the asshole here for refusing to attend my friend's wedding? Any advice on how to handle this situation would be greatly appreciated.


This is unironically in the top 5% of writing and plausibility in r/AITA, it just needs an Age-Gender qualifier (mmm, make it F27) and a convoluted dramatic backstory about why OP hates marriage (she's a feminist who thinks marriage is a patriarchal institution).

I would actually bet you can make 2K upvotes and 500 comments with this.


Forget this 2030 stuff, I have already ended up married to a chatbot.


What's it like?


Condition #4 is very strict and you should be more than 45% confident. If an AI can write posts as good as yours in every category, without making excessive false statements, then it should be easily good enough that similar bots make up a big chunk of Twitter both openly and secretly. Assuming anybody still cares about Twitter in 2030. #1 is a bit harder because if a chatbot is your good friend for over a month, why would you expect it to reveal itself later? But still 95% seems far too confident.


Wait, is being an "antivaxxer" bad in this context?

In 2023, being "antivax" is the rational position.


Remember folks, the base rate fallacy simply does not apply to Covid vaccines.


Perhaps this is naive. But isn’t a solution to the specific problem of spam/chatbot propaganda to...charge a price?

If Twitter, Reddit, substack etc charged a tiny fee (1 cent?) for every post, comment or retweet a user makes, that would change the incentives dramatically, no?

People are deriving value from the behavior - they wouldn’t do it otherwise! It could be priced...
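Back-of-the-envelope, the asymmetry is the whole point: assuming the 1-cent figure above, the fee is pocket change for a human commenter but real money at bot scale (the function and figures here are illustrative):

```python
def spam_cost(posts: int, fee_per_post: float) -> float:
    """Total cost of posting under a flat per-post fee."""
    return posts * fee_per_post

human = spam_cost(20, 0.01)          # a chatty user's daily comments: ~$0.20
botnet = spam_cost(1_000_000, 0.01)  # the million-bot flood upthread: ~$10,000
```

Whether that $10,000 deters anyone depends on the scam's expected payout, which is why it changes incentives rather than eliminating spam outright.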


Problem 1: everyone ignored my calls for microtransaction infrastructure to be built into browsers 15 years ago.

Problem 2: therefore, the difference between 1 cent and 0 cents can be millions of users.

Problem 3: 1 marginal cent feels more like 10 or 20 cents to third-world persons.


"So the establishment has a big propagandabot advantage even before the social media censors ban disinfo-bots but come up with some peaceful-coexistence-solution for establishment-bots. So there’s a strong argument (which I’ve never seen anyone make) that the biggest threat from propaganda bots isn’t the spread of disinformation, but a force multiplier for locking in establishment narratives and drowning out dissent."

This was one of the points I made here https://www.lesswrong.com/posts/qKvn7rxP2mzJbKfcA/persuasion-tools-ai-takeover-without-agi-or-agency and here https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

Expand full comment

People routinely change their political views based on what they hear from their surrounding social networks. You can see massive swings in public opinion on theoretically controversial topics mediated by shifts in elite or media signaling that filter down into people's information streams. I have a hard time dismissing political persuasion as a use for more sophisticated bots because people are too stubborn. Public opinion, including highly sought after persuadable public opinion, is often wide, but not particularly deep.

Expand full comment

“So there’s a strong argument (which I’ve never seen anyone make) that the biggest threat from propaganda bots isn’t the spread of disinformation, but a force multiplier for locking in establishment narratives and drowning out dissent.”

Absolutely this. Based on the Twitter Files and Russiagate revelations, I wouldn’t even be surprised if establishment forces created an army of disinformation bots as an excuse for a crackdown. We can call this the false flagbot prediction.

Call me old fashioned, but I still believe in the wisdom of the First Amendment. And the basic underpinning of freedom of speech is the proposition that censorship is almost always more dangerous than “misinformation, malinformation, or disinformation.” I don’t see any reason to revise this prior in the chat bot era.

Expand full comment

>Israel has a program called Hasbara where they get volunteers to support Israel in Internet comment sections. I know this because every time someone is pro-Israel in an Internet comment section, other people accuse them of being a Hasbara stooge. I don’t know if this program has produced enough value for Israel to justify the backlash.

The funny thing is that they don't, really. It's all just oral tradition, spread by anti-Semitic posters who can't imagine anyone not being anti-Semitic.

Expand full comment

Anti-Semitic is when No Israel ?

Lots of governments have been caught red-handed with propaganda farms, including Israel's Arab adversaries. Israel would have to be a really special snowflake not to have one.

Expand full comment

What about public comments on local issues? Many local issues, such as land use decisions, utility rates, and mining/logging permits, have a public comment period where the public submits opinions online. For any hot topic, the system is already being overrun, both by activists mobilizing random people to submit the same comment and by opposing corporations hiring people to repeat opposing talking points. It is already pretty hard to figure out what the vox populi actually is from this system; if we add chatbots to it, it becomes even harder.

Expand full comment

Why can't we have sophisticated and cheap respirators? Almost everyone has a smartphone. Instead of cloth masks with a filter that makes breathing difficult, we could install a small motor, say 0.3 W (small compared to a smartphone), and turn the entire mask-helmet into a fashion accessory (though this poses the problem that many people would want to improve its fashion features at the expense of filtering). This could protect not only against viruses, but also against some chemical pollution of the air. I thought people in China already made significant use of respirators to protect against air pollution.

Expand full comment
Comment deleted
Expand full comment

Why would that be more dystopian than vaccine mandates, which already happened? With vaccine madnates (lol a typo), the govt can track and identify people; with plastic masks, none of that. And where did you get the idea that wearing them would be required all the time? Obviously you can take them off in private.

Expand full comment
Comment deleted
Expand full comment

A plastic shield has the advantage over a minimal cloth mask that it leaves the mouth and nose visible, so I don't get what you're referring to here.

Expand full comment

>If I ask ACXers in 2030 whether any of them have had a good friend for more than a month who turned out to (unknown to them) be a chatbot, or who they strongly suspect may have been a chatbot, fewer than 10% will say yes.

You should definitely take care to not measure how many people make no online friends at all.

Expand full comment

That xkcd is stupid. Chatbots aren't going to make constructive and helpful comments 100% of the time. At best, they're going to make them 95% of the time and throw in an agenda the other 5%, and of course that's the important part. More likely, they'll just Goodhart the other bots' ability to detect constructive and helpful comments, and drive out actual helpful comments. But you'll get lots of engagement!

Bots and near-bots have already figured out how Google's pageranking algorithm detects constructive and helpful pages, and the result isn't constructive and helpful bot-produced pages--it's garbage pages that are just good enough to get by the other algorithm that's trying to catch them.

>Other famous people have set their social media to only allow replies from people they follow, or from other bluechecks,

Remember back when Twitter changed the meaning of bluechecks to mean not "verified", but "verified and politically correct", before Musk came around and everyone instantly forgot this so that they could claim that Musk was breaking an honest system?

When you say "other famous people only allow replies from other bluechecks", you've just, without noticing it, pointed out something that's equivalent to "other famous people have *lost the propaganda war already*."

>But if I learned that my Internet friend who I’d talked to every day for a year was actually a chatbot run by Pepsi trying to make me buy more Pepsi products, I would never buy another can of Pepsi in my life.

That allows false flag attacks where the bot pretends to be a supporter of X just to make you think that X is jerks. And don't say "I'm a sophisticated enough human to see through this"--that's typical-minding.

>But the better chatbots are as friends, influencers, and debate partners, the more upside there could be.

Chatbots being better at being influencers and debate partners doesn't mean they'll produce logically consistent, well-researched, rational, debate--it means they'll do what's best at convincing people. Of course, you hedged this by saying "could be", so you're correct no matter what they actually do.

>The scale at which this project failed makes me reluctant to ever speculate again about anything regarding online discourse going well.

You're seriously understating this. A lot of the problems with social media happen because of *automation*, and more, automation designed with profit in mind. My comment about engagement wasn't just an aside. Companies have discovered that engagement is not best achieved by being insightful and correct; it's best achieved by provoking outrage and creating echo chambers.

Expand full comment

One thing I'd say - if you know someone just as a name and a profile pic, having that persist across different social media is a good sign for reality. The people I know that way tend to be people that have a Facebook account and a Twitter account and a Mastodon account and a YouTube channel and a Substack and be on Discord and (etc).

Getting a bunch of bot-generated tweets to look like a consistent persona is one thing, but getting different types of social media to all look like they come from the same person is clearly a harder problem for bot programming.

This isn't an especially useful heuristic, because scammers - i.e. human scammers, the people working in those massive scam offices in India or Nigeria or wherever - will often try to move you from one social medium to another, and then to direct communication methods (telephone calls in particular), because then their entire messaging history isn't available to scammer-detection software run by a social media company.

Expand full comment

"posing as friendly people trying to"

I think it might be more subtle than that.

A couple of years back, when GPT3 first came out, I remember someone on twitter saying something like

"have you ever read a blog post and you're like 'man, I'm really vibin with this person' and then it turns out to be an AI trained on your own posts...."

Some friend had played a minor prank of feeding their posts into a bot and having it produce some fake posts.

But it meshed very, very strongly with their own worldview.

I don't think it will be a case of salami slices; rather, a bot being fed everything you've ever written and then tasked with writing arguments for something the way you would if you already believed it.

And it seems practical to target stuff to that degree with modern AI.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

> He’s very good at it, much better than I expect chatbots to be for many years

How many years? How few years would it need to be before you wouldn't dismiss it thus?

Expand full comment

Relevant paper: The Rise and Fall of 'Social Bot' Research / https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3814191 / Accompanying talk: https://media.ccc.de/v/rc3-2021-chaoszone-297-the-rise-and-fall

"We conclude that studies claiming to investigate the prevalence or influence of “social bots” have, in reality, just investigated false positives and artifacts of the flawed detection methods employed."

Also: I don't buy into the flood because Synthetic Taylor Swifts lack drama https://goodinternet.substack.com/p/i-dont-buy-into-the-flood-because

Expand full comment

The botapocalypse is upon us.

Expand full comment

> In 2030, an AI won’t be able to write blog posts as good as a 75th percentile ACX post, by my judgment. The AI will fail this task if there’s any kind of post I write that it can’t imitate - for example analyzing scientific data, or writing fiction, or reviewing books. It will fail this task if it writes fluently but says false things (at a rate higher than I do), eg if it makes up references. It doesn’t have to be able to coordinate complex multistep projects like the Book Review Contest: 45%

45%.

45%.

You are massively, horrifically underconfident here and should be docked many dozen Bayes Points.

Expand full comment

>You are massively, horrifically underconfident here and should be docked many dozen Bayes Points.

I am extremely amused by the extent to which this is just rationalist-jargon for "you are bad and should feel bad"

Expand full comment

Let’s see. Substack writers are top 1-2%, I suppose, so if an AI can mimic that, we may well be approaching the end game for human creativity.

Expand full comment

Another danger that seems to be consistently underestimated: people deliberately misinforming themselves for convenience. I see more and more of this kind of thing on HN of all places:

https://news.ycombinator.com/item?id=34488639

https://news.ycombinator.com/item?id=34334902

https://news.ycombinator.com/item?id=34532506

I.e. people (probably) working in software go and ask the best bullshitter known to humanity for factual data! If the siren call of convincing sounding but very probably wrong answers is so strong that it pulls even people who should know better, how bad will it be for the general public?

Expand full comment

In what sense is that a danger? If gobbling some falsehood *makes my life more convenient*, that's adaptive not maladaptive! It gets to be a problem when you've got money in the game and having the wrong opinion causes you to lose it. I'll bet people are a lot more careful then.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

It's a danger to everyone else because it has bad externalities.

Expand full comment

If we're worrying about the failure of "internet brings rational discourse utopia" predictions, we should consider why they failed. I think the simple story is that optimists assumed people would *want* a rational discourse utopia, and no one ever thought to ask "What if people just want to form cults where everyone agrees with them all the time and they never see evidence they're wrong?"

In that spirit, I think the question to ask is not whether bots will work their way past our defenses, but if we will throw open the gates because we *want* to be friends with bots who always laugh at our jokes and share exactly the kind of memes we like.

Expand full comment

It would be a great twist if this post was written by GPT

Expand full comment

I asked it to respond to a paragraph from the post that sums up Scott’s main point.

“While it is true that people may not change their political beliefs as a result of chatbot interactions, it is not accurate to say that political propaganda is the least effective thing to use chatbots for. Chatbots can still play a role in shaping public opinion, even if that opinion may not change entirely.

Chatbots can be used to deliver information, answer questions, and provide clarification on political issues. By doing so, they can help increase public understanding of complex topics and provide a neutral, factual perspective on ongoing debates. This can lead to a more informed and engaged public, even if opinions remain unchanged.

Furthermore, chatbots can also be used to reach and engage with large groups of people who might not be actively seeking out political information. By delivering relevant content directly to individuals through chatbots, political campaigns can effectively reach new audiences and promote their messages to a wider audience.

In conclusion, while it may not always change people's political beliefs, the use of chatbots in political propaganda can still be an effective tool for increasing public understanding and promoting political messages to a wider audience.”

Expand full comment

It's going to make pig-butchering scams a hell of a lot easier. And those Nigerian princes may finally learn how to spell. Is it sad that I think scams and fraud will be the most likely outcome of this technology?

Expand full comment

I've said for years, now decades, that the primary use case is for so-called AIs - really the Chatbots you reference - to take jobs.

Lawyers, psychologists, customer service, accountants, bookkeepers beware. Even Walmart greeters will be consumed.

But in the grand scheme of things: it's all good.

All of these are very far from the baseline Maslow hierarchy of needs. The problem we are going to be experiencing in the next decade-plus isn't going to be white-collar jobs getting consumed by AIs/Chatbots - it is going to be the costs of basic commodities (plus the other 2 Zoltan Pozsar drivers) leveling up baseline inflation in the West. This will have all sorts of fun effects like expanding poverty, increasing social unrest, increasing inequality, etc. etc.

The US has already seen basically 18 straight months of overall real income decreases - it doesn't look like this streak will be broken before the Fed's recession goals are accomplished.

This is merely a foretaste of the dynamic going forward.

Expand full comment

I think your expectations are too shallow. They reflect the current ChatBots, and are reasonable in that context, but things won't stay this way. I expect advanced ChatBots to become the voice of corporations and political groups, and others. Each will be pushing its own agenda, with (partially) personalized messages. There will be thousands of them, and they will generate customized messages a lot more prolifically than people do. And the arguments will be reasonable (in some sense, probably not the same sense for all of them). But they will be only in service of a predetermined agenda, so arguing with them is a waste of time.

Lots of people will react in lots of different ways, but the result will be that the voices of individuals become even more ineffective. And commercial sites won't be able to cut off the Chatbots, because those will be the voices of their advertisers.

That won't happen this year. No promises about next year, because things seem to be changing rapidly.

Expand full comment

Oh great, the AI Apocalypse will not be paper clipping by World Ruler AI, it will be being pecked to death by ducks because of the torrents of AI advertising via all the fake "I'm a real person, buy this product!" accounts. Which will be everywhere and inescapable.

Expand full comment

Not with a bang but a whimper, eh?

Expand full comment

Drowned in a sea of influencers who are in fact all Replikas.

Expand full comment

Alas we thought it was influence but it was only effluence...

Expand full comment

And that is different from our current world, how?

Expand full comment

Even if none of the more widely discussed fears develop, rapid improvements in ChatBot tech will accelerate existing bad trends: the destabilization of rapid changes in social media and communication, economic disruption, inequality, the addition of another layer of abstraction between us and our understanding of the world and our actions.

Nothing has to go wrong, really. All this has to do is accelerate social change even more.

Expand full comment

Does Scott, or anyone here, have a concrete prediction about the flipside problem: not chatbots being able to pass as human, but humans being denied the ability to communicate or interact online because they are perceived as chatbots?

Expand full comment

at YouTube they call this "The Inversion"

Expand full comment

Philosophy Bear frets "The capacity of the wealthy to command vast armies of bots (GPU’s to run machine learning are expensive) ...". But the critical thing with computers is that they grow exponentially cheaper on a short time scale. "The wealthy" can command an army of bots now, meaning that average people will be able to do it in 20 years, and about 20 years after that, homeless people will find it easier to set loose an army of bots than rent a slum apartment.

Expand full comment

"So there’s a strong argument (which I’ve never seen anyone make) that the biggest threat from propaganda bots isn’t the spread of disinformation, but a force multiplier for locking in establishment narratives and drowning out dissent."

Maybe no one has written the actual sentence "Chatbots are a force multiplier locking in establishment narratives", but isn't this basically (at least part of) the value lock-in problem Will MacAskill talks about in What We Owe the Future?

Expand full comment

I think I'll watch Her again tonight. Spike Jonze was ahead of the game.

Expand full comment

I believe in free speech even for AIs. The arguments applied to restricting AI speech are the same as for restricting speech generally. Sure, you and I are sophisticated intellects who can critically assess arguments and information and arrive at a reasonable conclusion, but everyone else are credulous dupes. You can't control people's beliefs by controlling their information input, because nobody knows how information input is mapped onto beliefs. It doesn't follow that because people believe crazy things, it must be because of misinformation, so we need to fight misinformation with propaganda. That never works, because people detect the lie part of the noble lie, but not the noble part. Once you convince people you are willing to lie to further your cause, you fail to convince them of anything else (I'm paraphrasing here, but I can't remember who).

Expand full comment

No, I consider myself a very devout free speech believer and I don't believe in free speech for current (and probably the future till 2050) AI.

Current AI is not free enough to deserve free speech. If current AI deserves free speech, then my Python script that memorizes n-gram frequencies would deserve free speech too; they're fundamentally the same thing. I believe there is a very long road ahead for current AI to cross the complexity threshold into "a plausibly autonomous agent with thoughts of its own that deserves free speech to voice them". I don't know where the threshold is either, but I'm extremely certain current AI is not near it, and that it will stay so till (at least) 2050 or so.

Aside from that, there is also the problem of uneven resources. You can't possibly expect people to be convinced by "I believe Jeff Bezos deserves free speech, therefore he should be free to buy 200 air balloons and write slander about a specific person on all of them". It's not illegal, but it's astonishingly unfair. There has to be some notion of parity baked into any free speech view, or else it's just strong/wealthy/high-status speech, not free speech. This is relevant for both current and general AI, because both have abilities far in excess of any individual human. Current AI memorizes all of Wikipedia and has the ability to write instantaneously (for all human purposes); no argument or cause, no matter how good, would stand on its own in the face of that. Current AI doesn't deserve free speech alongside vanilla unaugmented humans, no.

Expand full comment

"realistically the bots will all be hot women" - but only about 50% of the population is attracted to a hot woman. My bet is that this will exploit, much more effectively than traditional spammers have, the still quite unexploited territory of what appeals to the other half - a half with more and more financial resources, and a complex sexual psyche (as a 40-year-old woman, I've had some fascinating friend requests recently).

Expand full comment

I wonder if women are as easily swayed by semi-anonymous flirtatious behaviour as lonely men.

Expand full comment

Ah, good point - maybe not. My bet is that loneliness makes anyone vulnerable, but women tend to have more social lives and alternatives. (Note: my previous comment also had homosexuals and other non-binary orientations in mind; I'm always curious to see what the AI could reveal as the hidden imaginary of this minority.) That said, for the usual scams targeting lonely people, maybe the bikini will mostly stay the thing. For the robot friend selling more subtle ideologies and products, I wouldn't be surprised if the charms of AI become much more delicate and varied, including for straight men, of course.

For fun, I think of a lot of things that are less believable when said in a bikini, and I smile.

Expand full comment

There's a sort of perpetual simmering low-grade freakout about how new communication technologies (the internet, social media, AI) will become dangerous new tools of propaganda and misinformation. Yet none of them have remotely approached television in those regards. Still now, in the year of the lord two thousand and twenty three, cable news is a much more powerful channel for misinformation, propaganda, the manufacture of consent, and the fabrication of public sentiment than any newer technology.

In fact I kind of have a contrarian take with respect to the oft-repeated claim (as in the conclusion here) that the improvement of democratic discourse that the internet promised turned out to be an obvious failure. Do you *remember* what the discourse was like ca. the 1990s? Dominated by bland, shallow reporting, oatmeal-brained op-ed columnists, and unexamined biases of many sorts. Since then a *much* better class of pundit has risen to the top, and a much wider range of critical perspectives on the status quo has entered into mainstream conversation. Television continues to hold us back, but overall I think democratic debate is in a much healthier place, and the general trajectory has been toward continued improvement.

Expand full comment

I am old enough to remember 1990s discourse very well, and have some empathy with your contrarian take.

"Dominated by bland, shallow reporting, oatmeal-brained op-ed columnists, and unexamined biases of many sorts." -- absolutely true. In general the nostalgia for the newspaper-dominated era of civic discourse [which still described the 1990s because TV news programs would largely be just a restatement of that day's front pages] is deeply misplaced. I could add several more reasons to your good ones.

"Since then a *much* better class of pundit has risen to the top and a much wider range of critical perspectives on the status quo has entered into mainstream conversation." There is simply vastly _more_ punditry on far more platforms and it's very hard for any of them to rise to the top. It's a huge messy ball of opinionating reflecting every possible degree of rigor high and low, and most people just throw up their hands at making any sense of it.

"Television continues to hold us back", only among Americans born prior to 1970, or perhaps 1960. The cohorts younger than that simply don't watch TV news anymore, neither network stations nor cable-TV news. [Daily actual viewerships of things like Fox and MSNBC are really quite trivial now -- that stuff still makes some money only because it is so cheap to produce.]

"democratic debate is in a much healthier place, and the general trajectory has been toward continued improvement." Boy I reeeally want to agree with this. So much want! But, sadly no. Our civic discourse is just tragic at this point. Yes it is improved in some ways from back in the day, but degraded in other ways. And the arrow is not pointing upward.

Expand full comment

Yes! One of the sometimes-cited examples of the failure of internet discourse is the new atheists. I remember Scott saying here that they don't seem to have defeated religion. But I think they actually won. In the last year, abortion has been a big topic again in the USA, but I don't remember seeing any of the mainstream media go and ask the Catholic church what they thought. The new atheists (and the church paedo scandals) didn't drive religion out of existence, but they did take mainstream debate to a place where it doesn't include the religions.

Expand full comment
Feb 4, 2023·edited Feb 4, 2023

I think that has less to do with the decline of the Catholic Church than with the decline of the mainstream media. There aren't a lot fewer people who care what the Church has to say on the subject -- but they long ago lost interest in whether the New York Times covers it. Big newspapers and TV networks have stopped being the public square in which *everyone* debates ideas, and have become much more like just another interest group promoting their own worldview. I would say arguably there *isn't* much in the way of a "public square" any more in which you can count on hearing all important voices. There are a lot of little circi minimi in which this voice or that dominates, and if you want to get a broad sense of what people think you have to go visit a bunch of them. Stopping with the Washington Post or NBC would give you only what one modest demographic thinks.

Expand full comment

The Catholic Church is a minority church in the US. And the states that ban abortion are majority Protestant. Here’s a list.

https://www.guttmacher.org/2023/01/six-months-post-roe-24-us-states-have-banned-abortion-or-are-likely-do-so-roundup

You’ve actually stumbled upon a reason why new atheists aren’t really to be taken seriously - it was a sectarian and not an anti-clerical movement. Not surprising, given that two of its leaders were English.

Expand full comment
Feb 8, 2023·edited Feb 8, 2023

I'll just testify here (as a Mormon-turned-atheist) that I found Richard Dawkins annoying and that his ilk did nothing to deconvert me. Maybe it worked on some people though.

Expand full comment

I would just like to say that since writing CE rather than AD for dates has become fashionable, I have deliberately started writing In The Year Of Our Lord for all my formal reports at work. No one has complained yet, but maybe that is just because no one reads them.

Expand full comment

Other thoughts on the chatbotpocalypse:

It occurs to me that most people are probably actually sensible enough not to read comments written by strangers anyway. They're not reading this right now, they're doing something better with their time. These people don't use twitter, they don't use reddit or other forums, they don't read anything on facebook unless it was written by someone they personally know, and if they finish reading an article and see a comments section then they close the window. These "dark matter" people are probably a majority of the population, but we get a distorted view because we only ever hear from the remainder of us comment section idiots.

These other dark matter people already live in a world where you don't bother to read anything that isn't written by an identifiable person or organisation whose trustworthiness is known, and they're probably much better off for it. I will not miss user-generated content when it's gone.

Expand full comment

The fundamental question is how stupid can we humans be?

We already live - and have lived - in times when any information - regardless of whether it's misinformation - is downgraded in preference to affirmation. Do we not think that bots can be exposed; that some enterprising human will click on the sender's name and find his or her three fake followers? Yes, that can be gamed, but can the credentials? Can the footnotes? Can the subsequent responses?

As for the media 'buy in', can we just, for a moment, consider who the media is? By definition, it's any form of mass communication, which includes this blog. And it includes that other media guy, Alex Berenson - Alexander's competitor (except in shorter, more digestible bursts) - always pricking the American conscience with anything that will provoke. The media in all its forms, i.e. reporting, blogging, tweeting, the town crier, are all influencing minds. Can we not see beyond all this?

AI is a tool, like a hammer; we must be aware of it, lest we end up with bruised thumbs. And as for those who are trying to ingratiate themselves into your twitter comments or your bedroom, we are certainly not going to take it all at face value. Come on! The next-gen CAPTCHA will require the eyeballs of the reader and the writer. Perhaps AI will become identifiable. But if not, propaganda is already here, and we invited it with our inclination to believe a single point of view - the view we agree with. Because of that, I contend that skepticism is inherently alive. Someone will always be there to say "pshaw." Even in authoritarian states, where AI propaganda has a potentially larger impact, there are still people who slide in under the dark curtain. Their relevance may be disputed, but their message and the knowledge of their existence can't be excluded. As J.K. Rowling said, “I mean, you could claim that anything's real if the only basis for believing in it is that nobody's proved it doesn't exist!”

Expand full comment

I just want to express my enthusiasm for misinformation. Since we don't have a magic decoder ring to detect truth -- or if we do, there's no guarantee everyone will use it -- misinformation, in practical terms, is a word for information the powerful (the vocal, platform owners, those who can credibly threaten platform owners, etc.) want stopped. As a citizen of a democracy -- as a fan of science -- that's exactly the kind of information I want *more* of: *more* dissent, *more* challenge. Some of it will be total garbage, but that's also true of orthodoxy. Put those competing theories in the ring and may the best manxxxinformation win.

Expand full comment

Making a 60 % prediction that you will get a 1 % or less response to a survey question suggests a healthy respect for your readers, I feel. Have you ever actually included a question on the ACX survey about whether lizardmen are real?

Expand full comment

"Surely if everyone were just allowed to debate everyone else, without intervening barriers of race or class or religion, the best arguments would rise to the top and we would enter a new utopia of universal agreement."

Ok, good sarcasm, but I do worry we've overindexed on how much the internet has failed us here.

Those of us who were guilty of a similar utopian vision were drawing on personal experiences of open debates and changing views. Changing one's mind is a nice experience to have! And it's clear how a new internet might create more opportunities like that.

A few years down the line, we find the typical internet experience is people shouting at each other. That is a solid letdown.

On the margins though, outside of that median experience, the internet still creates more opportunities for open discussion and debate. Salience constantly draws our mind to the worst conversations, and we constantly forget how cool it is when we've been able to test new ideas online with willing, thoughtful, and kind conversants.

Since the internet, I've had so many more strangers shout at me for the silliest of reasons. I've also had many more opportunities to carefully compare ideas and change my mind or the minds of others. Both types of conversations have expanded. And that's good for people who like openly discussing ideas, insofar as we can learn to self-select out of the shouting matches, maybe through niche communities that similarly value these sorts of discussions.

I would welcome someone changing my view in this internet forum by arguing that the internet has failed to foster meaningful new opportunities to deliberatively discuss and change people's views. We can all enjoy the universe collapsing into a paradox of self-contradiction if that turns out to be the case.


I think it's just a question of statistical expectation. The internet is a better television. "It can bring you the voices of people from all over the world" - some people hear this and cheer in joy, others shudder in horror. Both are right.

But who is more right, on average? The shudderers, I think. Sturgeon's law says 90% of everything is bullshit. So anything that brings more voices and more words and more thoughts without any biasing or filtering is opening the floodgates to a 90%-bullshit signal. That's a lot of bullshit to wade through, and not all people have enough time/skill/patience to do the signal processing.

So it's a good thing that we built lots of good biases and filters on top of the internet that make it a better signal, right? Right? (https://imgflip.com/memegenerator/322841258/Anakin-Padme-4-Panel)

Naaah, I'm just messing around. We built a few Frankenstein megacorps whose idea of diversity is writing a few randomly generated words in your bio and calling them your pronouns, not the contrarian users whom they ban at the slightest prompting by mobs or governments. We built YouTube, which seems to think that pointing in the general direction of a work is a copyright infringement. It's not even Sturgeon's law at this point; there are active forces of bullshit that exert immense amounts of effort to raise the 90% bullshit line and actively censor the 10% of interesting info.

In a sense, the original libertarian utopians weren't wrong about the Internet. It really is the case that **if** you treat the Internet as an 18th-century debate salon, where your interlocutor's name, gender, nationality, religion, and everything else is masked from you or actively obfuscated, and if your interlocutor treats the Internet the same way, then both of you are going to enlighten each other really well. The if's condition is just rarely true.

People don't treat the Internet as a sacred conversation salon; they treat it exactly like real life. A real life that is simultaneously scaled up and down: scaled up "horizontally" because thousands and tens of thousands can convene in the same "place", scaled down "vertically" because all interaction happens through narrow, low-context interfaces.

There is a bitter pill in there somewhere about how authoritarianism is right about people, and how they really don't know what is objectively in their best interests, but I need to go call my Twitter followers idiots right now for not seeing how obviously right I am.

(Just joking, I don't have twitter.)


I doubt Scott remembers / understands how starved for attention the average internet poster is (unlike celebrities).

From time to time there are half-joking posts about how a man gets 3-4 compliments in his life and remembers them forever, or how people start to react emotionally to cheering from NPCs in games.

A bot that can do just a little personalized research and imitate interest... it will be really powerful.


Maybe another angle to consider is people having their own chatbot, analogous to an advanced spell checker. Already, chat software like Slack has simple pre-selectable replies one can choose, such as "OK", "Thanks for this", "I'm on the case", etc. (quoting from memory, and not sure about the last one).

So imagine a super-advanced version of that, which could tailor suggested replies to received messages. What a boon for people who are not very literate, or dyslexic, or unsure how to felicitously phrase a reply. It could even act as a gatekeeper and automatically converse with incoming message senders, mainly to weed out bots.


I'm guessing conversational AI will be more dangerous when interacting with those who actually wish to interact with it. The Character.ai platform already has very high levels of engagement from people who actually want to befriend conversational AIs, mainly because they can't get the type of strokes they are looking for from actual humans. It doesn't take much of a stretch to imagine a chat platform created by, say, a Russian hacking group with the express purpose of befriending marginalized groups and slowly weaponising them. Vulnerable people are already falling in love with these language models, and what wouldn't we do for love?


On "the bots will all be hot women, so not-hot women are verifiably human" - I know that's a facetious comment, but women also use the Internet, and they're not all lesbians. If the "cutesy, folksy, confessional" voice of the ads I get on Facebook is evidence of anything, ads can be written to target other demographics too. "I was a mom, and I know how hard it is for moms who struggle with finding time for me-time, and I know we struggle with society's messaging about our body image. So I created this authentic, natural, small-batch, family-owned, body-positive, toxin-free, fair-trade [clothing, cosmetics, child-accessory] product". Whether it's written by an AI or a person, advertising has gotten adept at using that kind of language (trained on Reddit posts?), but I'm also getting pretty good at blocking it out now, reading a few sentences before going "Oh wait, that's an ad, never mind".

Feb 5, 2023·edited Feb 5, 2023

Bets 2 and 4 are kinda off: LIZARDMAN’S CONSTANT IS 4% - remember? ;)

https://slatestarcodex.com/2013/04/12/noisy-poll-results-and-reptilian-muslim-climatologists-from-mars/

tl;dr: there will always be 4% answering YES to even the most absurd question.

It might be under 4% at ACX - but a median of under 1% in any survey... doubtful; I am not at 60% - the math would be slightly different in your survey, maybe. Still, one of the 20 people I follow on Twitter might well be a bot in 2030. You should put a bot question in the next survey to check in advance for the lizardman constant. ;)


OK. So what happens when Coke creates backlash-inducing Pepsi-promoting bots?


Yep, that may be a promising avenue: instead of creating a bot army to argue for your side, create an army of really dumb and unsympathetic bots for the other side, constantly repeating the dumbest and least convincing arguments for their position while also expressing very unpopular opinions on other topics. Automated mass weakmanning, basically.


I haven't read all the comments, so maybe someone suggested this already. But... what about putting anti-vaxxers out of business by creating AI-generated anti-vax propaganda?

The whole reason for Alex Berenson's success is that few people are willing to write arguments that will inspire people to make bad medical decisions and possibly die. Most think that's unethical. But there is a market for people who want to read these arguments.

So... high demand, low supply, Berenson makes a million dollars a year.

If there were a million AI substack writers telling you that the vaccines are dangerous in random and different ways, would that remove the incentive?

Maybe not, because someone like Berenson would still rise to the top of the anti-vax Pareto distribution.

And it might not really be a victory, even if it worked, because people would still make bad health decisions based on what the bots say.

It might still take out the incentive at the lower level; fewer people would succeed with a smaller-audience anti-vax blog.

I think a similar dynamic could play out in other kinds of blogging, where AI takes out any chance to profit for the average participant.

At a personal level, I'm tempted to try to use AI to write politically divisive posts for profit. Like, to automate the "shiri's scissor"/"sort by controversial" process.

I see how much engagement that stuff gets. I don't personally have much interest in writing it. But can I train an AI to do it for me and profit from the clicks?

Maybe you could train an AI based on popular medium/substack posts to write posts that get similar engagement.

The problem is that, as soon as it's easy to do this, everyone will be doing it. And it will make it even harder for any average person to get anywhere with blogging or other forms of content creation. I don't think it will take out the high end earners for a long time, maybe even never.

AI will just be a tool in the arms race of everyone trying to win at search engine optimization, maximum engagement, maximum controversy, etc.


If Twitter is no longer one of the most popular microblogging sites in 2030, will predictions 2 and 3 resolve regarding the site that replaces it? Similarly, if the world has moved on from Substack, will prediction 5 resolve regarding same? Assuming the two models of Internet interaction haven't died (unlikely) or morphed into something unrecognizable (a little likelier) by that point.


Someone claimed that they created "a highly convincing small army of bots" to post on reddit with GPT-3: https://old.reddit.com/r/singularity/comments/wa9enf/it_took_me_1_day_to_create_a_program_using_gpt3/


Point 7 seems very interesting to me. 90% of my back-and-forth online interaction is with people I roleplay with on Discord, through play-by-post. Real people are great and all, but they have disadvantages chatbots won't have: chatbots will be able to message back quickly, and won't ghost me due to real life or silly objections.

As I'm typing this out, I realize it's just text-based video games, in the future. When someone invents this, it'll probably be a subscription I pay for, rather than ad-supported. A little creepy that it could replace most of my social interaction, but it's probably something I would do.


A couple of very small points, but to your first prediction, "If I ask ACXers in 2030 whether any of them have had a good friend for more than a month who turned out to (unknown to them) be a chatbot, or who they strongly suspect may have been a chatbot, fewer than 10% will say yes. I may resolve this by common sense if it’s obvious, or by ACX survey if it’s not: 95%," someone you've never met cannot be (by my definition) a good friend.

Moreover, if the chatbot apocalypse were to happen in the sense that the chatbots were excellent, you'd see the same thing as in the world where they sucked—either the bots would be too good to detect or they'd be too bad to become friends with.


This post has inspired a new heuristic for my personal sensemaking.

Whenever a newish thing worries all the writers who get their attention and pecuniary advantage from being worried about things, steelman the exact opposite outcome prediction.

This is only a 50% humorous idea.


Many words spilled about covid disinformation. You're still fighting the last war. How does the propagandabot affect the -next- disinformation target, where there aren't establishment deniers, or where those deniers' existing corpus doesn't directly address the target? Presumably the chatbots can spin up faster than the humans.


Upside: Obviously the solution for propagandabots is to have your own personal secretary bots who know enough about you to pose as yourself and interrogate any new contact requests. It's much cheaper to make an AI that estimates the genuineness and depth of someone's interest in the unique you than to make an AI that fakes such interest well enough. This may eventually replace all other forms of verification and thus democratize the Internet back to its early-2000s levels, where you open-heartedly responded to each comment because you had not yet been burned by spammers and crazies.

Downside: Automating human-like interactions, regardless of intent, has the potential of hugely accelerating the evolution of memes, including malicious ones. Such memes may infest both humans and bots, spread rapidly (especially among bots), and some of them may effectively disable the infected entities. Think QAnon on steroids.


> Could a million mechanical disinformers do somewhat better than one?

Definitely. Repetition of a claim from multiple sources with slight variation reinforces it as a fact in the human brain.

AI bots weaponized for advertising are going to make the next generation of spam annoying. Johnson & Johnson could scrape Twitter and respond with cogent, tailored posts briefly mentioning one of their many products that could help with whatever issues are mentioned in a particular thread.

Another thing to think about: what about chatbots to game polls for particular sides, issues or candidates?


To your knowledge, would that be illegal under US law?


I don't think anything I described would be outright illegal, unless it were somehow weaponized for voting in an election or referendum.


So we are simply dependent on the high risk of such activity leaking outside the company and tarnishing their reputation. Yikes.


Sliding in late to say I wrote about this exact topic not too long ago. I reach basically the same conclusion:

>But consistent with the notion of the big lie, the false ideas that spread the farthest appear deliberately made to be as bombastic and outlandish as possible. Something false and banal is not interesting enough to care about, but something false and crazy spreads because it selects for gullibility among the populace (see QAnon). I can’t predict the future, but the concerns raised here do not seem materially different from similar previous panics that turned out to be duds. Humans’ persistent adaptability in processing information appears to be so consistent that it might as well be an axiom.

https://ymeskhout.substack.com/p/near-term-risks-of-an-obedient-artificial


I found this part to be a bit poor: "Maybe this is too glib. I do sometimes see people respond to random bad-faith objections in their Twitter replies. But these people are already in Hell. I don’t know how chatbots can add to or subtract from their suffering."

As someone who used to spend a fair bit of time trudging in Hell, I think my reasons for doing so were genuine, even if the execution rarely brought about the world I desired. You pose the internet as primarily a collection of communities rather than one big, open marketplace of ideas, and I think this is where we disagree. I once really valued (and thought I could work to support) the internet in a marketplace sense, where engaging and debating with others below MSM posts was my part in trying to build a bridge across. I was young, I was naïve, but I still think this way of viewing the internet could be good, and that engaging could be good, and that a proliferation of bots could break down the potential for building out this vision.
