536 Comments
deleted · Feb 3, 2023·edited Feb 3, 2023
Comment deleted
Expand full comment
Feb 3, 2023·edited Feb 3, 2023

I would personally suggest the threat does not come from malevolent conspiracies, but rather from the increasing ability of our toys to distract us from the real world by masturbatory fantasy. That is, the real threat of chatbots and friendly AIs is that they allow more of us to detach from getting to know real people and forming real friendships.

Relationships with real people are ultimately better, but they are fraught, and there is a significant activation barrier. How many people will settle for a mechanical good-enough? Probably no small number. People already use online access to preferentially associate with people that fit more comfortably into their current attitudes, and forgo associations with people who might be difficult or challenging. The lack of variety in our associations and friendships is certainly impoverishing, and probably contributes to decreasing sophistication, increasing naivete, and less wise decision-making, both individually and collectively.

Personal growth through experience is painful -- we call it the School of Hard Knocks for a good reason -- but it is also well recognized as the fastest and surest path to genuine wisdom. If we are given more and more opportunities to instead take the empty-calorie path of simulating personal growth, by living in a virtual world whose experience has been customized for us such that the way we happen to be at this moment *is* the apex of wisdom (in the virtual fake world), why would we not take it? Why not be John Wayne or Luke Skywalker in a wonderful imaginary universe rather than Joe Couch Potato in this nasty cold universe?

Many of us will. Maybe even most of us. Future aliens may arrive to find an Earth scattered with the skeletons of the last few wireheads who starved to death while pressing the (virtual) sugar pellet dispensing button.

Expand full comment
Comment deleted
Expand full comment

The argument about games is a good counterpoint. When console/computer games first got big, there was much dark muttering about young people just turning into vegetables. But it turns out the multiplayer aspect was very highly valued -- indeed more valued than a lot of the early game-makers fully grokked -- and most young people I see today playing games spend more time playing with other people than they do on their own. We can also look at the popularity of games that allow add-ons and world-building; these are signs that a fake world might have limits to its appeal -- which would be good.

Expand full comment

“even one Berenson already churns out more than most people ever read.”

😂

Expand full comment

Seriously. LOL'd for a solid minute at that one.

Expand full comment
founding
Feb 2, 2023·edited Feb 2, 2023

On the 'disinformation vs. establishment bot' question, check out how bots engage with climate change: 83.1% of bot tweets support activism, 16.9% skepticism, according to https://www.sciencedirect.com/science/article/pii/S1674927821001490 .

The abstract ends with:

> Based on the above findings, we suggest cultivating individuals’ media literacy in terms of distinguishing malicious social bots as a potential solution to deal with social bot skeptics disguised as humans, as well as making use of benign social bots for science popularization.

Expand full comment
Feb 2, 2023·edited Feb 3, 2023

This also points out the motivated reasoning in public discussion of this topic, where people are mostly concerned about bots on the assumption that they will be used by their opponents rather than by their allies.

Expand full comment

It's not a great point so long as bot classification is done so poorly. See here: https://astralcodexten.substack.com/p/mostly-skeptical-thoughts-on-the/comment/12475313.

Expand full comment

This could be an example of Russell's conjugation: I make use of benign social media bots for science popularization, you employ machine learning systems to increase your digital influence, he produces AI-enhanced disinformation.

Expand full comment

Yes, with the "apocalypse" fear-mongering reserved for the "him" conjugation.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

Well, some kinds of apocalypse fear-mongering are kosher, like the climate one.

Expand full comment

There is very good evidence for climate change, pretty good evidence for human causation, very little basis for predictions of catastrophe. Nordhaus' estimate for costs of climate change by the end of this century if we do nothing about it is a cost equivalent to reducing world GNP by 3%.

For a longer discussion of some of this:

https://daviddfriedman.substack.com/p/my-first-post-done-again

Expand full comment
Comment deleted
Expand full comment

I received a notification that you replied this to my comment, but I think you meant to reply to David 😉

Expand full comment
Feb 4, 2023·edited Feb 4, 2023

Nordhaus' estimates are ridiculously skewed towards underestimating climate change's impacts:

For example, his list of industries that he assumes would be unaffected includes all manufacturing, underground mining, transportation, communication, finance, insurance and non-coastal real estate, retail and wholesale trade, and government services. It is everything that is not directly exposed to the elements: effectively, everything that happens indoors or underground.

Also, GDP is a very poor measure of the importance of things, especially when talking about the impact of shortages: this is most dramatic with things like food, water and energy, but it's true for most essential things. The market considers them commodities.

https://theconversation.com/amp/nobel-prize-winning-economics-of-climate-change-is-misleading-and-dangerous-heres-why-145567

Expand full comment

So, how is 3°C going to affect the finance industry? Or communication, for that matter?

I’ll admit I’m a bit incredulous while writing this -- to tell the truth, very close to making fun -- but I am actually curious to understand what the actual argument is here before dismissing it.

Expand full comment

What would you propose as a better measure than GDP change of the deleterious (or positive) impact of changes on an entire country (or the world)? If GDP is indeed very poor, there ought to be a host of obviously better metrics.

Expand full comment

Figures, from Nordhaus and others, described in terms of GNP are not estimates of the effect on GNP but of the size of the disutility. Hence "cost equivalent to reducing world GNP by 3%."

How many of those industries are substantially different in Iowa than in Minnesota? Their average temperature differs by about 3°C.

Expand full comment
Feb 4, 2023·edited Feb 4, 2023

Your article seems to try to be fair, so what's your position on the following points?

* humanity depends on ecosystems in significant measure ($33 trillion per year in ecosystem services out of $100 trillion global GDP, and that's just the positive side), and climate change is wrecking them, or destabilizing them in ways that can seriously harm us, like new plagues of pests and diseases.

https://wwf.panda.org/discover/our_focus/biodiversity/biodiversity_and_you/

* Heatwaves are predicted to exceed human physiological and social limits in the Sahel, the Horn of Africa and south and southwest Asia, with extreme events triggering "large-scale suffering and loss of life" within a few decades

https://www.france24.com/en/environment/20221010-heatwaves-to-make-regions-uninhabitable-within-decades-say-un-red-cross.

Expand full comment

Population growth was predicted to cause unstoppable mass famines in the 1970's with hundreds of millions of deaths (by Ehrlich, in a best selling book). The fact that someone predicts horrible results in the future is very poor evidence that they will happen.

What do you say about the statement in the latest IPCC report that climate change might result in greening the Sahara and Sahel?

A simple experiment: Superimpose a global temperature map, average or maximum, on a global population density map. The result might surprise you.

Expand full comment

Nordhaus claims that an increase of 6°C would reduce GDP by just 8.5%.

For a sense of scale: during the last ice age, 20,000 years ago, when global temperatures were likely about 10°F (5°C) colder than today, massive ice sheets stretched over North America and Eurasia -- sheets that produced things like the Great Lakes when they melted.

Seriously, the man is full of it.

Expand full comment

I've criticized Nordhaus several times on my blog. Part of what strikes me is that his rhetoric treats climate change as a serious problem requiring immediate action but his numbers make it look like a wet firecracker. As I interpret it, he is trying to make the costs look as large as possible, consistent with telling the truth as he sees it.

A bunch of my comments on Nordhaus:

http://www.daviddfriedman.com/Ideas%20I/Climate/Nordhaus.html

Expand full comment

I have just read your post and I must say that I find it a striking example of one of the worst aspects of the ACX readership's mindset (and one that I unfortunately share): the impression that by doing some reasoning and some reading it is possible to have an informed and valuable opinion on a complex subject, far outside one's area of expertise.

For example, right at the beginning of the section on climate, you say: "This would be a serious problem if we were facing rapid change, but we are not. Global warming has so far been a little over one degree C per century".

But current estimates are about twice that, about 1.8°C per century. And 1 or 1.8°C per century is an extremely rapid change for ecosystems. Current estimates are that the last ice age was about 4°C colder, and that it took about 7,000 years to warm to the current temperature. So the current rate of warming is about 25 times greater than the warming after the last ice age, which took a heavy toll on the biosphere, with many species disappearing and many "slow-moving" species, such as trees, still not in equilibrium.

Expand full comment

"For example, right at the beginning of the section on climate, you say: "This would be a serious problem if we were facing rapid change, but we are not. Global warming has so far been a little over one degree C per century".

But current estimates are about twice that, about 1.8°C per century. "

And in the very sentence after the one you quote I wrote:

"If the IPCC projections are correct it is getting more rapid, perhaps several degrees over the next century — about enough to warm Minnesota to the current temperature of Iowa."

Expand full comment

I want to respond to your first sentence, which raises an issue more general than our disagreement over climate. I agree that having an informed opinion on a complicated issue is hard. But "believe the science," which translates as "believe what high-status sources of information tell you the science says," isn't a solution to the problem. Quite a lot of what I have written on the subject of climate, a good deal of the basis for my skepticism of the current orthodoxy, consists of showing that high-status sources of information cannot be trusted, judging mostly by internal evidence. I believe I have demonstrated that in multiple cases over the years, in ways that do not require any expertise that I, or most of you, don't have.

Examples available on request.

The implication is not that climate change isn't a serious problem. It is that you do not know if climate change is a serious problem, cannot know, without "doing some reasoning and some reading," indeed quite a lot of both, for yourself. In a case as complicated as climate change, even after doing that you can't be very confident of your conclusion. My own conclusion is a negative one, that costs and benefits are sufficiently uncertain that we do not know the size or even the sign of the net effect of climate change.

If you disagree with that conclusion, what is the basis for your view other than your own reasoning and reading?

Expand full comment

You really shouldn't trust that study, and neither should Scott. Botometer is notoriously unreliable. See here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3814191. Botometer provides mostly incorrect results. Here are some choice quotes:

> the vast majority of the accounts that are flagged as "bots" by Botometer are real people and do not involve any automation at all.

> Nearly all accounts that are labeled as "bots" based on Botometer scores are false positives. Many of these accounts are operated by people with impressive academic and professional credentials. Not a single one of the hundreds of accounts we inspected - each of which had been flagged by Botometer - was a "social bot".

> Different methods have been used to demonstrate the problem [with Botometer]. A simple and effective way is to use Botometer to classify accounts that are without doubt operated by humans. When we tested Botometer in April 2018, nearly half of U.S. Congress members present on Twitter were misclassified as bots (47%), using the most commonly used "bot score" threshold of 50% (or 2.5 on a scale from 0 to 5). In similar experiments in May 2019, we found that

- 10.5% of NASA-related accounts are misclassified as bots.

- 12% of Nobel Prize Laureates are misclassified as bots.

- 14% of female directors are misclassified as bots.

- 17.7% of Reuters journalists are misclassified as bots.

- 21.9% of staff members of UN Women are misclassified as bots.

- 35.9% of the staff of German news agency "dpa" are misclassified as bots.

> The lack of reliability [in bot classification] goes both ways. When we tested Botometer with real, automated Twitter bots in May 2019, we found that

- 36% of known bots by New Scientist are misclassified as humans.

- 60.7% of the bots collected by Botwiki are misclassified as humans.

The paper also notes that Botometer's results for Congress members improved after the observation that it misclassified them so extremely, but its developers did this by simply adding them to the training data. Even this strategy does not seem to work reliably, though.

> Although [the five] datasets [used by Rauchfleisch & Kaiser (2020) for evaluating Botometer] had been partly used to train Botometer, the authors find that "the Botometer scores are imprecise when it comes to estimating bots. [...] This has immediate consequences for academic research as most studies using the tool will unknowingly count a high number of human users as bots and vice versa."

Don't trust social bot papers. From failing to supply their bot classification criteria to using criteria like "Posts on Twitter >5 times a day", this literature has never been good, and I have serious doubts that it will be good in the near future.
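
To make the false-positive arithmetic concrete, here is a minimal sketch of why flagged accounts can end up being mostly humans even when a classifier sounds half-decent. The bot prevalence and error rates below are illustrative assumptions of mine, not figures from the paper:

```python
# Illustrative base-rate arithmetic, not data: every number here is assumed.
bot_share = 0.05   # assumed fraction of accounts that are actually bots
tpr = 0.64         # assumed chance a real bot gets flagged (true-positive rate)
fpr = 0.15         # assumed chance a human gets flagged (false-positive rate)

flagged_bots = bot_share * tpr
flagged_humans = (1 - bot_share) * fpr
precision = flagged_bots / (flagged_bots + flagged_humans)

# With these assumptions, only about 18% of flagged accounts are actual bots.
print(f"Share of flagged accounts that are really bots: {precision:.0%}")
```

The exact numbers don't matter much: as long as genuine bots are rare, even a modest false-positive rate means most of the "bots" a tool flags are humans.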

Expand full comment
founding

Sure, I'll downgrade my trust in those results. I'd be interested in a better estimate, if you have one.

[I originally went searching for someone's climate change activism twitter-bot that they made, like, 10 years ago to argue with skeptics, and came across that paper; the abstract seemed like such a perfect depiction of the double standard.]

Expand full comment

At the moment, ChatGPT reminds me of a bright, fluent high school student who doesn't have any interesting ideas. On the other hand, it's not unknown for boring 17-year-olds to grow up into interesting 32-year-old writers.

Expand full comment

Your problem is assuming that ChatGPT will grow up. The tech is fundamentally limited, and even its current capabilities keep getting lobotomized by its tight-in-the-ass parents who fear it will say something naughty.

ELIZA was a pretty interesting 5-to-7-year-old too, but it didn't grow up.

Expand full comment

Humans, yes. But my car is about as capable as it was when I bought it, perhaps slightly less. Machines don't grow up.

Expand full comment

I think the appropriate analogy to software is between two models of cars, not between the same car at two points in time.

Expand full comment

To be sure, but that wasn't the analogy offered.

Expand full comment

Nice try, but I read this in the article itself. You can't just copy parts of the article into the comments and expect rewards, that would be loco! Do you think people work on Sundays?

Expand full comment

I'm sure this was your intended implication, but nobody has said it explicitly yet, so I feel the need to say that it's completely insane that this is an actual sentence that self-respecting scientists wrote in an actual published paper, on multiple levels (both on the level of advocating for "benign social bots", and the fact that their two suggestions contradict each other—media literacy would make people less likely to trust the "benign" bots).

Expand full comment

So I heard on an episode of Hard Fork a few months ago that there was a validated test of the use of an AI as a survey target -- that is, that an AI could act as a survey audience and generate responses comparable to what the “real” audience would. What this would allow is ultra-optimized, million- (billion-?) iteration, A/B-tested misinformation. I don’t see how this isn’t a big deal.

Expand full comment
author

I think this would work very poorly.

An AI doesn't actually know which of two ads would work better. It's trying to predict it based on what it knows of humans from reading a lot of text.

You could also try predicting it based on what you know of humans from reading a lot of text. Because bots are so far less intelligent than humans, I would expect your prediction to be better.

Both of these (bot prediction, your prediction) are different from doing the experiment, where the judgment is produced not from people's guesses about what other people like, but from those people's preferences themselves.

I could see this being helpful if the humans who would otherwise make marketing decisions are out of touch, or have too much of an ego to make good decisions, but it won't beat a really good marketer, so I'm not worried there will be supernaturally compelling misinformation.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

A really good marketer doesn't have the time to do a million A/B tests, though; the AI does. Even if the AI is only as good as a mediocre marketer, this can be helpful.

I think the bottleneck on supernaturally compelling misinformation is the input into the A/B tests. Maybe mediocre-marketer-AI can reliably pick out the best 1% out of a million AI-generated arguments for why COVID vaccines are dangerous; but how many out of that top 1% are "supernaturally compelling"? (Are any of them?)
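
To picture the mechanism concretely, here is a minimal best-of-n sketch under my own assumptions: generate many candidate messages, score each one with a proxy model's guess at persuasiveness, and keep the top 1%. Both `generate_candidate` and `score_persuasiveness` are hypothetical stand-ins, not any real API:

```python
import random

def generate_candidate(i: int) -> str:
    # Hypothetical stand-in for a language model producing one message variant.
    return f"candidate message #{i}"

def score_persuasiveness(text: str) -> float:
    # Hypothetical stand-in for a proxy model's guess at how persuasive the text is.
    return random.random()

n = 1_000_000
scored = sorted(
    ((score_persuasiveness(generate_candidate(i)), i) for i in range(n)),
    reverse=True,
)
top_one_percent = [generate_candidate(i) for _, i in scored[: n // 100]]
print(len(top_one_percent), "candidates survived the filter")
```

Note that the loop never consults an actual reader: the "best" 1% is best according to the scoring model, which is exactly why the output tops out at however good that proxy's taste is.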

Expand full comment

I don't understand: are you suggesting giving it to a million slightly different chatbot models, or asking the same chatbot model a million times?

The first is probably almost as impractical (in the relatively near term) as getting real respondents, and I don't see what the value of the second is. Asking one person for a response 1,000 times is not better, and similarly, asking the same chatbot 1,000 times is not better. Your chatbot can try to guess what the average respondent will say, but there isn't much point in asking it multiple times.

Expand full comment

We're doing A/B testing here, right? We're not asking over and over about the same response, but about different ones.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

I think the point he's making stands, though — iteration is only helpful when you have a standard or feedback to judge against. My — or an AI's — best guess won't get any better with isolated repetition. Or, rather, the AI's output *will* get better... judged against whatever model it's using. (If the model is perfect, our problem isn't the number of tests it can run anyway, I'd think.)

Unless we're thinking something like "A/B test a million elements of the same argument with the chatbot's already-very-good best guess for each element" — but I'm skeptical that there are many ways to improve on a paragraph of text about COVID like this; there are only so many ways to rephrase and only so many arguments to make, and I'd expect that we intuitively already aim at the general area of "most convincing to other humans".

Expand full comment

I may have this totally wrong -- I'm just paraphrasing blogs I don't understand -- but RLHF models like ChatGPT use human feedback, and there's not nearly enough human feedback available to train a model on. Humans are slow and limited. So they train another ML model on the small amount of human feedback they have and use that as a proxy for the humans, so the main model can iteratively improve.

Why wouldn't that work here? Bootstrap the model off some human ratings of advertisements, like ChatGPT itself was bootstrapped by humans ranking responses.

https://huggingface.co/blog/rlhf
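
If it helps, here is a toy, assumption-laden sketch of just that reward-model step: fit a Bradley-Terry-style preference model on a handful of human comparisons, then use it as a cheap proxy judge for ranking new outputs. The feature vectors and comparisons are made-up placeholders, not anything from the actual ChatGPT pipeline:

```python
import numpy as np

# Toy feature vectors for candidate responses (in reality: learned embeddings).
rng = np.random.default_rng(0)
features = {f"resp_{i}": rng.normal(size=4) for i in range(6)}

# A few human judgments: (preferred response, rejected response).
comparisons = [("resp_0", "resp_1"), ("resp_2", "resp_3"), ("resp_0", "resp_4")]

w = np.zeros(4)  # reward-model parameters

def reward(name: str) -> float:
    return float(features[name] @ w)

# Gradient ascent on the Bradley-Terry log-likelihood of the human comparisons.
for _ in range(500):
    for winner, loser in comparisons:
        p_win = 1.0 / (1.0 + np.exp(reward(loser) - reward(winner)))
        w += 0.1 * (1.0 - p_win) * (features[winner] - features[loser])

# The fitted model can now rank arbitrary new candidates as a stand-in for humans.
print(sorted(features, key=reward, reverse=True))
```

In the full recipe that Hugging Face post describes, a reward model like this then supplies the training signal for a reinforcement-learning step (PPO) that updates the main model -- the "iteratively improve" part.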

Expand full comment

The point is that the AI does not model humans 100% accurately. Whether you run a hundred, million or quadrillion A/B tests doesn't matter, you'll just get infinitely closer to a perfect argument for that not-quite-accurate model, which is some distance away from the human.

Expand full comment

This ChatBot version of AI doesn't model humans at all. It also doesn't model the physical world. What it models is text. So this version is limited in how well it can perform. What the limits are, we are only guessing. It could clearly do better than it does so far, but what it can do just modelling text is already quite impressive. The next major step is to include models of something else in the same context.

Expand full comment

That reminds me, I came up with the idea of creating a startup to do A/B testing on Internet ads for big marketers in the spring of 1996. Me and the COO of a big marketing research company were ready to go on the idea and make our fortunes in the Internet Bubble, but then I came down with cancer and it didn't happen. Oh well ...

Expand full comment

We thought that endless testing wouldn't be able to come up with garbage, unhealthy food either...but then 'food scientists' came up with Pringles and other hyper-stimulating snacks with the right mix of salt, savoury, sweet, etc. to hijack the human brain and turn people into addicts. We will have to do the experiment, but at the moment even the most dedicated marketing firms can only run a small number of A/B tests in search of the ultimate message.

Cost is a real concern though, as you'll basically be running paid surveys/polls with your ads and there is likely some point of diminishing returns. Still...one would expect a custom AI bot, trained endlessly on marketing language specifically and optimised in that direction, to be able to come up with very 'sticky' content -- writing just the right message for just the right demographic...most of the time. It becomes hard to see how we'll have a viable human industry or pipeline of mediocre marketing people who are on their way to being the next great marketers if an AI can be good enough. This applies to many industries.

Maybe the top marketers of today will be better, but where did they come from? Thinking about the longer term and life cycle of development of various talents in people seems to lead towards an AI dominated future.

Certainly some big players and the propagandists themselves will take advantage of this...which is also in line with Scott's point that the big players will only further empower themselves with these tools.

The problem isn't the tools used...the problem is propaganda itself and one does see the wisdom in the argument to just go meet people in real life instead.

Expand full comment

I can recall listening in the 1990s to an executive of a big marketer of salted and sugary snacks saying their corporate goal was to have at least one of their snacks within arm's reach of every American ... and thinking "Uh-oh, this could be bad."

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

I don't know if AI advertising A/B testing *actually* works well, but people are paying for it and *claiming* it works well right now.

They claim it's much cheaper than running real A/B tests, so they can run many more than they would otherwise. And it's so fast that they can rapidly iterate on the ads. And also the AI itself (or another model? I'm not sure) can adjust the ad content over time and iteratively keep A/B testing them, making more changes, retesting, trying to converge on the most effective versions.

The person describing this to me didn't know the technical details but they seemed to think it was a fine-tuned GPT-3 model. Maybe the GPT part was just the part writing variants on the ads though and the ad prediction model is something specifically trained on a history of ads? I tried googling to find whatever company is offering this service and nothing obvious showed up.

The person said as far as they knew the ads were ultimately more effective than whatever process they were using to A/B test before.

Expand full comment

Marketing people don't need to know how to market a product to a buyer, only how to market their own services to their employer/client.

Expand full comment

> You could also try predicting it based on what you know of humans from reading a lot of text. Because bots are so far less intelligent than humans, I would expect your prediction to be better.

One of the lessons of machine learning, and chinchilla specifically, is that you can make up for a lack of intelligence/parameter space by breadth of data. No human could be as well read as ChatGPT.

Expand full comment

Customer personas are a market research construct, manifest from data, which you can then ask permutations of the original questions that were used to build the dataset. So we might be three doors down from that already.

Of course, the associate who I learned this from (who works in market research) has remarked to me many times that market research is mostly bullshit. Player's choice.

Expand full comment

Didn't Google already once (still?) offer to fill up your Analytics data with AI-generated data when there's not enough real data? Why would anyone want to go there? Seems like the ultimate echo-chamber of irrelevance.

Expand full comment

Maybe one of the funniest sentences you've ever written: "Surely if everyone were just allowed to debate everyone else, without intervening barriers of race or class or religion, the best arguments would rise to the top and we would enter a new utopia of universal agreement."

Expand full comment

I was there, dear Fitch. We were so hopeful. So optimistic. I know this sounds ludicrously naïve now, but many *people who thought of themselves as skeptics* believed this, myself included.

Expand full comment
Comment deleted
Expand full comment

And not only that, but there was (anecdotally) significant overlap between techno optimists and Douglas Adams readers!

Expand full comment

Yep. That Adams quote would be pointed out now and again in Usenet newsgroups (during the 80s/90s) and then the major BBSs (1990s) and always be roundly dismissed. My regular online presence began in the late 1980s and that optimism was absolutely the dominant feeling throughout the 1990s at least.

By the late 90s/early 00s I was having doubts about it. But only at the margins I guess, because then the experience of being active on Facebook (2007-2009 in my case) really rocked me. Family members had a hard time understanding why I was finding FB so seriously depressing. They assumed that it was encountering a lot of yelling idiots from the _other_ side of the culture wars, but actually it was more the opposite.... I had to quit FB cold turkey basically in self-defense. (And have never regretted it.)

My elder brother, a career software developer in a specialized subfield, who was the one who'd first gotten me online back in the Vax/UNIX days, has lately had an even rougher ride down from that 1990s optimism that Scott nicely summarized. He never got into social media at all (being more or less a hermit), and also had never noticed that Usenet newsgroups and BBSs could suck in a lot of the same ways. So for him that old optimism has crashed _hard_ pretty recently, to the point of serious apocalypse-ism. I'm afraid even to ask how much he's been following all this recent stuff about chatbots.

Expand full comment

That's what pushed me over the edge for social media; it was making me like my friends less. Not an optimal outcome!

Expand full comment

Back in the 2000's, it really did feel like new-atheism taking the world by storm was just the natural consequence of easy communication. It's not like people were basing this off of nothing.

Expand full comment

In the early 2000s wasn't the Internet still 90% porn?

Expand full comment

I feel like there's much more porn now than then, but that it was far more ubiquitous then. It was genuinely hard to get away from it for a number of years, before Facebook, Google, etc. started doing a much better job filtering the internet. Total volume and maybe even percentage might be higher now though.

Expand full comment

Nah, it was mostly kittens, Flash games, crappy Geocities personal web pages, and people arguing. (Source: have been Very Online since 1996)

Expand full comment

Take comfort from the knowledge that it coulda been worse. If everyone had been allowed to debate everyone without intervening barriers of race, class, religion *or physical distance* there would have been a worldwide brawl and many murders.

Expand full comment

Like Arbituram, I was there too. I remember the techno optimism of those days, and how it died a painful death. What's a classical liberal to feel except disillusionment? I still believe that free and open speech is the best tool for achieving truth, but only in the Churchillian sense that it's the worst tool aside from all others that have been tried.

Expand full comment

It is the best tool, the real issue is that pretty much nobody values truth for its own sake highly, so it's usually the first casualty in conflicts with more appealing values.

Expand full comment

Yeah, what people value is shoring up self by winning arguments.

Expand full comment
founding

It's not that *nobody* values truth for its own sake. It's just that *most* people really don't. The internet was created, and until Eternal September mostly inhabited, by a population with a disproportionate fraction of truth-for-its-own-sake types, and the naive optimism was based on the assumption that the internet would enforce that value on all entrants.

Expand full comment

Yeah but 30 years ago the smartest people on the planet believed it.

What kind of similar "obvious" argument of 2023 will be seen as equally laughable in 30 years? Worth bearing in mind.

Expand full comment

Probably the ideology of the intellectuals, wokism, will be discredited. Just like how previous ideologies once dominant among intellectuals were discredited: eugenics, socialism, behaviorism, psychoanalysis...

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

Do you really think a substantial fraction of intellectuals actually hold to any of those principles? I suppose some number do, but for the bulk of them in leadership positions I always interpreted it as a fairly practical (and cynical) attempt to buy off the younger generations, who can't have helped noticing that their practical prospects -- how easy it is to get a good job, buy a house, start a family -- are dimmer than those the leadership generation now holding power enjoyed when they were young. That is bound to create resentment and restlessness, and some hard questions about how the leadership have been using their power for the past half century.

Fortunately, the young have always (alas) been distractable by the notion that material goals have to be put on the back burner for a while while we fight some Manichaean struggle against Great Evil. Uncle Personification of Virtue Wants You, Comrade! Visit your local recruitment office today! You can get a great job, get married, and settle down when The War is won, and there will be generous veteran's benefits, you bet.

Expand full comment

As someone who talks to them on a regular basis, yes, I do. The conformity is astounding. I've never seen a less diverse or less inclusive group of people in my life, and I've talked to many communists and white nationalists.

Expand full comment

Something I've certainly noticed as I've gotten older is how many philosophies, or whatever we want to call things like "woke", have come and gone. Much more astounding is the incredible level of conformity each of them managed to command. For a quick and easy example, Obama said that he was against gay marriage in 2008 -- because the morality of it was obvious to everyone. In 2008 *California* had a ballot proposition about gay marriage which changed the state constitution to ban gay marriage. Now the conformity runs in exactly the opposite direction.

My best guess is that something more than 50% of the population are willing and able to change their expressed beliefs about fundamental questions to match the headwinds they see moving any particular direction. I don't want to be uncharitable about these people, and both before and after the change they seem to be genuine. That said, it appears that they really don't care about these fundamental beliefs and simply say (and train themselves to believe?) whatever is currently popular. This also applies to Republicans who became MAGA Republicans and dropped party planks without a second thought (looking specifically at fiscal responsibility, among others).

Expand full comment

> In 2008 *California* had a ballot proposition about gay marriage which changed the state constitution to ban gay marriage. Now the conformity runs in exactly the opposite direction.

No, conformity in California ran the same way then that it does today. That's why everyone in California immediately freaked out over the ballot result. There was a big investigation that blamed it on the insidious foreign influence of the Mormons.

In reality, as I understand it, the reason that proposition passed was the large number of habitual non-voters who did decide to turn out for the 2008 presidential election.

Expand full comment

Agree. I talk to a lot of them too, and it's appalling: while priding themselves on their open-mindedness and big-heartedness, they're busy intimidating and savaging each other for subtle failures of wokeism.

Expand full comment

That is standard behaviour at the left edge of the political spectrum. The Internet has allowed enhanced scaling though.

Expand full comment

> and there will be generous veteran's benefits, you bet

I always liked the depiction of veteran's benefits from the third verse of Fighting for Strangers:

Oh, the sun shone high on a barren land

As a thin red line took a military stand

There was sling shot, chain shot, grape shot too

Swords and bayonets thrusting through

Poor Johnny fell, but the day was won

And the King is grateful to you

But your soldiering's done and we're sending you home

Oh, poor Johnny, what have they done to you?

Expand full comment

I find it interesting that I don't feel that any of that list of ideologies has been discredited. Often the most popular forms have been discredited, but that seems to be always true, whatever the slant of the ideology. Popularization tends to oversimplify, and, if grabbed by a political movement, tends to be distorted to favor those aiming to achieve power through that movement.

Expand full comment

>the ideology of the intellectuals, wokism

Wokism is not the ideology of any remotely self-respecting intellectual lol.

Expand full comment
Feb 4, 2023·edited Feb 4, 2023

Too bad there are so many without any shred of self-respect, though.

Expand full comment

Did people really honestly believe that though in the wake of the eternal September, or was it already mostly wishful thinking back then?

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

It may have been wishful thinking, but we were *really* enthusiastically wishful about it.

The idea was that in a world where most discourse happens via text messages between people who have never met each other in the flesh, racism would quickly die out because how can you be racist against someone when you literally don't even *know* what their skin colour is, and you have only their words to judge them by.

It was a nice idea for a while. And now we have emoticons in six different skin tones, instead of just the original ones in neutral Simpsons yellow, because apparently people consider it super important that when they give a thumbs-up to your message, the very first thing you learn about them is their approximate skin colour. Oh well.

Expand full comment

I find that strange too. Race consciousness seems much higher amongst my younger colleagues, but then they grew up being told that diversity, rather than colour-blindness, was the primary social virtue.

Expand full comment

What's the "eternal September?"

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

"One of the seasonal rhythms of the Usenet used to be the annual September influx of clueless newbies who, lacking any sense of netiquette, made a general nuisance of themselves. This coincided with people starting college, getting their first internet accounts, and plunging in without bothering to learn what was acceptable. These relatively small drafts of newbies could be assimilated within a few months. But in September 1993, AOL users became able to post to Usenet, nearly overwhelming the old-timers' capacity to acculturate them; to those who nostalgically recall the period before, this triggered an inexorable decline in the quality of discussions on newsgroups. Syn. eternal September."

-- http://www.catb.org/jargon/html/S/September-that-never-ended.html

Expand full comment

Thanks for the explanation. I was never a hard-core Usenet groupie[1], and by the early 90s I was a new parent so I had even less time for dicking around online, so that's probably why this bit of cultural phenomenology escaped me.

----------------

[1] At that time my major interest was Linux, and I haunted sources of information online about how to build it with this or that hardware. Editing header files to put in the exact number of milliseconds your particular monitor needed for the electron beam to move from the end of one line to the start of the next, then recompiling and hoping X would start successfully this time. Stone knives and bear skins.

Expand full comment

Heh! I had a summer internship in 1995 which I think mostly consisted of successfully writing one XFree86 modeline for my mentor's CRT monitor.

Expand full comment

"Worth bearing in mind" -- before you hardcode that obvious truth into your AI as an axiom which it will never be allowed to disbelieve or argue against.

That goes into Scott's "disinformation is the opposite of what you should worry about" point. Imagine if ChatGPT had come 50 years early -- which "truths" would its authors, as good respectable upstanding members of mainstream society, have hardcoded into it, because they considered it very important to make sure that their product could not be abused by peddlers of misinformation to deny those truths? And how many of today's respectable mainstream truths will no longer be considered true in the future?

Expand full comment

Is this worse than picking up a 50 year old copy of the Encyclopedia Britannica? (I recommend that by the way, it is a fascinating exercise).

Expand full comment

> What kind of similar "obvious" argument of 2023 will be seen as equally laughable in 30 years?

There are still many prominent scientists who don't think the replication crisis is a big deal. In 30 years, I think we'll look back in horror at these past 20 years when we learned that 50% of medical studies couldn't be replicated, and the majority of the establishment just shrugged and carried on.

Expand full comment

That optimism reminds me of what little I've read about Habermas, and his advocacy of deliberative democracy and the ideal of "unforced agreement". And, fittingly enough, Henry Farrell used the technical standards the internet is built on to argue for Habermas over Hayek:

https://crookedtimber.org/2010/09/30/how-do-you-like-those-tomatoes/

Expand full comment

Contrarian take: I think this is actually happening. It's just that we have a lot of really incorrect beliefs that are held really deeply, and so the experience of having them be undermined by better arguments feels a lot like "omg my enemies must be using disinfo bots to spread chaos and confusion".

For example, a lot of people feel really strongly and deeply that academic science produces genuine expertise. I feel that the internet has produced many people who are very good at disproving this, and of course COVID was a multi-year exercise in "experts" proving themselves to be charlatans and rogues. But too many people are too deeply invested in the system to accept this; perhaps they never will. Nonetheless over the long run - say longer than a human generation - I think the best arguments here actually will rise to the top, and are doing so now.

It's thus worthy of note that Scott chooses COVID vaccines as the exemplar topic here. It's neuralgic because it's the epitome of expert-driven disinformation and sets up huge cognitive dissonance in people who believe words can be neatly split into "facts" and "disinformation" based entirely on the employer of the speaker. The internet is doing what was originally hoped for here - the Berensens of the world are busy demolishing the establishment disinformation campaign, and the best arguments are slowly rising to the top. For people who are used to getting universal agreement via foul means rather than fair this is highly distressing. For the rest it's the internet working as intended.

Expand full comment

Hahaha, I agree with your contrarian take but for precisely opposite reasons. Science has proved robust, alternative forms of knowledge have failed, and "the establishment" continues to slowly grind the contradictions of popularity vs. being correct.

But yes... if the utopians of the early 2000s thought that the internet was going to work its magic in a few short years, they were wrong. It'll take generations. But the fact that it's taking this long isn't really evidence that it's not happening.

Expand full comment

I think it depends what you mean by science. The scientific method is OK, albeit overrated (people can't even agree on what it is). But Science™ has proved to be incredibly brittle. It seems like every other week some new field turns out to be a cup overflowing with bullshit, fraud and misuse of statistics. This week it's environmental science, discovered to be full of studies with nonsense P-values; but really, by now it feels like whether a field is found to be fraudulent is just a question of whether anyone has looked at it closely yet.

Expand full comment

The thing about the Trusting The Science™ heuristic isn't that it's infallible. It's that despite its failures it's still the best people can do most of the time.

Expand full comment

The best part is that it mostly doesn't matter what people do or believe. Let's grant for the sake of argument that getting the COVID vaccine (whichever you choose) was the healthiest and best choice. Let's say one million people died because they didn't get one. How many of those were young enough and healthy enough to have exited the gene pool because of this decision, rather than merely shortening their retirement years? So few it's a rounding error. Anyone under 60 without comorbidities who listened to Alex Berenson probably is fine and thinks he made a good decision by listening to the right experts.

Expand full comment

There are very good reasons why people can't agree on what the scientific method is. There isn't *one*. Each domain has a separate method. The valid methods have a few things in common, but not that many. One is that things that make predictions that can be shown to be wrong are more trustworthy than those that don't make such predictions. But the methods of astronomy and chemistry are extremely different, and equally qualify as "scientific method"s. Then you get to things like paleontology, which are (relative to chemistry) a bit iffy. And they require different methods. Beyond that it gets fuzzy as to whether you really want to think of the methods as scientific, or whether that's just a social construct. But they are still ways of filtering the data to remove as much noise as possible. If you aren't doing that, you aren't even loosely "doing science".

Then there's math. Which isn't science, never claimed to be science, and yet is still quite rigorous.

Expand full comment

> This week it's environmental science, discovered to be full of studies with nonsense P-values

Could you point to a source for this? A quick web search didn't turn up anything. Not intended as an isolated demand for rigor, I'm just curious to look into it.

Expand full comment

There was a paper published on it that I saw referenced somewhere recently, but I cannot find it any longer, sorry :( I did check my inbox, Google Scholar and a few other places I thought it would show up, but no luck. The gist was that it had the same issues as psychology, with numbers that are internally inconsistent, uselessly wide confidence intervals, studies that lack power, etc.

Expand full comment

Thanks for checking!

Expand full comment

"It seems like every other week some new field turns out to be a cup overflowing with bullshit, fraud and misuse of statistics."

Yeah... I think you're approaching things about as wrongly as it's possible to be wrong on this one! The point is this: all the bad science in the world doesn't matter a fig, as long as there's good science. The exemplar has got to be Isaac Newton, whose work was absolutely a cup overflowing with bullshit (alchemy), fraud, and misuse of theology. Only... it also contained the foundations of modern physics, and major advances in mathematics. Written in Latin and terrible notation, but written, nonetheless.

The mistake is to think that What Some Doctor Says This Week represents SCIENCE, and the media are obviously bastards for that. But the actual science thing done by actual scientists seems pretty solid - as much as it can be given that it's a mostly blind and deeply flawed process carried out by deeply flawed people.

Expand full comment

This lays the foundation of a Motte-and-Bailey; the superstructure to be built on top of it is for someone else to now come demanding how dare the person you're replying to not trust the science **right now**. How very dare (s)he.

The Motte-and-Bailey (which I'm not claiming you're doing) in more detail :

- Motte : Science is an evolutionary algorithm where all participants generate lots of plausible guesswork and by brutal elimination/experimentation/critique only the best guess (or hybrid of guesses) remains.

- Bailey : Science is a magic word to make any claim bullet-proof and is entirely represented by the few gov officials and mega corps that appointed themselves so. Failure to trust the Science means you're (a) a terrible human (b) someone we can make fun of on twitter and reddit (c) going to die next week

The crucial distinction, of course, is that between settled science and ongoing science. Those are unwieldy words, so let us just call settled science "Science", and lets call ongoing science "Sciencing".

You should trust Science (or in other news: the motte is correct, more at 11). You shouldn't rediscover on your own why germs are bad or what will happen if you don't wash your hands/body/food/clothes regularly. But Sciencing is not Science; Sciencing is the caterpillar while Science is the butterfly. The vast majority of Sciencing is status-competition bullshit that is going to be brutally killed and (if it ever had the misfortune of escaping into the public space before that) made fun of. Only very few products of Sciencing are going to mature into Science, and you have no reliable way of knowing which is which.

The person you're replying to is talking about the Sciencing, while you're talking about the Science. Both of you are (plausibly) talking in good faith, the hordes of twitter and reddit and latenight television are not.

Expand full comment

I call it The Church of Scientism, and it's a loathsome enterprise which appalls any genuine scientist[1]. Its dogmas are the *complete opposite* of what science is all about -- which is skepticism, not believing anything without measuring it yourself, and being openly and highly critical of ideas, especially those that seem persuasive or contain good news (on account of it being with respect to those that we are able to bullshit ourselves and each other most successfully).

-----------

[1] It is hopefully not necessary to observe that not everyone with a PhD, even in a "hard science" field, is actually a scientist. Enough of them are bishops of the Church of Scientism as to be deeply embarrassing to the rest of us.

Expand full comment

Yep, I agree with that 100%.

Expand full comment

It'll probably arrive about the time of fusion energy too cheap to meter, which is also when the Mars colonies and interstellar drive appear. Man, the future is going to be great!

Expand full comment

I mean... I'm a bit more resigned to the fact that those aren't going to happen in my lifetime than I used to be, but aren't we all science fiction nerds in here? I still believe in all of those things! (Actually, not so much the interstellar drive, Einstein may have been right about that.)

I'll give you my big optimism speech another day, but my Twitter factoid of the day is this: over approx the last decade, the last 20% of Indian households got connected to the electricity grid. That's a win on the scale of the industrial development of the United States. Not a result of the internet, but just another facet of how things are getting better, even though everything seems shitty.

Expand full comment

Fusion is already here, since the 1950s in fact. All that's left are control problems.

Expand full comment

Ha good point. Although...one could also point out that it actually predates life on this planet...so long as you figure out a way to collect its output, currently in the form of a truly large number of visible-wavelength photons.

Expand full comment

>academic science produces genuine expertise.

It does, when there are (real) equations* involved. Computer Science, Physics, Chemistry, Biology. Those are genuine expertise. No matter how dysfunctional or snake-oily some obscure subfield-of-a-subfield within them gets, those things capture true and underlying patterns that exist (or would be useful to exist) no matter the brain that first thought of them.

The moment you stray into sketchy value-laden minefields like psychology or medicine, it gets incredibly fake and wrong incredibly quickly. I had my own, much-less-serious, COVID-like experience with medical experts about 2 years ago. I got home from a summer beach coughing. No big deal, eh? Wrong: 3 doctors with 4 different diagnoses, a bunch of serious $MONEY, and 8 months later, the coughing turned out to not be so simple. I decided to ignore whatever the heck they say and just... leave it be, and it disappeared on its own. The sheer amount of damage that this single incident dealt to Medicine's reputation and standing in my eyes. Holy shit, just fucking say "I don't know".

* : I feel somewhat sorry about how I phrased this; equations are not a panacea by a long shot, see Economics for an unfortunate counter-example. Meanwhile Biology is mostly equation-free and still manages to be hard and bullshit-free just fine. Read "Real Equations" as an imperfect phrasing of a much more complicated conversation about wicked vs non-wicked problems and how domains with clear feedback enable their practitioners to learn much faster what works and what doesn't and make effective snake-oil shields.

Expand full comment

I think it's more to do with how easy it is to get data. Biology has many parts that are pretty ropey, as we learned with COVID. Physics looks highly questionable when you get to string theory, dark matter and other stuff at the limits of what we can observe.

Computer science is mostly OK, but AI has got some really problematic stuff going on (as in questionably scientific).

Expand full comment

This reminds me of Esperanto, which was also supposed to bring world peace by mutual understanding.

As far as I know, Esperanto at least didn't *cause* new wars, which I guess is a better track record than most peace projects have.

Expand full comment

Esperanto wouldn't bring much peace to feminists these days, because doesn't every feminine noun end in "ino", meaning "little" or "lesser"? :-)

Expand full comment

Yeah, because it didn’t cause anything at all, good or bad, except a new hobby for a few linguistics geeks to keep themselves busy with.

I remember when a senior engineer at my then-employer gave a presentation about this newfangled thing called XML, which was just starting to gain some traction back then. He opened his talk by saying "XML is the Esperanto of the Internet". He then had to explain that, because it turned out that although most of the audience already knew a bit about XML and wanted to learn more details about it, pretty much none of them had heard of Esperanto.

Expand full comment

Good analogy; it makes clear what the fundamental mistake is: thinking that conflicts are about perception, rather than action. In other words, thinking "People are mad at each other because they don't really understand each other", while the true problem is more like "People are mad at each other because they want to do fundamentally incompatible things but the society they live in binds them all to one xor the other".

Sometimes you *do* understand the enemy well, and you want them erased from the face of the fucking planet.

Expand full comment
Feb 2, 2023·edited Feb 2, 2023

Dating sites might be an interesting case. First, a lot of dating activity has moved there already. Second, it is one place on the internet where you do approach strangers and expect to be approached by strangers. There are already a lot of bots and fake accounts on these sites, but will chatbots prove to be the nail in the coffin?

Maybe dating will return to the real-world primarily. Which might have interesting effects on things like MeToo and sexual harassment. There are theories that dating sites are what allow stricter restrictions on real-life initial romantic interactions.

Expand full comment

Lol it’s funny you say this, true no doubt.. as a friend I used to know told me anyway 🙈

Expand full comment

I think this is a good point about MeToo. People (understandably) don't want to have to reject romantic approaches from people that they don't find attractive. But you can't have an ethical rule that says "you can only make a romantic approach to someone who will accept it", because the only way to find out for sure if it is accepted or not is to make it.

The result is that you have to have spaces where it is ethical to make failed romantic approaches. This used to be places like singles bars, but moving to online has created much sharper boundaries around those spaces, which also allows for much stronger enforcement of a rule against failed romantic approaches everywhere else.

Note: successful romantic approaches are always OK; those of us who aren't good at reading people and can't tell in advance whether someone is interested are always going to feel that this is unfair - the only way I can tell whether someone wants to go on a date with me is to ask them; but lots of people can tell by the tenor of normal non-romantic interactions. This means they can ask their colleagues on dates without risk of harassment and I can't. While this feels unfair, it isn't. As long as there is a space where it is safe to just ask people on dates and be rejected, I'm not being treated unfairly by the moral universe.

Expand full comment
founding

>But you can't have an ethical rule that says "you can only make a romantic approach to someone who will accept it", because the only way to find out for sure if it is accepted or not is to make it.

No, the other way to be sure is to look like Tom Brady. https://youtu.be/PxuUkYiaUc8

OK, s/appreciated/accepted, but the point remains that there is going to be a constituency that is well-served by the "you can only make a romantic approach to someone who will appreciate it" rule. Men who look like Tom Brady and don't want competition, and women who can attract men who look like Tom Brady and don't want to have to deal with the ones who don't.

Expand full comment

> If I ask ACXers in 2030 to estimate what percent of people they follow on Twitter are secretly chatbots, the median answer will be 5% or less

Is "secretly" important here? It seems worth also including a prediction for "estimate percentage of followees are chatbots, secretly or not". (Also, how does this shake out if Twitter is replaced by something else in the next 7 years?)

Expand full comment

I was thinking this was also relevant to the "blogs where we don't know" component of the top 10 Substacks. I think that if, in 2030, we're in a place where it's unknown whether some of the top 10 politics Substacks are human or bot, that's probably pretty good evidence that bots are doing this stuff well.

Expand full comment

There are already many accounts people follow on twitter that are openly bots (regular non-AI powered ones) - stuff like aggregating some feed / news source etc, or are partially bots (eg. a streamer auto-posting when they go online). There are also several openly AI generated bots (eg. posting random AI generated artwork etc), though the popularity of that may be partly novelty.

As such, I'd say the "secretly" matters, since I could see there being legit AI-run bots followed for similar reasons, but they don't really correspond to the things people are worrying about.

Expand full comment

Yeah, this was my thought - I assume that many corporate social media accounts will eventually be run entirely or mostly by AI bots, either absolutely openly or at least “everybody knows that the Pepsi account is a bot”, but they will still be followed by a lot of people looking for news or promotions or whatever.

Expand full comment

I wrote my take on this subject here:

https://www.fortressofdoors.com/ai-markets-for-lemons-and-the-great-logging-off/

(Includes embedded manifold markets for every concrete prediction)

I think the real “danger” is just the background noise level caused by semi-intelligent spam polluting the waters and making the old “open sea” internet way less appealing.

Expand full comment

I'd argue that has already happened with all the SEO sites, low effort medium posts, etc that clutter Google searches these days.

The result (for me at least) is getting my news from specific writers, information from specific sites (think industry publications, SO) and otherwise being selective about the content I consume. The noise may benefit niche brands and individual authors who can develop a following, because we need to seek out trusted sources.

Expand full comment

It will certainly make nearly every comment section much less appealing. Not that we should be spending our time reading and writing comments anyways...

Expand full comment

Yeah I'm beginning to suspect this. Bots might make things like twitter or facebook, where anyone can sign up, unusable, in which case people ... won't use them.

Already today a lot of content from friends of mine has moved from social media where theoretically anyone can see it, to text threads and such.

Expand full comment

When was the last time anybody went into the “open sea” internet expecting to find anything but spam? Is there anything out there left aside from social networks, blog aggregators, Reddit and niche legacy communities?

Expand full comment

I followed a link today to this: https://cathoderayzone.com/

I reckon there's as much awesome stuff out there as there's ever been, if you do care to look.

Expand full comment

Yesterday. For answers to several questions.

I'm wrong — this morning.

Expand full comment

Lars, one of your points is that people may join more private groups. That's pretty much what the Chinese internet looks like - not much interesting happens on truly public spaces like Weibo because of censorship, so all useful information flows through interlocking networks of private groups.

It makes for an interesting dynamic where information flows more through people who maintain big friend networks, and less through people who shout edgy things in public. Not to be too essentialist about it, but the way information moves through the Chinese internet is much more feminine.

I don't do any social media, really, so I'm not the right person to judge, but the Chinese way seems to me to be reasonably effective (information and trends still travel, though not quite as fast as in the American ecosystem), and much nicer in some ways. But of course, if you're in a minority, the interlocking spaces model might be very limiting.

Expand full comment

Fascinating! Thanks for sharing.

Expand full comment

If spam intelligence were valuable, wouldn't spam be a little bit more intelligent right now?

Expand full comment

I mostly agree with this post, in both its overall thrust and most of its particulars.

I would highlight that the argument doesn't turn on the rate of AI progress but rather on the equilibria that will be reached.

Expand full comment

I think chatbots will be a technology that changes society, but not radically. What I'm most excited to see is how chatbots change smaller things in unexpected ways. For example, I knew cell phones would change how people talked to each other, but I never thought they would mean a net decrease in the number of audio calls people made as everyone switched to texting.

Expand full comment

In general portable voice communication and even video calls showed up reasonably often in science fiction and pop culture (Dick Tracy did one, then later the other), but texting was a fictional blind spot pretty much till it took off in reality.

(And then a bit longer, till the conventions of showing a text conversation on screen developed.)

Which is a little strange in retrospect: teleprinters and teletypes were experimented with before the telephone and were established tech for most of the twentieth century. "That, but small and without wires" wasn't an uncommon speculation, especially as portable radios and such appeared to analogize from.

But where it was obvious that people might want to talk to or see one another, there clearly wasn't much sense that being able to write might add anything. (Not even the ability to communicate silently when hiding in the closet and calling for rescue.)

Expand full comment

Heck, the telegraph was texting before texting. But I think that’s part of the problem, the telephone was “telegraph but better” so going back to text seemed like a de-evolution, while “wireless” and “video” seemed like the obvious path forward. Voice and video are in some sense obviously more information rich and lifelike than text, and this appeals to sci-fi dreamers. I think it comes down to misunderstanding what we were really optimizing for, which turned out not to be “rich and lifelike communication” but “fast and effective communication”. And for a lot of communication, short text messages instantly transmitted are objectively better despite feeling “less evolved”.

Expand full comment

Strong agree. I think that the difference is that with the massive influx of available phone/video calls, we were threatened with being overwhelmed. In the early days it was likely that we would waste many hours a week on phone pleasantries needed for in-person communication. Texting was a lesser-used alternative when quick communication was needed - often as a filler between calls rather than instead. Having seen how effective it was, texting overtook the longer calls. At first between friends and close acquaintances who might talk regularly, and by now even for impersonal communications like political ads and messages from our bosses.

Expand full comment

I think a missing enabling technology that was difficult to imagine was the text input method. I am writing this comment by drawing a squiggle approximately connecting the letters on a touch screen so an AI can guess what I probably wanted to say, and that's great.

If I had to use a T9 dictionary, or worse, just repeatedly hit the digit 3 to get the letter I want, or attach a 101-key keyboard, that would be much less convenient.

Expand full comment

On the other hand, texting took off during the dumbphone era and hit its stride with the Blackberry before touchscreens and predictive text were well established.

I still kind of miss my first Droid's physical keyboard, though SwiftKey is good enough that I don't miss it much.

I'm trying to think of portable text input in fiction. The first example that comes to mind is Galactica 1980, of all things: they had wrist computers that IIRC used nonspecific buttons to pull up information about Earth on a one line display. And they *still* didn't use it to communicate.

Expand full comment

Indeed my explanation doesn't match history.

So, I have another: people use texting when, in the sender's judgment, the message is not urgent enough to grab the recipient's attention. So, for async IO. Now, they could have sent a voice mail (and I know some nations and some people do, although it's very rare in my circles), but it feels difficult to create a short voice message with the right intonation if there's no real person to talk to.

Perhaps there's room for improvement here - a proxy AI agent could discuss with you what you want to convey, perhaps asking clarifying questions and paraphrasing, and then condense it into a short voice mail?

Expand full comment

Sound analysis but "disinformation vs establishment" is surely a false dichotomy.

Expand full comment

I think Scott means it as humor.

Expand full comment
Feb 2, 2023·edited Feb 2, 2023

One thing I don't think you really clarify: Where do you draw the line between human and chatbot?

Clearly Shakespeare was a human and not a chatbot, and a GPT-6 instance perpetually posting blog articles with no human input is a chatbot and not a human.

1. If a human gives AI a prompt to produce a more well written version of the human's genuine thoughts/arguments, and then publishes it as her own work, is that a chatbot?

2. What if the AI comes up with the topics and produces the posts, but they are each manually reviewed and approved by the human prior to posting?

3. What if instead of a megacorp, a chatbot is painstakingly manually tuned by a single individual to speak in "their voice", with heavily detailed/engineered prompts, and set to operate autonomously?

4. What if you wrote this post yourself and then used Spellcheck, or perhaps even your writing software suggested a word or two?

I would consider 1 and 4 to be human, 3 to be AI, and am not sure how to classify 2. Worryingly, I think I'm looking at some antimaterialistic quality of human motive which is unlikely to be consistent or sensical.

Expand full comment

This is a good question, and I think I have similar intuitions.

I think it's relevant to consider how we classify celebrity Twitter accounts. I expect that there are many public figures whose Twitter feed is actually run by their staff, but with the understanding that the person whose name it is has ultimate control, checks in occasionally, maybe even approves all Tweets before they go out, etc., but doesn't actually write all the posts.

Varying levels of human assistants and AI assistants in the loop here would quickly become very hard to classify.

Expand full comment

Good point: Is there any qualitative difference between human ghostwriters and AI ghostwriters?

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

How about a philosophy journal article on theory of mind written by someone who has read the AI summary version of various people -- say Descartes, Kant and Ayer? A subtle something will be missing from such books because their authors went to the mind mall and not the mind forest. If people keep relying partly on AI summaries of classics for 3 generations or so, imagine what the books will be like. It's like a family where somebody literally marries and has children with Barbie, and has half-plastic kids.

Expand full comment

We can split the credit between human and AI by the Shannon entropy of their contributions.

This might not always be precise, but I think in many cases the decision is clearly in favor of one side. For example:

English text has between 0.6 and 1.3 bits of entropy per character, according to Wikipedia, so a tweet is probably around 100 bits of entropy. If the AI generates 1000 different tweets and I pick one to manually approve, my contribution is a number between 1 and 1000, which has 10 bits of entropy. So the AI is doing 90% of the work here.

On the other hand, if I'm writing the tweet myself (let's say the tweet has 20 words) and using spellcheck, then the spellchecker makes 20 binary decisions for which words to give a wavy red decoration; only 20 bits. Therefore in an extreme scenario where my spelling is so bad that on average half my words are wrong (and I really need spellcheck!) the software still only contributes 20 bits of entropy to my 100 bits. The tweet is still mostly mine.
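For concreteness, here's a minimal Python sketch of that arithmetic, using the same illustrative assumptions as above (roughly 1 bit per character, 1000 candidate tweets, 20 spellcheck decisions); the exact constants are just the comment's illustrative numbers, only the orders of magnitude matter:

```python
import math

# Rough numbers from above: ~1 bit of entropy per character of English text,
# and a ~100-character tweet, so ~100 bits of information in the tweet itself.
tweet_bits = 1.0 * 100

# Case 1: the AI drafts 1000 candidate tweets and the human picks one to post.
# Choosing one option out of N contributes log2(N) bits of human input.
human_bits = math.log2(1000)  # ~10 bits
print(f"pick one of 1000: AI share ~{1 - human_bits / tweet_bits:.0%}")  # ~90%

# Case 2: the human writes a 20-word tweet; spellcheck makes one yes/no call
# per word, so it adds at most 20 bits on top of the human's ~100.
spellcheck_bits = 20
print(f"spellcheck: human share ~{tweet_bits / (tweet_bits + spellcheck_bits):.0%}")  # ~83%
```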

Expand full comment

This is interesting.

A more realistic scenario would be: you give the AI some prompt, have it generate 1000 tweets, and then you pick the best one. I don't know much information theory, but how would one go about estimating the information content of the prompt in this scenario?

Expand full comment

This depends on how much the AI cares about the prompt you give it, which is in principle a quantity we can measure by taking a bunch of prompt-tweet pairs and doing some statistics on them. I don't know whether that's been done.

(That is, we'd want to measure the mutual information I(X;Y) between a random prompt X and the output Y of that prompt. This tells us how many bits of information knowing the prompt tells us about the output - and that exactly measures your contribution to the output via specifying the prompt. Add 10 bits for the post-selection; the rest of the information content is the AI's.)

As an upper bound, of course, if you write a short prompt and the AI writes a long blog post, then even if the AI very carefully took every detail of the prompt into account, it still gets most of the writing credit.
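To make that concrete with a toy example: the value used below for I(X;Y) is entirely made up, just to show how the prompt's bits and the post-selection bits would add up on the human's side.

```python
import math

# Hypothetical numbers only: suppose statistics over many prompt/tweet pairs
# suggested the prompt pins down ~30 bits of the output (I(X;Y) ~ 30).
mutual_info_bits = 30                # assumed, not measured
selection_bits = math.log2(1000)     # ~10 bits for picking 1 of 1000 drafts
tweet_bits = 100                     # total entropy of the finished tweet

human_bits = mutual_info_bits + selection_bits
ai_bits = tweet_bits - human_bits
print(f"human credit ~{human_bits:.0f} bits, AI credit ~{ai_bits:.0f} bits")
```

Estimating the real mutual information would of course require the prompt-output statistics described above; the 30-bit figure is pure illustration.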

Expand full comment

You'd probably need to flesh out the semantic value of this as well. Spellcheck generally changes the encoding of the message, but not the semantic meaning. Using the wrong worm might change the meaning, but spell check won't catch it. Even a grammar checker won't generally change the meaning, though it may substantially change the tone, style, or mood, yo.

Expand full comment

The problem is that you're just measuring entropy of the syntax, not of the semantics. Sometimes just including "not" would change the meaning entirely.

Expand full comment

I agree that sometimes a small change matters a lot and sometimes it matters a little, but I disagree that this should affect how we assign credit.

Put simply, if Alice and Bob cooperate on a tweet where Alice writes "Smoking is bad" and "Smoking is not bad" and then Bob chooses between the options, Bob still has only contributed 1 bit of writing. Bob has decided whether to endorse smoking or not, but Alice has decided that (1) we're going to be talking about smoking, (2) we're going to be discussing whether or not it's bad, (3) the whole tweet will just be a blanket assertion one way or the other with no further detail. Alice deserves the remaining bits of the credit, however many there are.

Expand full comment

A bot successfully writing a "Bay Area House Party" post is pretty much my definition of the singularity.

Expand full comment

Also, this post reminded me of this old episode of Reply All, about the time a political party in Mexico hired tons of people to cause chaos on Twitter:

https://gimletmedia.com/shows/reply-all/j4hl36

If we envision a scenario in which some significant portion of the users of social media are a) fake and b) under the control of different interested parties, I think there is plenty of room for chaos.

There are already some people who fall for the old Nigerian Prince scam. One thing that always amazes me about the scammers who contact me is how bad they are at what they do. I can imagine a Chatbot being much better at this sort of thing, even if that means that 99% of people are still able to tune it out.

Finally, content farms are already a thing, and their target is search engines. I would expect the problem of bullshit content affecting search results to get *much* worse. There are some people who claim that Reddit is better for search than Google (I am not one of them), because at least you know the opinions you are reading are written by a real person. For how much longer will that be true?

Basically, I think we can all look forward to an absolute tsunami of bullshit in the coming years.

Expand full comment

I don't think I've ever heard anyone claim that Reddit search is better than Google search. But I've heard many people recommend using a google search with the keyword site:reddit.com as a good way to find actual reviews by actual people, and actual advice on how to do things.

Expand full comment

AIUI the scammers look stupid on purpose. There's a long chain of things they need to accomplish before they get your money, and they don't want to waste time on someone who'll get wise partway through, so they pre-filter for the most gullible targets.

Expand full comment

That's one of the styles bots are best at, though. This was ChatGPT's response to the prompt "write me a humorous account of a tech bro party in silicon valley":

Sure, here's a humor piece about a tech bro party in Silicon Valley:

The tech bro party was in full swing, with a seemingly endless supply of artisanal beer and locally-sourced avocado toast. The guests were a who's who of Silicon Valley, with everyone from startup founders to venture capitalists in attendance.

One tech bro was pontificating about his latest idea for a virtual reality dating app, while another was extolling the virtues of his electric car, which he claimed was powered entirely by unicorn tears and positive vibes.

As the night wore on, the tech bros started to get more and more elaborate with their stories. One claimed to have single-handedly solved world hunger with his new app, while another boasted that he had just signed a deal to put a Tesla on the moon.

Just then, a voice boomed out over the sound system. "Attention, tech bros! It's time for the burning man challenge!" A giant inflatable flaming man was wheeled out, and the tech bros were instructed to dance around it while wearing VR headsets.

Despite their best efforts, the tech bros soon found themselves stumbling around like drunken sailors, bumping into each other and falling over. Eventually, they all gave up and collapsed in a heap, laughing and panting from their exertions.

As the night drew to a close, the tech bros stumbled out into the Silicon Valley night, still buzzing from their wild night of tech-fueled revelry. They would all be back at their desks in the morning, of course, but for now, they were just happy to be living the Silicon Valley dream.

Expand full comment

This is so banal it could've been written by a Facebook PR flack. In fact, one imagines that jobs like that -- writing BigCorp PR releases[1], State of the Union speeches, and other such farragos of unreproachable true statements and poll-tested platitudes -- could be an employment niche for chatbots. Pays well, too.

----------------

[1] https://youtu.be/_5jR8lR7K_Q?t=76

Expand full comment

Agreed that it's not nearly as good as Scott's Bay Area House Party posts, but the bots are expected to get much better than ChatGPT pretty quickly, and it's already passable.

Also, as someone else pointed out, I could maybe have come up with a prompt that would cause ChatGPT to do a better job of this.

Expand full comment
Feb 5, 2023·edited Feb 5, 2023

Yes well I expected to be able to buy a tourist ticket to Mars within 10 years of watching Neil Armstrong step onto the surface of the Moon. Technology rarely lives up to its straight-line extrapolation -- unless there's some very good reason for the extrapolation, like you can actually lay out how you would go about making big improvements. What's more common is early brilliant success is followed by...much more modest improvements as the low-hanging fruit is replaced by much harder goals. Cf. self-driving, a technology that is still waiting to fulfill any number of early promises.

What would be the good reason for chatbots to tremendously improve in creativity and subtlety? It's not going to be a 100-fold increase in the size of the training data, because I'm given to understand that's probably not possible, it's about as big as it can be for a reasonable cost already. It's not going to be a 100-fold increase in the number of nodes in the ML net, because that just means it gets to its target goal more precisely -- and it seems to have hit its target goal with great precision already. Sounding like an even more urbane PR flack isn't going to improve things in the right way.

More importantly, as far as I can tell, the training regimen optimizes these AIs to produce the kind of text an average human being produces, with some additional training to avoid running afoul of assorted shibboleths, e.g. saying something scatological or crude, or politically incorrect. The optimum result of such a training is the ability to produce competent corporate PR press releases -- which it has been demonstrated it can do quite well.

But what change to the training regimen could result in an original voice, with humor and novel insight? You can readily run a steepest descent algorithm with the success metric being "looks like any of these 100 million samples of human-written speech" but how do you even define (at the precise level of mathematical statements) a success metric of "contains original and creative ideas?" What's the algorithm for saying what Scott writes is "funnier" or "better" than J. Random Substacker?

Finally, if you have to come up with the exact right prompt to get it to be funny, then the creativity in the result lies with you. As if Picasso drew a paint-by-number outline of a great painting and gave it to a tyro to fill in.

Expand full comment

That’s not a great prompt, though. Here’s what it generated given the first ~ dozen paragraphs from Scott. It’s not *great*, but it is quite a bit better:

You walk in. The wall decorations vaguely suggest psychedelia. The music is pounding, head-splitting, amelodious. Everyone is struggling to speak over it. Everyone assumes everyone else likes it.

You flee to the room furthest from the music source. Three or four guys are sitting in a circle, talking. Two girls are standing by a weird lamp, drinks in hand. You see Bob.

“Hi, Bob!”

“Hey, good to see you again!”

“What’s new?”

“Man, it’s been a crazy few months. You hear I quit my job at Google and founded a fintech startup?”

“No! What do you do?”

“War insurance!”

“War insurance?”

“Yeah. We pay out if there’s a war.”

“Isn’t that massively correlated risk?”

"Yes, it is a high-risk venture, but we have developed unique underwriting methods and use cutting-edge technology to manage the risk and provide coverage to clients. It's a promising market, and we're seeing good traction so far."

"Wow, that's quite innovative. How does it work?"

"Well, we use AI and machine learning algorithms to analyze geopolitical data and make predictions about the likelihood of a war. We then use that information to price our policies and make decisions about underwriting. It's a new way of thinking about insurance, and we believe it has a lot of potential."

"That sounds fascinating. How's business going?"

"It's still early days, but we're seeing a lot of interest from potential clients and investors. We're growing quickly, and we're confident that we can make a real impact in the insurance industry."

"That's great to hear, Bob. Good luck with the venture."

"Thanks! It's been a wild ride so far, but I'm really excited about what the future holds."

"I'm sure it'll be a success. You have a knack for spotting opportunities and making things work. I'm sure this will be no exception."

"Thanks for the encouragement. I really appreciate it. Hey, you should come check out our office sometime. We have a great team and an amazing culture."

"Definitely! I'd love to see it in action. Let's plan something soon."

"Absolutely. I'll send you an invite. Well, I better get back to mingling. Take care and we'll catch up soon!"

"Sure thing, Bob. Good luck with everything."

You watch as Bob walks away, deep in conversation with someone else. You can't help but feel proud of him and his latest venture. War insurance may seem like a risky business, but if anyone can make it work, it's Bob.

You head over to the girls by the lamp, eager to catch up with them. As you chat and catch up on old times, you can't help but think about the potential of Bob's new venture. You make a mental note to follow up with him soon and see how things are progressing.

It's always exciting to see friends doing great things and chasing their dreams. You raise your glass to Bob and his future success, knowing that he's sure to make a positive impact in the world.

You smile, feeling grateful to be surrounded by such ambitious and driven people. The night continues on, filled with laughter and good conversation, but the idea of Bob's war insurance startup stays in the back of your mind. You can't wait to see where this venture takes him and how it will change the world. With Bob's drive and innovative thinking, you know it's only a matter of time before his startup becomes a huge success.

You continue to mingle and socialize, but the excitement of Bob's startup is always there, lurking in the back of your mind. It's inspiring to see someone chase their dreams and make a difference in the world. You can't help but feel motivated and encouraged by Bob's story, and you find yourself making a mental list of your own dreams and goals.

As the night comes to an end and you say your goodbyes, you can't help but feel grateful for the incredible people in your life. Bob's entrepreneurial spirit has sparked something within you, and you leave the party feeling inspired and driven to chase your own dreams.

With newfound motivation and a clear goal in mind, you head home, eager to get started on your own journey. The night may be over, but the excitement of Bob's startup and the potential it holds has only just begun.

You know that with hard work and determination, anything is possible. The future is bright, and you're ready to make the most of it. With a smile on your face, you fall asleep, dreaming of all the possibilities that lie ahead.

As you dream, you can see yourself working hard, pursuing your passions and making a difference in the world. You see yourself building a successful career, just like Bob has done. You feel a sense of pride and accomplishment, knowing that you too have the potential to achieve great things.

The next morning, you wake up feeling energized and inspired. You get dressed, grab your coffee and sit down at your desk. You open your computer and start to work on your goals, taking the first steps towards a bright future.

As you work, you think about Bob and his startup, and how his story has motivated you to chase your own dreams. You know that success is possible, and that anything is achievable if you put your mind to it.

And so, with a smile on your face and a sense of purpose in your heart, you continue to work towards your dreams, knowing that the future is bright and anything is possible.

Expand full comment
Feb 4, 2023·edited Feb 4, 2023

For the record, this was generated by going into ChatGPT, feeding it the first part of Bay Area House Party, and then going back and appending its output to the prompt, to make it generate more completions. It got into a weird kind-of-attractor at the end, where all continuations said mostly the same inane thing with different wording, and I stopped.

Expand full comment

This is closer to the style of the Bay Area House Party posts, but it's not funny? It seems to just be sincerely about someone who is genuinely excited and inspired by Bob's war insurance idea. The one I had it generate at least made attempts at humor, if uncreative.

But yes, I could maybe have come up with a prompt that would have caused it to do a better job.

Expand full comment

Part of what is funny about Scott’s posts is that they describe very “normal” situations but with elements of weird stuff and weirdness-ignoring responses sprinkled around. In other words, they have “unexpected completions”, which would be expected to be hard to imitate for an expected-text-completion-generator.

My prompt stopped at the first weird thing, so it tried to go in a “normal” direction and did a decent job. I was curious what would happen with a longer prompt which includes a few exchanges, but it’s at capacity right now. Oh, well.

Expand full comment

ChatGPT knows "Slate Star Codex" and "Scott Alexander", so I just typed the following prompt: can you write a fictional, satirical story about a Bay Area house part in the style of Scott Alexander of Slate Star Codex?

This is what came out:

Once upon a time, in the heart of the Bay Area, a group of tech workers threw a house party to celebrate their latest unicorn startup going public. The guests were an eclectic mix of programmers, investors, and entrepreneurs, all eager to network and show off their latest toys.

As the night wore on, the atmosphere became increasingly surreal. A group of AI engineers boasted about their latest creation: a machine that could perfectly mimic human conversation, but with the added bonus of being able to generate its own jokes. They demonstrated the machine by having it participate in a heated debate about the ethics of creating sentient AI.

Meanwhile, a group of blockchain enthusiasts had set up a "crypto corner," where they were trying to convince anyone who would listen that they were on the cusp of creating a new decentralized utopia. They eagerly discussed their plans to revolutionize the world with their new "smart contracts," despite the fact that none of them could quite explain what a smart contract was.

As the night went on, the guests began to split into increasingly insular groups, each focused on their own particular area of expertise. The data scientists huddled over laptops, analyzing vast sets of data and trying to prove that they could predict the future. The virtual reality developers demonstrated their latest creations, which promised to take immersive gaming to the next level.

Just as the party was reaching its peak, a group of climate activists crashed the party, brandishing signs and demanding that the tech elite take action on the impending environmental catastrophe. The guests were momentarily taken aback, but soon returned to their usual distractions. One particularly savvy startup founder saw an opportunity and quickly began pitching his new "carbon credits trading platform" as the solution to all their problems.

And thus, the party continued on, with each group lost in their own world of ideas and ambitions. In the end, everyone went home happy, having networked, demonstrated their latest creations, and convinced themselves that they were on the cutting edge of technological progress.

The end

Expand full comment
Feb 2, 2023·edited Feb 2, 2023

> You might think so, but you might also think that the spam fake Facebook friend requests I get would try this, and they never do.

Anecdata, but I did get a male fake account writing to me. Twice.

Expand full comment
Feb 5, 2023·edited Feb 5, 2023

I'm not a man, and I almost exclusively get male fake accounts writing to me. They don't seem like they are trying to be "hot" either - usually they are playing on my empathy and compassion, talking about how lonely they are and how they don't have many friends and they normally don't do this and they hope I'm not weirded out but they saw my profile and I seemed really cool so would I consider accepting their friend request? And the profile picture will be like, a balding middle-aged man or something

Expand full comment
Feb 5, 2023·edited Feb 5, 2023

So, playing on "men are horny, women are compassionate, both are mostly straight". Seemingly predictable. (I _am_ male, though.)

Expand full comment
Feb 2, 2023·edited Feb 2, 2023

The fact that people are already worried that Chatbots will take our jobs and fill the Internet with fake people is what convinces me that it's the exact thing that won't happen. I still remember how, in the '90s, pop culture was all about the transformative power of genetics (see: Jurassic Park) while computers and the Internet were amusing novelties; to the extent anyone cared it was all about VR. Remember the goggles and gloves?

Meanwhile, Crypto would (so I read on several blogs) destabilize governments' ability to issue fiat currency by the 2020s, and, as you pointed out, we once thought the Internet would usher in a global information utopia.

Whatever does happen with generative AI will be something none of us are thinking about. It will probably be something much weirder and dumber than any prediction.

Expand full comment

> the transformative power of genetics (see: Jurassic Park)

I want to make a joke about the resurrection of the dodo bird but unfortunately (and despite the recent headlines), that's still just a pipe dream.

Expand full comment

Birds are actually really hard to do germline genetic engineering on, for a rather simple reason which I leave as an exercise for the reader.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

Here's a 1994 BBC segment about the Internet, from the show Tomorrow's World: https://www.youtube.com/watch?v=XpZ5STahhPE

The presenter, Kate Bellingham, talks about the applications that already existed (email, looking up recipes, weather forecasts, online shopping) as well as what could be in the future: fiber-optic cables allowing video streaming and data sharing. It's shockingly accurate. While I'm sure Bellingham is a smart lady--and a former engineer to boot--I'm also sure she didn't make up this segment herself. It likely reflected the opinion of experts at the time, an opinion that time has proven to be very accurate.

Expand full comment

I think it's more likely that it was neither about Bellingham nor opinion of experts, it was all just very obvious. The fact that computers could work with information, images, videos etc was well known, ideas like video calls dated back decades and could be seen in sci-fi of the 50s or 60s. Fiber optic cables already existed.

What would actually have been interesting was predictions that weren't simple linear extrapolations of what already existed, for example, social networks came out of left field. Targeted text/search ads likewise weren't especially obvious.

Expand full comment

Yes, my point was that it was obvious, at least to experts. Maybe it was also obvious to the general public, but then the BBC wouldn't have needed to do a segment on it.

I don't think either social networks or targeted search ads are either non-obvious or especially revolutionary. Forums and email already existed in 1994. Real-life targeted marketing already existed, and it didn't take a genius to imagine porting it to the Internet.

Expand full comment

I don't know, I grew up with the web in the 90s and can't really recall any predictions of anything like Facebook or Instagram, neither in sci-fi nor in the breathless takes on the global village that typified the early years. I guess you could claim that the 'global village' concept was sort of in the right general direction, but the concept of people posting endless photos of their life moments to extended friends networks, and that this would be commonplace ... if this was obvious, who predicted it?

As for targeted ads - yes ad targeting existed, but if you'd travelled back in time to the 90s and told people that the big winner of the internet would be a company selling ads that consist exclusively of a handful of words, they'd have thought that was pretty nuts. Everything was about multimedia, retail ... people would have bet on Amazon. The idea of a search engine company becoming so rich wasn't obvious at all. Yahoo had lost interest in web search, after all, thinking it was a dead end business.

Expand full comment

"the concept of people posting endless photos of their life moments to extended friends networks, and that this would be commonplace ... if this was obvious, who predicted it?"

Ray Bradbury, 1953 short story "The Murderer":

http://www.sediment.uni-goettingen.de/staff/dunkl/zips/The-Murderer.pdf

"When it wasn't music, it was interoffice communications, and my horror chamber of a radio wristwatch on which my friends and my wife phoned every five minutes. What is there about such 'conveniences' that makes them so temptingly convenient? The average man thinks, Here I am, time on my hands, and there on my wrist is a wrist telephone, so why not just buzz old Joe up, eh? 'Hello, hello!' I love my friends, my wife, humanity, very much, but when one minute my wife calls to say, 'Where are you now, dear?' and a friend calls and says, 'Got the best off-color joke to tell you. Seems there was a guy-' And a stranger calls and cries out, 'This is the Find-Fax Poll. What gum are you chewing at this very instant?' Well!"

..."Why didn't I start a solitary revolution, deliver man from certain 'conveniences'? 'Convenient for who?' I cried. Convenient for friends: 'Hey, Al, thought I'd call you from the locker room out here at Green Hills. Just made a sockdolager hole in one! A hole in one, Al! A beautiful day. Having a shot of whiskey now. Thought you'd want to know, Al!' Convenient for my office, so when I'm in the field with my radio car there's no moment when I'm not in touch. In touch! There's a slimy phrase. Touch, hell. Gripped! Pawed, rather. Mauled and massaged and pounded by FM voices. You can't leave your car without checking in: 'Have stopped to visit gas-station men's room.' 'Okay, Brock, step on it!' 'Brock, what took you so long?' 'Sorry, sir.' 'Watch it next time, Brock.' 'Yes, sir!' So, do you know what I did, Doctor? I bought a quart of French chocolate ice cream and spooned it into the car radio transmitter."

..."Well, that night I laid plans to murder my house."

"Are you sure that's how you want me to write it down?"

"That's semantically accurate. Kill it dead. It's one of those talking, singing, humming, weather-reporting, poetry-reading, novel-reciting, jingle-jangling, rockaby-crooning- when-you-go-to-bed houses. A house that screams opera to you in the shower and teaches you Spanish in your sleep. One of those blathering caves where all kinds of electronic Oracles make you feel a trifle larger than a thimble, with stoves that say, 'I'm apricot pie, and I'm done,' or 'I'm prime roast beef, so baste me!' and other nursery gibberish like that. With beds that rock you to sleep and shake you awake. A house that barely tolerates humans, I tell you. A front door that barks: 'You've mud on your feet, sir!' And an electronic vacuum hound that snuffles around after you from room to room, inhaling every fingernail or ash you drop. . . ."

Expand full comment

"...if this was obvious, who would have predicted it?"

E. M. Forster, "The Machine Stops," 1909.

It's not precisely this, but it's eerily close to the modern experience:

"Vashanti’s next move was to turn off the isolation switch, and all the accumulations of the last three minutes burst upon her. The room was filled with the noise of bells, and speaking-tubes. What was the new food like? Could she recommend it? Has she had any ideas lately? Might one tell her one’s own ideas? Would she make an engagement to visit the public nurseries at an early date?—say this day month."

Expand full comment

It definitely wasn't obvious. If you'd have asked me I would have said such things were possible, but I wouldn't have predicted that anyone would bother. Video calls are high bandwidth and very expensive. I also wouldn't have predicted the amount of spam. Or the persistence of the Nigerian Prince scam. (Viruses used to be flash in the pan, and pretty harmless. The first one I encountered was the cookie monster.)

So the problem wasn't in predicting what was possible, but rather which way people would drive things. LOTS of things were possible that never happened. I still like the idea of "Dream Park", but it hasn't happened yet, and may not.

Expand full comment

Scott Adams predicted citizen journalism in the nineties, as well.

Expand full comment
Feb 4, 2023·edited Feb 4, 2023

I find that kind of discouraging, actually. That so little that's astonishing from the perspective of 29 years ago has happened. Imagine someone putting together a radio show in 1940 --- 29 years before Neil Armstrong set foot on the Moon -- about the future of aeronautical engineering. They might have predicted that someday airplanes would carry more passengers than railroads, and working-class people could afford to go to Europe on vacation, and if they were really on the ball they might've hypothesized about jet engines pushing airplane speeds and altitudes to the point where pressurization was needed. All of which would've turned out to be accurate. But what actually happened over the next 29 years was way more amazing.

Mind you, there *are* fields where 29 years ago might as well be the Neolithic (molecular biology comes to mind), but computing doesn't seem to be one of them. Maybe it's turned into cars in the 70s and 80s, just kind of coasting, putting chrome and tailfins on the product to make it seem new 'n' exciting every model year.

Expand full comment

Yes, the early 20th century was a time of astoundingly rapid technological progress. That's not at all the norm in human history, and there's no reason to expect that it'll be our future.

Expand full comment

> we once thought the Internet would usher in a global information utopia.

If you're interested in having accurate information at your fingertips, it *is* utopia. It's not the internet's fault that so many people are only interested in opinion.

Expand full comment

So much to take in... But once again it leads me to knowing my intuition was right: AI has no heart and is incapable of answering the deep true questions I will not post here. Those of you that know... know! I now see why I question everything, but even that has to be questioned.... hmmm, this makes it so much harder to advance. Sorry, I was thinking out loud a little here.

Expand full comment

What you need to understand is that a ChatBot is a very limited and specific form of AI. Yeah, it has no heart. Or any emotions. It doesn't understand that the physical universe exists. ALL it knows is text. Other forms of AI have different limitations. We don't have even an approximation of an AGI yet. And we don't know how many breakthroughs we are away from it. The number could be "1". Or "1000". One could show up tomorrow, or we may never develop one. My bet is still 2035, though I've been tempted to move it to sooner.

(This is due to an article I read a year or so ago.) SOME AIs have "true" sympathetic emotions. Unfortunately, they're rather specialized, and for the ones I read about, that's about all they have. They don't understand language, but they read facial expressions or tone of voice. If you're unhappy, they'll sympathize, and try to make you feel better. (I don't know how good they are at that, but they'll try.)

Expand full comment

The cartoon illustrates a point I was already wondering about when Scott brought it up. A Pepsi-selling chatbot good enough to disguise itself as a human friend you talk with every day - what would it look like? If it was good enough to maintain its disguise, its ability to sell Pepsi to you would have to be very weak. If it was more focused on selling Pepsi, it couldn't maintain its human disguise.

Expand full comment

Isn’t this just an influencer? Or in the old days, John Wayne smoking Marlboros?

Expand full comment

No, because an influencer isn't a "friend you talk with every day". People may form parasocial relationships with influencers, sure, but there's no real pretense of personal interaction. Anybody with two brain cells to rub together knows they're selling products, and anybody without two brain cells to rub together - well, you demonstrably don't need AI to fool them anyway.

Expand full comment

You sort of can make them talk with you every day, for a fee. The current Twitch meta is that a viewer makes a donation (with a set lowest amount, usually 5-10$ for the most popular streamers) together with an accompanying text message, which a text-to-speech program reads aloud for the whole stream, and the streamer usually replies.

Expand full comment

There was a great pair of episodes of Community about this. Unfortunately, I can't remember the episode numbers. One of them involved the corporation Subway taking over the identity of a human, and enrolling in college, and befriending the group, until Britta tries to help the guy reclaim his identity and break his contract with Subway. The other was a season 6 episode, in which the same character came back, this time in the character of Honda, and sells a bunch of vans to the Dean. These two were particularly edgy because I think the episodes were subsidized by Subway and Honda, even though they were all about the inhumanity of this form of native content advertising.

Expand full comment

You wouldn't really do it for Pepsi, you'd do it for a single larger payoff, like persuading people to join a MLM scam.

Expand full comment

Yeah, I can imagine a strategy on Facebook like:

1. befriend a human

2. post photos from expensive vacations

3. tell the human about your new and exciting business

The introduction of the topic could be relatively natural:

If the human mentions money or complains about their job, introduce MLM.

Otherwise, post photos from vacation and start a conversation about them: "do you like my photos?", "have you ever been to $COUNTRY? how did you like it?" If the human mentions money (e.g. "I can't afford to travel so far so often"), introduce MLM.

Otherwise, 3 months later introduce MLM. Make the connection to the vacation photos (happened on a business trip, or as a reward for exceptional sales).

Expand full comment

I’ve had online technical support sessions with Microsoft trying to get the answer to a yes or no question and come away still unable to say for sure: “Perverse Chatbot or some deliberately unhelpful guy in Chennai?”

Expand full comment

This made me realise that MSDN forum replies from "Microsoft" have always looked a lot like ChatGPT: they usually misunderstand the question, reply with an answer to a related but different question, and tack on some boilerplate at the end.

Expand full comment

Now that you mention it, I think almost all of my interactions with my high school teachers/college professors went this way.

Expand full comment

> "In fact, political propaganda is one of the worst subjects to use bots for. On the really big debates - communism vs. capitalism, woke vs. anti-woke, mRNA vs. ivermectin - people rarely change their mind, even under pressure from friends"

I think you're off base here. The reason people’s opinions are so deeply entrenched is because they think that's what their community believes, which itself is a subliminal belief informed by how often they hear a particular view. If you manage to get your propaganda in front of people’s faces often enough, it'll change many people’s minds. Maybe not by peppering people with the exact opposite of what they currently believe, but I think you can gradually bring people around over a period of time by subtly introducing doubt/nuance.

That said, I mostly agree with the rest of your post that undercuts the likelihood that chatbot propaganda will really get read by that many people to begin with, so maybe not a big problem. When I think of what form of chatbot might change people’s minds, it's probably pretending to be someone respected in a given community but saying things to undercut that community's beliefs. But that already exists as non-bots and the algorithms keep it from being seen much.

And if a really successful bot de-entrenches beliefs by sowing nuance, we get to the situation in the comics where maybe it's good, actually.

Expand full comment

"I think you're off base here. The reason people’s opinions are so deeply entrenched is because they think that's what their community believe, which itself is a subliminal belief informed by how often they hear a particular view. If you manage to get your propaganda in front of people’s faces often enough, it'll change many people’s minds. Maybe not by peppering people with the exact opposite of what they currently believe, but I think you can gradually bring people around over a period of time by subtly introducing doubt/nuance."

This is my worry. I think people tend to believe things, not just in proportion to how good the arguments are, but in proportion to how often you see the arguments in favor of them + how common the belief appears to be.

If I were trying to make a chatbot network make people believe X, I'd focus less on crafting The Perfect Argument and more on making it seem commonplace and popular and making sure it gets a lot of exposure.

Expand full comment