706 Comments
Comment deleted
Nov 3, 2022·edited Nov 3, 2022

*Is* there even a "typical" person in (*Checks Google*) a 1.4+ billion-strong monster of a nation state? Like the joke goes, the average human has 7 fingers on 1.5 hands and 0.4 of a breast pair.

Also, keep in mind that asking "What does the majority want" is the censor's way of thinking.

"Would the typical Muslim woman *want* the freedom to not wear a hijab ?" It doesn't matter, if even a single Muslim woman wants the freedom to not wear a hijab (trivially demonstrable) then a world which doesn't grant her that freedom is curtailing her rights. Same thing goes for Freedom of Speech.

Comment removed
Nov 3, 2022·edited Nov 3, 2022

Huh, a conspiracy theory with an actually decent website, that's a first for me. And apparently Bill Gates, Trump and Biden are all on the pro-zombie side, also a fresh perspective!


I see that you read a Substack which links to one that includes the claim that ivermectin is indeed the covid cure.

You should try spreading your message to those who also believe ivermectin is a cure. We have one guy already on here, Mannan Javid, and we've had a couple of others as well. I'm sure that they would be very interested in "Big Pharma is lying" stories, and that would lead on to the zombie plan.


See, if everyone jumps on this post to point out that it's a stupid conspiracy theory, that's not censorship, that's just responding to free speech with more free speech.


This comment removal is very self-demonstrative.

Would you prefer to have "moderated" this instead of "censoring" it if you had the option, Scott?

Comment removed

All day long. History is riddled with false ideas that turned out to have merit. And vice versa.


Opponents of moderation also like to conflate the two. The debate is at its least sane and most viral when the terms of the debate are obscured, so this is probably a stable equilibrium.


Maximum outrage does seem to be a stable equilibrium...


Haha a game theorist


Are there opponents of moderation (as defined here)?


Well, there's definitely people who would prefer that their own content be shown even to people who don't want to see it (e.g. spammers). And I feel pretty confident that at least some of those would rather eliminate all moderation than allow themselves to be moderated, so I'm going to say "yes".


I have an uncomfortable relationship with the question "are there opponents of moderation?" because I only hold the wise, good, enlightened, and progressive values I do hold (as well as some of the challenging, subversive, and oppositional values I also hold) because I was exposed to content that I did not want to see, back in the bad old days before moderation (online or in real life).

I personally would prefer that I not have to think about seeing poorly-reasoned, annoying, crass, or obnoxious content, but I also have to admit that many good things that I now have incorporated into how I see the world came to me against my will. Moderation appears convenient, but if viewed from a far-out-enough zoom level is probably a moral evil.

Censorship, plainly labeled as such, at least has a kind of honesty about it. If I know I'm not permitted to see something because of true censorship, I have a heuristic that makes me inclined to think it's probably worth the effort to seek out. If I am in a bubble of moderation that prevents me even from hearing about it, or hearing about it only in strawman form from people I already agree with, I'm going to do less growing.

Maybe the whole reason I've gotten into reading about Lacanian psychoanalysis is that I've run out of interesting external challenges and am now concerned with internal strife:

> But be the war within, the brand we wield

> Unseen, the heroic breast not outward-steeled,

> Earth hears no hurtle then from fiercest fray.


> back in the bad old days before moderation

I think the big difference wasn't that the old days didn't have moderation; in the old days, the internet had private property rights.

People went to different forums that they liked. Some forums moderated, some didn't. When you owned a forum, you set the rules. Now, all kinds of conversation and discussion happens on these giant centralized platforms where the individual users _don't_ have private property rights.

The same is true of games. People used to play on private servers. If you were a dick, the server would ban you. Everyone had an incentive to behave. The threshold for being banned from _any_ use of a game, however, can be set much much lower. The server operators lose nothing by telling you to take a hike. But the game owners lose a potential future customer.

Bring back private property rights and the web will shine. You'll see plenty of great content here because Scott is a good property owner.

Nov 3, 2022·edited Nov 3, 2022
Comment deleted

Comment deleted

I think the culture changed, Billy. Millennials dropped official religions and then took up the whole religion, because nature abhors a vacuum and people have to find meaning and connection to a bigger purpose.


Well, this has two levels:

1. Technical: You are right that there are enough resources in the hands of most people to handle a small community, but there are obstacles blocking their use:

- Dynamic IPs make it hard to reach your devices from outside of your home. To change this, IPv6 has to take over. But dynamic IPs are also a feature for safety and privacy.

- Security: I don't want the public to scan and attack my personal devices because they are publicly known, especially in combination with my personal views.

- If some posts really go viral, the load is too big not only for affordable hardware but also for usual private bandwidth, so there has to be some infrastructure to cache and deliver popular content. Who organizes and pays for this? Can you trust it?

2. Economic:

Currently all platforms optimize for as much time spent as possible, using lock-in mechanisms and psychology to expand screen time. An open and respectful alternative would not do this, so it faces an uphill battle.

The only way to bring change that I see is to force all the bigger players to support federation, so there will be real competition again once even small hosts can participate in the general communication.


I've grown impatient with the seeming hypocrisy (I have no doubt that those spouting this do not think of themselves as hypocrites): when it's Bad Man getting booted off social media sites, "Well it's a private company, they can do what they want, if you don't like it set your own site up".

When it looks like Another Bad Man may take over and run that private company by his preferences, then it's a matter of social importance and he shouldn't be allowed.

If it's going to be "one law for me and another for thee", then be clear about it.


Yeah. It’s been funny to see people who would normally never say “it’s a private company they can do what they want” suddenly discover this concept.


How many people have you seen saying that Musk shouldn't be allowed to take over Twitter? I'm sure there have been some, but I don't think it's been a widespread opinion.

"Musk taking over Twitter is a bad thing and everyone should respond by leaving" definitely /has/ been, but that's totally compatible with the "private actor" standard you set out above.


It's really a scaling problem. Some things work fine at a small scale, but don't work so well at a larger scale. For others, it's the other way around. I think "ownership" is one of those things that works great at an individual level, and progressively less great as things scale up. But sometimes it's NECESSARY at the large scale if things are going to get done. Consider road maintenance. Who should own the road? Well, you should own your own driveway, the city should own the city streets, some larger entity should own the highways. Now how do I drive my bulldozer over to my cousin's house?

Nov 3, 2022·edited Nov 3, 2022

Well, ends justifying means is human nature. We used to gather in small groups every week, Sunday usually, to hear a lecture and discuss why this is a bad idea, but the practice has fallen off. Plus we just seem a lot more narcissistic, culturally and even individually[1], and narcissists can rationalize anything if it suits their wants.

--------------

[1] I blame the Internet, because why not? But also because a world in which one is encouraged to seek individualized and prompt gratification of one's urges to be socially popular will always encourage narcissism.


I think there isn't much hypocrisy when the people saying these things know the following facts: Elon is entangled in many business and political interests and has signaled his readiness to comply with the EU's DSA regulations and with the Indian, Russian, Chinese, Brazilian, and Saudi governments; I'd add US law enforcement/government here too. My problem with him is that he is masquerading pretty extreme government censorship as "free speech". The GOP, his fanboys, and some unprincipled self-proclaimed "libertarians" are cheering him on. This is all extremely sad to watch.

https://www.techdirt.com/2022/10/28/elon-musks-first-move-is-to-fire-the-person-most-responsible-for-twitters-strong-free-speech-stance/

https://www.techdirt.com/2022/08/11/elon-musks-legal-filings-against-twitter-show-how-little-he-actually-cares-about-free-speech/


Back in the old days people set up or used forums (e.g. usenet) to discuss. You can argue there was more tolerance of bad stuff because you could just go to a different forum/group. I argue this is a red herring.

Today people set up forums to make money. They make money by ad views and followers and popularity. You may find the forum useful for discussion, but if you cause a disturbance, you will be banned. A "disturbance" is simply a disruption to revenue. If you espouse questionable views but are popular, and banning you would lose more than keeping you, you stay.

If Scott hypothetically allowed material that led to a revenue problem for Substack, what do you think would happen?


This is why Substack's business model is more aligned with moderation than with censorship. The customers are the audiences to numerous private conversations. If you don't like it, don't pay. If Substack starts censoring viewpoints, I'm more likely to stop paying.


This reminds me of something I've thought about for a long time, but am only now homing in on a label for: "profit-focused moderation".

It's the idea that moderation happens by default, because certain "information" has more mass appeal and is therefore more profitable. For example, years ago (pre-Internet, into early days of Internet) in the small town I live in there was a well-stocked video rental store. This store had DVDs (and before that video cassettes) of all the big summer hits, most of them with multiple copies waiting to be checked out for one or two nights.

Elsewhere in that store there was a foreign movie section: hundreds of movies that would almost never ever be screened on TV, but which I had access to because somebody somewhere decided it was a good thing to offer. So as a teenager I was exposed to Krzysztof Kieślowski, Ingmar Bergman, Federico Fellini, Luc Besson and others. This, as a pretty poor kid whose only trips outside the country were a camping trip to Brittany and a school tour to Paris, opened my eyes to different ways of seeing the world in an astonishing way.

And I think about now, and especially my teenage daughter, who has a million times more information and options coming at her than I ever did at the same age. But Netflix, Prime, etc. only bring us the 1-5% of "content" that they believe has the most commercial merit rather than the most artistic/mind-expanding merit.

It's weird, but it was so much easier for me to come into contact with the Three Colours trilogy than it will be for my daughter or her school-mates.

Comment deleted

The torrent movement was successfully broken by the content industry and the copyright laws when it was gaining momentum some 10 years ago.

On the one hand, they went after the exchanges like The Pirate Bay. But even more effective was the real danger that you could get exorbitant fines just for watching, and thus sharing, a copyrighted movie.

This was there and working long ago; the problem is a legal one. For me, Stremio is just a shiny UI for the usual for-profit streaming services and has nothing to do with torrents.


The long tail exists, it's just hard to find. You could probably find DVDs of it on Amazon.com, but you have to know what you're looking for. :/


This is my point ... I did not have to know what I was looking for to be exposed to it. The difference between seeking and browsing is vast.


> Earth hears no hurtle then from fiercest fray.

I read that as "Earth hears no turtle" and thought it was a very poetic takedown of the flat-earth-on-the-back-of-a-turtle theory, with a dash of "A'tuin doesn't care about you."


As an internet marketer, I can say I only want my content to be shown to people who want to see it or might want to see it, and as soon as I have the slightest inkling somebody doesn't, I'd be wasting money to show it to them. It's hard to imagine any rational actor wanting to show advertising to people who didn't want to see it. More likely, spam is the result of online broadcast media being cheap enough that you can show an annoying message to everyone just to get a few buyers, plus laziness and the fact that curating an audience takes more patience, plus multipolar traps, i.e., everybody is emailing everybody, so you have to be even more obnoxious and show even more display banners and send even more emails to get the same number of buyers.


Thought-experiment: You are currently advertising on platform X. (Imagine X however it has to be for you to want to advertise on it.)

Platform X adds a new user setting that allows users to opt out of all ads--just check this box, and all ads disappear, no cost. Some large fraction of users turn this on. (Imagine whatever fraction you think is realistic.)

Are you willing to continue paying the same amount you were previously paying, to show your ad to just the remaining people who kept ads turned on? (Each of the remaining people sees the ad exactly as many times as they originally would've, but the impressions you would have gotten from everyone who opted out are simply gone, with no compensation.)

If you really *don't want* your ad shown to people who would opt out, then the fact that they are not seeing it is not a disadvantage (according to you), and so this service is at least as valuable to you as it was before, and you should be willing to pay at least as high a price for it.

If all marketers are willing to pay the same price for that service, then lots of online platforms are being stupid by not letting users opt out of all ads, since they could make their users happier and reduce bandwidth costs for ads while still earning the same advertising revenue and keeping advertisers equally happy.

I'm guessing you're inclined to say something like "the group of people who opted out of ads *in general* includes some people who still might want to see *my* ad in particular, if they were making a separate choice for it." There's maybe a coherent way of defining "wanting" where some of those people still "want" to see your ad, but those people are clearly not *consenting* to see your ad, so I argue that if you're still trying to show your ad to them, then you are against moderation.

My second guess at what you'll say is "I'm happy to let those people turn ads off, but then I shouldn't have to pay as much to show my ad to the smaller number of remaining people", but that would imply that you'd still want to show your ad to the people who opted out if you could do it for free, and you only meant you didn't want to pay the market rate for it, not that you didn't want to do it at all.


Why would it imply that I'd still want to show my ads for free to people who opted out, if I wanted to pay less for a cohort that opted out?

Couple of truisms: everybody "hates" ads until they see one that's relevant for them. But also, every low-rent "numbers game" spammer is engaged in that multipolar trap I mentioned, so that's decreasing the interest of even potential buyers, methinks. What that means is I'm probably not interested in advertising to those who opted out. And if you're halfway decent as a marketer (which the "spammers" obviously aren't) you can still advertise profitably to the remainder.

In real life, we'd probably want to think around corners and lean into the things the volume guys aren't willing or able to do. Just for instance, if Scott wanted to go full Dan Pena and sell tickets to his seminars, he could do that without advertising at all (and if it wasn't garbage he wouldn't burn his good will like Dan has). Spam farms aren't able to produce a resource like SA's blog that's inherently attractive, so that's the channel where I'd try to beat them for eyeballs.


I still strongly dislike ads that are relevant to me.

And I don't mean "relevant to my interests, but I don't want to buy the thing *right now*", but actual "this is a thing that fulfills a desire/need I have, I am going to buy it right now."

Generally speaking, I still resent the ad.

There are multiple reasons for this.

That one's a bad "truism", as it is not true.


Robin Hanson is fantastic and I'm sure you know him -- one of the elders broadly. He co-wrote a book called "The Elephant in the Brain", one section of which explains in some detail and with great persuasiveness the purpose of advertising to people who do not want or cannot afford your product. Really fascinating stuff.

I'd summarize it but you write like a smart person who reads quickly and I don't want to mangle the point for when you read it yourself.


> Why would it imply that I'd still want to show my ads to people who opted out for free if I wanted to pay less for a cohort who opted out?

If you're willing to pay $X for A+B, but A is worthless to you, then you should be willing to pay $X for just B: if value(A) + value(B) >= X, but value(A) <= 0, then value(B) >= X.

To make that example slightly more concrete: If you are willing to pay $X to show your ad to A and B, then you're getting at least $X of value out of (showing your ad to A) + (showing your ad to B).

Suppose A wants to opt out of all advertisements, but B doesn't. At least one of the following must be true:

1. You weren't getting any value out of (showing your ad to A) anyway. That means you must have been getting *at least* $X of value *just* from (showing your ad to B), so you should be willing to continue paying $X to show your ad to *only* B.

2. You're getting less than $X of value *just* from (showing your ad to B). That implies you must have been getting more than zero value out of (showing your ad to A), or else you wouldn't have been willing to pay $X for (A+B) in the first place. Therefore, there is some positive amount of money you'd be willing to pay to show your ad to A, even though A would prefer to opt out if they had the choice.
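
A worked numeric version of the two cases above; the dollar values are purely illustrative and not from the thread:

```python
# Illustrative numbers only: X is a hypothetical price paid to reach both A and B.
X = 10.0  # willing to pay $10 to show the ad to A and B together

# Case 1: showing the ad to A was worthless.
value_A, value_B = 0.0, 10.0
assert value_A + value_B >= X and value_A <= 0
assert value_B >= X  # paying $X for B alone is still worth it

# Case 2: B alone is worth less than $X.
value_B = 7.0
value_A = X - value_B  # at least $3, or the original deal wouldn't have been struck
assert value_A > 0  # so some positive price would be paid to reach A anyway
```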


I'd argue that if 99% of the population hide moderated content, and a thoughtful non-spam piece of content gets moderated, then it's being hidden from people who would have agreed to see it if they knew it existed. You have censorship _in effect_ if not by the strict definition.

The way you get consent is similar to asking "would you like to share a room with just people from group A, fishermen and clerks, or also include group B, dentists and convicted sex offenders?" Nobody's going to share a room with the dentists if they're arbitrarily paired with sex pests, not even someone interested in hearing more about the noble profession of dentistry.

I don't think the non-MVP version is viable. At best you might get checkboxes for five to ten categories that make a piece of content reportable, which are very broad categories. This at least lets you separate penis pill spam from hate speech, but it doesn't let you separate angry screeds about sex offenders in the Catholic church from angry screeds about blood libel and pizza restaurants.


Yeah I think this is an extremely important point


Indeed, the trick would be to make sure a very large percentage of users keep moderation on, at least for some categories, then flag anything you want to censor as belonging to one of those categories. Like flagging the latest political scandal as a Nigerian scam...

However, don't false-flag too much, else the few people actually enjoying reading about how "Prince Hassan would need your assistance in transferring his heritage" may give the trick away...


I've been waiting years to see the following built. I can't build it myself since it requires an existing community to take up:

- anyone can tag any content however they like

- anyone can filter the content they consume, by filtering on the tags provided by users they trust

- users share their list of tags, and links to peers whose lists of tags they trust

People can filter out whatever tags they don't want to see. If someone adds tags incorrectly, you can filter them out as a troll, and filter out anyone else who sees their tags as valuable.

What are the downsides here?
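
A minimal sketch of the tag-and-trust scheme described above; all names and structures here are hypothetical, not any existing protocol:

```python
# Hypothetical sketch of user-driven tag filtering, as described in the comment above.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    trusted: set = field(default_factory=set)       # peers whose tags I accept
    blocked_tags: set = field(default_factory=set)  # tags I don't want to see

# tags[content_id] -> list of (tagger, label) pairs; anyone can add to it
tags: dict = {}

def tag(content_id, tagger, label):
    tags.setdefault(content_id, []).append((tagger, label))

def visible(viewer, content_id):
    # Hide content only if a *trusted* peer applied a tag the viewer blocks.
    for tagger, label in tags.get(content_id, []):
        if tagger in viewer.trusted and label in viewer.blocked_tags:
            return False
    return True

# Example: Alice trusts Bob's tagging and filters out "spam".
alice = User("alice", trusted={"bob"}, blocked_tags={"spam"})
tag("post-1", "bob", "spam")
tag("post-2", "mallory", "spam")  # Mallory is untrusted, so her tag is ignored
assert not visible(alice, "post-1")
assert visible(alice, "post-2")
```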


"What are the downsides here?"

Wars over tagging. People not agreeing on the kinds of tags for the kind of content they produce. I'm already annoyed by trying to use tags on fanfiction sites to filter out things like "romance" and having the results still include stories that are about romance, because the author tagged them as "mystery" or "family drama" or whatever.

Trying to sort out "which one of ten thousand users do I trust their tags to be indicative of content, and filter accordingly?" is a big job.


The system should let you say “this tag doesn’t apply” and then automatically infer that you don’t trust the users who applied that tag. That doesn’t seem hard at all.
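Extending the hypothetical sketch above (same made-up `tags` and `User` structures), the inference this comment describes could be a single function:

```python
def mark_tag_wrong(viewer, content_id, label):
    # "This tag doesn't apply" here: drop trust in everyone who applied it.
    for tagger, applied in tags.get(content_id, []):
        if applied == label:
            viewer.trusted.discard(tagger)

# Example: Alice rejects the "spam" tag on post-1 and stops trusting Bob.
mark_tag_wrong(alice, "post-1", "spam")
assert "bob" not in alice.trusted
```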


Yes, very similar problems exist, and (imperfect, but what isn't?) solutions have been worked out: Wikipedia moderation, merchant/hotel/whatever ratings... Those are abused, but still, it works to some degree.

However, I do not think the climate for such solutions to work generally exists, as Scott observed: moderation is often the PC way to actually do censorship. And clearly, more and more actors are willing to up the censorship, even in regions of the world that used to champion free speech (and still pretend they do). So implementing actual moderation, where the information consumer has the final say, is not something where I expect progress; on the contrary, it would just make censorship harder and more obvious. Confusing censorship and moderation is not an error, it's 100% deliberate.


I offer a rebuttal in the form of music: https://m.youtube.com/watch?v=bOtMizMQ6oM


People who cannot figure out how to turn on a simple setting to broaden their horizons should probably be kept in the dark.


This is more about rational ignorance. If the simple setting shows me all the platform's spam AND ALSO 0.1% interesting content, why would I turn it on? Even if the platform doesn't outright censor horizon-broadening content it can still raise the associated cost of seeing it beyond what a rational actor is willing to pay.

I think it's easy to say "they should just open their minds" without considering what it's like to voluntarily fill your social media feed with spam. That's a pretty significant sacrifice and likely outweighs any benefit you'd get from the good content.


Certainly. "We're committed to free speech and therefore will allow everything not explicitly illegal" was plenty common ethos on the early internet, with /b/ on imageboards being probably the most well-known example. And yes, this included toleration of huge amounts of automated spam.


It's more like opponents of "filter bubbles" or "echo chambers" - which are both valid things to criticize. Essentially, if someone only exposes themselves to content they want to see, then at best they end up out-of-touch, and at worst they become extremists.

It's one thing to not censor conspiracy theories. It's a whole other thing to only consume conspiracies.

Nov 3, 2022·edited Nov 3, 2022

People who do cutesy cartoons with cuddly grandpa Karl:

https://miro.medium.com/max/614/1*Rk8NgfKRLzD3CUrwgh37gw.jpeg

You cannot tolerate intolerance, you see, and moderation is tolerating it.

The problem, when it comes to censorship, is that when *we* are doing it, it's not censorship (which is bad right-wing behaviour), it's moderation, or it's banning hate speech because that is violence, and the likes.

I've had to struggle with the idea of not censoring ideas I think are bad and indeed harmful. I was very pro-censorship in my youth; it's only over time I've come to "I may not agree with this, and I don't want or like it, and I do think it is harmful - but nevertheless, they have a right to state their views". That's what kept me from tearing down posters pushing for [item of social liberalisation] in my country when I was in my twenties: nobody would have seen me do it, and I thought the posters were stupid and offensive. I don't think today's twenty-year-olds would extend the same courtesy to posters promulgating views I find acceptable (see the tomato soup throwers).

So moderation yes, censorship no.


Whenever I see this comic (annoyingly often) it drives me insane. It cites The Open Society and Its Enemies as its respectable philosophical source, and implies that that book shares its message, but it's basically lying. Popper is talking mainly about refusing to tolerate people who use violence and intimidation to get their way, not "anyone who has views we consider intolerant in the abstract". In fact, IIRC, he explicitly cites people who seek to censor "dangerous" ideas without engaging with them rationally in safe debate, when they have the opportunity to do so, as being *among* the enemies of the open society...


Exactly, I find that comic deplorable, it misses the point utterly. The message is 'if you want freedom of speech, you paradoxically have to deplatform the pro-deplatforming folks'!

Cue cries of hypocrisy from the crowd banging pots and pans outside some talk and calling the deafening clanging an expression of speech. But the "tolerant ones getting destroyed" in that comic really refers to those who are tolerant towards the breach of neutral civic norms within the culture of debate: those who tolerate the disruption by pretending it's communication, which tends to happen because being a bystander is less costly for one's reputation (especially with social media around) than going the "ACLU of old protecting the KKK" route.

If you believe in censorship of ideas you don't like, well, have some of your very own medicine in advance, I guess?


Agree. The criterion is basically "is your society at acute risk of being overthrown by totalitarians?" Not "did someone use the wrong pronoun?"


I see the main problem in people (like this comic) failing to differentiate between:

1. ideas / speech

2. actions

3. persons

First, something I think is very important in any context: you can judge ideas and actions, but never condemn persons as a whole. Doing that breaks any real discussion and causes the other side to dig in mentally to the point where they can't listen to you any more. Even worse, it makes them feel, and present themselves as, victims of oppression, distracting from your critique and winning them sympathy for being treated unfairly.

I'm strongly in favor of free speech, and I think we all have to learn to tolerate ideas, including ideas that fly in the face of common sense or are outright dangerous if they are acted upon. Without tolerating the idea, there can be no debate, and thus there is no chance that proponents of bad ideas will listen to your arguments. Furthermore, if you don't listen to these proponents, you can't fine-tune your argument to their actual points, and all your reasoning ends up aimed at your straw man of their idea rather than at their actual idea. This makes you sound unconvincing to any bystander.

The only thing I agree we should be intolerant of is intolerant actions.

So I do agree that we should push back against intolerance, but only with respect for the person, on an equal level, and without escalating.

If someone spreads intolerant ideas, address the ideas with arguments.

If someone acts intolerantly, act against it. But don't overreact.

If someone is an intolerant person, set the example of a tolerant person, not only for them but also for all the others watching. They will see who is the better company.


Besides spammers, harassers, trolls, and a variety of confrontational or passive-aggressive ideological activists ("just asking questions" when it really is disingenuous, which definitely does happen), people who think breaking society into mostly-disjoint viewpoint bubbles is bad are probably not big believers in moderation as defined here. I've listed them in order of increasing "might actually have a point", except for spammers, whom I would place at about the same point as trolls.


Of course there are opponents of moderation - the argument is only rarely "I should not have to see this" but instead "this shouldn't be allowed to exist or be seen by anyone". Therefore, almost by definition, they have to be in favor of censorship instead.


Is it a "stable equilibrium" or are we looking at something like the tendency of a far from equilibrium system to extract as much energy as possible from its environment?


These moderation features are often referred to as “Subreddits” and upvotes.

I find it kind of funny how often proposals for improving Twitter boil down to “Make it more like Reddit.”

Comment deleted

Ostracizing people or blocking them because they participate in a sub you oppose isn't censorship under Scott's definition - it doesn't prevent them from speaking or prevent other people from seeing their speech.

Comment deleted

Obviously a system where you couldn't even look at below-zero posts would be even worse?

author

That doesn't seem true at all to me. Reddit doesn't have anything like the "see banned posts" button I propose, or the "filter by different levels of offensiveness". It solves the problem by having different boards people can post on with slightly different standards (although Reddit leadership still bans subreddits that get too controversial). This seems like a totally different solution.


I guess the similarity I was gesturing at (albeit poorly) is the archipelagian nature of it—you can moderate or censor content as strictly as you want, but if someone doesn’t like a certain kind of moderation, you can change how filtered your content is. (By moving to another subreddit, for example, or you can find some different set of moderation settings you like better).

In a repeated game, these solutions are very similar, even though in a single-shot scenario you can think of Reddit as having “Censorship” carried out by each subreddit’s moderators. The equivalent to unblocking a censored post is starting a new subreddit and putting the censored post there.


You can think of censorship and moderation as existing on the same spectrum of “How much work does it take to opt-out of this.” In the case of moderation like you proposed, it takes hardly any—just a button push. In the case of full-blown censorship, it can be impossible. In the case of subreddits, it’s in between—starting a new subreddit isn’t a piece of cake, but it still happens pretty often.


Reddit does have mod-driven censorship as you describe, but interestingly there is at least one website that (at least partially) implements the basic idea that Scott describes here as the MVP for moderation. Reveddit allows you to see banned posts. You can't interact with them, because it is an external site, but you can at least see what moderators have taken down. One key point is that reddit posts can be "shadow moderated", such that other users cannot see your post but you have no idea that it is invisible to others.

I think that reddit provides a more varied landscape for communication than monolithic platforms but it would be interesting to see a product that had moderation along the lines described here.


Slashdot has been doing something slightly more advanced than Scott's MVP for over 20 years. Moderators assign posts a score between -1 and +5, and users can decide the minimum score they want to see.
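
The mechanic is simple enough to sketch; a hedged illustration of the scheme as described above, not Slashdot's actual code:

```python
# Slashdot-style threshold filtering: scores run -1..+5, and each reader
# picks the minimum score they are willing to see. Nothing is deleted.
posts = [("insightful reply", 4), ("flamebait", -1), ("offtopic aside", 0)]

def filtered_feed(posts, min_score):
    # Lowering min_score to -1 reveals everything.
    return [text for text, score in posts if score >= min_score]

print(filtered_feed(posts, min_score=1))   # a typical default view
print(filtered_feed(posts, min_score=-1))  # "show me everything"
```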


I believe "filter by different levels of offensiveness" is quite comparable to their NSFW filter, and presumably the architecture in place to "see banned posts" would be similar to the filters currently implemented to prevent people from seeing spoilers in TV show subreddits unless they opt-in


This was my first thought as well!


Reddit does sortakinda have something sorta like that, I guess? Specifically, the idea of "quarantine" for a subreddit.

author

That's true, good point!


A lot of Reddit content restrictions require you to be signed in to bypass them. So it isn't quite the same. Reddit still knows, to some degree, who is looking at wrongthink, which I'm sure the CCP would support.


Adding to Axioms' comment below: quarantining a subreddit is usually an intermediate step to erasing it completely.


Exactly. Consider hierarchical filter subscriptions [mentioned in this post](https://www.reddit.com/r/TheMotte/comments/h99lly/culture_war_roundup_for_the_week_of_june_15_2020/fuwbcc7/) and in Stephenson's [2019 book](https://en.wikipedia.org/wiki/Fall;_or,_Dodge_in_Hell). To be frank, it's strange that this hasn't been done yet.


You can see reddit as a middle ground; it doesn't empower *individuals* to make their own moderation decisions but it does delegate that power from the platform to individual communities that can define different standards (with limits; some things are banned platform-wide on reddit).


The "sort by controversial" button is right there my dude


That filters nothing; it only reorders.


When a post gets too downvoted, it's collapsed by default and you have to press a button to read it. The downvoting mechanism is different from moderation only in that 1) it's not called moderation and 2) it's designed according to Scott's principles outlined in this post instead of being re-branded censorship. If you look at it objectively, reddit lets people vote on which content gives them a good user experience and then makes the bad-experience content harder to see but still accessible.


FWIW you can change some filter preferences regarding NSFW and downvote amounts. You can choose to not display submissions below a score of your choice, you can choose to not display NSFW thumbnails/previews, and you can choose to get a warning when you are about to enter a NSFW sub. Of course it's primarily user-driven rather than mod/admin-driven, but still.


One limitation I see in these systems is their global notion of "score" aggregating everyone's opinions. There are a lot of people whose opinions I would like to ignore, so I'd prefer to be able to specify my own scoring function over the disaggregated ratings by individuals.
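
A sketch of that idea, with a hypothetical data model: per-rater votes are kept disaggregated so each reader can weight them however they like:

```python
# Instead of one global score, keep each rater's vote and let every reader
# apply their own weighting function. All names here are made up.
ratings = {  # content_id -> {rater: vote}
    "post-1": {"alice": +1, "bob": -1, "mallory": -1},
}

def my_score(content_id, weights):
    # weights: rater -> how much I care about their opinion (absent = ignore)
    return sum(vote * weights.get(rater, 0)
               for rater, vote in ratings[content_id].items())

# I trust alice fully, bob somewhat, and ignore mallory entirely.
print(my_score("post-1", {"alice": 1.0, "bob": 0.5}))  # prints 0.5
```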


Having a separate "controversy corner" is functionally the same thing as "see banned posts", AFAICS.


Reddit does hide negative karma posts by default. Which is similar.

Nov 3, 2022·edited Nov 3, 2022

That's not true at all. Outside regulation gets enforced on them (i.e. you can't post "unacceptable" content within a subreddit, reddit will still delete your posts/ban you even if the subreddit mods are fine with it) and many subreddits just get removed from the site entirely.


It happens from time to time, but the overwhelming majority of content moderation is done by subreddit mods. Outside regulation tends to be limited to the most extreme handful of subreddits.


It's rare because the majority of people who reddit considers undesirable have already been driven away from the site.


Here is a post that got removed by the admins. Tell me, is this the most extreme of hate?

Nazis do (((this)))

But « thiis » [sic] is just a different type of quotation mark used in French, German, Russian and so on. https://en.wikipedia.org/wiki/Guillemet

I got a post removed by the admins for saying the following

"If you look at the map of Asian countries by how their trans laws are (link dead) the state of texas would be in the sunshine and rainbows, so maybe the texas anti-trans laws aren't too big a deal"

and the following

"There aren't many good studies regarding transwomen in sports, the best one (https://bjsm.bmj.com/content/55/11/577.abstract) showed a sudden and sharp drop after 2 years on hormones, but they did still outperform their cis counterparts.(and remember the direction of publication bias here) There was one case study of a female powerlifter (since removed by AEO on reddit :( ) that showed that she significantly outperformed her cis Peers but also was significantly worse than her pre-transition averages. Making her this weird middle ground between the 2 groups.


A few years ago there was a mass censorship campaign driven by the admins. A lot of huge, run-of-the-mill conservative subs got banned (despite very popular pro-Stalin/Mao subs still existing today).


Indeed, Reddit is almost as bad as the other platforms, especially with its automated shadowbans!

(Also, its "locking old threads" fundamentally violates netiquette.)

Platforms are fundamentally evil, don't use them! (Yes, this includes Substack; I expect it to get worse as its market share increases...)


How is locking old threads a violation of netiquette? Most message boards I've used have a rule against "thread necromancy," because if a discussion has run its course people are generally not happy about having to revisit an argument from two years ago that they've mostly forgotten about.

(I think the only website I know that wants users to re-open old threads instead of starting new ones is Stack Exchange, and that's because "threads" there are answers to specific questions rather than conversations.)


Most message boards I've used also have a rule against thread duplication. Thread necromancy *is* what netiquette calls for in the cases where the discussion is not hopelessly outdated. And sometimes new information comes up that can be very helpful to the people who were part of the old discussion, and which they might not otherwise notice (through e-mail subscriptions to the thread and other forms of notification) if a brand new thread is used instead!

But first comes the expectation that you search the existing discussions to see whether the questions you might have had / discussions you would like to start haven't already been answered and started.

Reddit instead seems to treat these expectations, and the moderation that they require, as hopeless?

Here's one example of a heavily moderated board along these lines :

https://forums.factorio.com/viewtopic.php?p=29281#p29281


See, that makes sense if you're running a forum for tech support, or a suggestions thread for a game, where the goal is to group all the information for one topic in one place. Or if you have a megathread for some big issue of the day ("War in Ukraine megathread"), which will routinely get re-upped whenever related news comes out.

But it doesn't make sense for threads on more specific topics, like a particular news article or political event - news can get old quickly, politics can change quickly, and you're more likely to revive a flame war than you are to actually provide new information or a new argument. Similarly, forums for creative writing discourage necros because they drag an old story to the front page when there's nothing new to add (no new chapters have been written).

Threads in those contexts are less collections of information and more like conversation topics - you'd be a poor conversationalist if you showed up at a party and were like "Hey, remember three months ago, when we were talking about that movie? I didn't get to say this at the time, but I think you're wrong about Batman's motivation..."

Reddit is mostly a news aggregator, and is *designed* to have links fall off the front page as they get older, so it makes sense to archive old threads once they've run their course.


Ok, it might make sense in *some* specific contexts... but even for "old" news you might want to get updates sometimes (though depending on the kind of news, and how old it is, a new thread linking to the old one might indeed be more appropriate).

(And let's face it, politics are likely to turn into a flamewar regardless, necro or no necro... not to mention that doing this probably violates the "don't necro if there is not a good reason to do so" rule anyway.)

Reddit has perhaps become too popular for its own good; I see it more and more often being used for "serious"/"in depth" discussions, rather than for the chat-like discussions that you mention, often even *replacing* forums. It's particularly aggravating for lower-volume subreddits, when someone has posted some arcane question that you have the answer to, but the thread is now locked...

founding

Note that Reddit is outsourcing moderation/censorship. Subreddit admins are under considerable pressure to police their subreddits to match the (often fuzzy) standards of Reddit management, or face a real danger of having their subreddit banned or quarantined.

The fuzziness in particular is pretty efficient and/or annoying, depending on your point of view.


Well no, maybe that was Reddit before "Anti-Evil Operations". Now... https://www.reddit.com/r/TheMotte/comments/x5t3jh/meta_the_motte_is_dead_long_live_the_motte/

> Reddit has become increasingly hostile - we just had a comment removed for discussing the meaning of various types of parenthesis, I'm not making that up, I'm not exaggerating, that's a thing that happened - and if the community is to survive, we need to disengage from Reddit.


Was the "various types of parentheses" by any chance discussing the use of (((triple parens))) to indicate that a person is Jewish? Because, like, a discussion about grammar is different from a discussion about Nazi dog-whistles.


It was, verbatim:

--------------------------

Nazis do (((this)))

But « thiis » [sic] is just a different type of quotation mark used in French, German, Russian and so on. https://en.wikipedia.org/wiki/Guillemet

---------------------------

So, yes, it is sorta discussing Nazi dog-whistles, but it isn't a dog-whistle itself. Someone just asked if « » aren't these dog-whistles; someone else answered that no, it's ((())). And that answer was deleted, without possibility of reversal by the mods.

I tried using these in a comment to see if it'd get deleted just on the basis of having (((word))), regardless of context. Nope. So it's not automatic. Someone actually decided to delete this, going over the heads of the actual mods of the subreddit.


Reddit used to be much more like that than it is now; the uniform sitewide moderation standards get more maximalist all the time. Although at the very least, it's hard to argue *something* doesn't need to be done about sites (and subsites) that mostly act as hubs to coordinate harassment/doxxing/swatting/stochastic terrorism. On the other hand, the claim that banning them off places like reddit is actually *effective* is something I find a source of worry as much as relief... if something that non-content-specific neutralizes organizing for bad causes, it could have a similar effect on good causes, which if anything seem to require more organization and resources.


Reddit itself bans certain viewpoints from being discussed across the site as a whole.

Nov 3, 2022·edited Nov 3, 2022

Upvotes are a way for other people to choose the content I see. Subreddits are good but too blunt a tool. Also Reddit censors quite a lot from the top, at the subreddit and at the individual post level.


In a fully decentralized system each user would select moderation options themselves and it would be impossible for any central authority to censor. But someone would complain about child porn, and governments wouldn't have the option of tolerating it.

author

I agree that child porn is one of the strongest (both in reality, and in public-relations-world) arguments for censorship, and that since it has no public value even a site that was committed to free speech in every other way should just ban it.


I guess you don't mean AI-simulated CP, which definitely has value, since it would reduce SA and exploitation of non-simulated underage humans. But this is way too far outside the Overton window at this point.


Hentai exists already, but we still have pedophiles.


Yes, but hentai does not cater to the same audience.

Nov 3, 2022·edited Nov 3, 2022

I'm not sure "does not cater" is a great description, but lots of pedophiles consume hentai featuring child characters


True, but we have no good way of knowing if the number of offending pedophiles has gone down as access to hentai has gone up.

Nov 3, 2022·edited Nov 3, 2022

The broad social effects of normalizing touching yourself to images of graphic rape of toddlers, indistinguishable from the real thing, are not even considered, much less measured.

They just want to get married, and there will be no further steps down the slope.


I think the argument there is by analogy with the argument over porn; that where porn consumption increases as porn is legalised/more easily accessible, rates of rape go down.

But people who consume porn still want sex with real people, I imagine, and can get it without needing to resort to rape all the time. For paedophiles, that's not an option. Are there any studies on "the choice for obtaining sex was either rape or porn, and we demonstrate that porn in this scenario reduces rape"? Because that's what paedophilia is about, and if "consuming pictures of fake children that look realistic" does reduce offending, then it's something to consider.

I don't like it, but I'd consider it.

As for the slope, well, we all know that the slippery slope is a fallacy, so don't worry about the current push to rehabilitate, in the public view, "non-offending MAPs (minor-attracted persons, the current preferred term, since 'paedophile' is so negative)".

Nov 3, 2022·edited Nov 3, 2022

I know that there have been some _studies_ [reddit_soyjack.png] that suggest that access to pornography reduces rape. For example, this article notes that rape declined by 44% between 1995 and 2015, a period when porn went from embarrassing-to-obtain to ubiquitous. https://www.psychologytoday.com/us/blog/all-about-sex/201601/evidence-mounts-more-porn-less-sexual-assault

But the burglary rate declined by more than 44% during the same period. I suppose half of all potential burglars could be sitting at home beating off, but I don't think anyone has ever seriously made the argument that pornography is the solution to crime broadly. https://www.statista.com/statistics/191243/reported-burglary-rate-in-the-us-since-1990/


>I don't think anyone has ever seriously made the argument that pornography is the solution to crime broadly.

I mean, I'm quite certain that a few authors inspired by Aldous Huxley have, at some point.


The study I read looked at internet penetration by state and rape rates by state. I think it also looked at murder rates. Rape anticorrelated with internet penetration, murder didn't.


> The broad social effects of normalizing touching yourself to images of graphic rape of toddlers, indistinguishable from the real thing, are not even considered, much less measured.

Considering how "violence in video games" ended up....

Also, nothing is "measured" because people feel free to preemptively call to ban things.


> Considering how "violence in video games" ended up....

Most people get aroused from thinking about or seeing other people having sex (that is, after all, the whole point of pornography); very few fly into a murderous rage from thinking about or seeing other people fighting.


Moral panic over violent vidya is a fair point. But a more valid comparison is between video games that depict violence indistinguishable from the real thing and CSAM that depicts the abuse of children indistinguishable from the real thing. I’m also pretty sure that what goes on in the brain during the use of pornography of any type is distinguishable from what goes on in the brain while someone is playing GTA, Watch Dogs, or Postal.

Pornography _may_ reduce rape, although I'm skeptical of those claims, as I explained above. Perhaps I'm wrong. But what I was concerned about in my original comment is the broad social effects of ubiquitous (child) pornography. We might decide that the rape-reducing effects of pornography are worth the costs, but the costs are there: pornography harms relationships, distorts men's (and boys') expectations about sex, and is a huge waste of time. Sedating potential rapists is a worthy goal; sedating tens of millions of non-rapist men is a cost. Normalizing the idea that adult women who have the wherewithal to resist a man's advances exist for men's pleasure has some social cost; normalizing the idea that defenseless children exist for men's pleasure has a greater social cost.

Not that you asked, but my primary objection to vidya is that it's unproductive. Acquiring skill at Call of Duty produces nothing worthwhile. The same could be said for television and spectator sports. I wouldn't ban any of these, but we shouldn't pretend that the social costs don't exist.


Child porn is not protected by the First Amendment, nor are instructions to make bombs. There is jurisprudence on this. But the bottom line is that everything that isn't explicitly banned is allowed.


Do you happen to have a court case for bomb making instructions not being under the first amendment?

I could mostly find https://en.wikipedia.org/wiki/United_States_v._Progressive,_Inc. which ended without creating a precedent before going anywhere near the supreme court.


https://www.justice.gov/opa/pr/man-sentenced-20-years-prison-attempting-provide-material-support-isis-1

This one was a guilty plea, and was apparently just decided in July, so might still see appeal. But currently you can be sentenced to twenty years for uploading what you think are bomb-making instructions for the purpose of spreading terrorism.


Agreed. I would argue though that in that case, incitement to commit crimes is clearly present and providing technical details probably increased the sentence from a few years to 20. So much for Florida man.

If the framing instead had been "here is how to make explosives in case Florida is occupied by the Soviets", or even no framing at all, I doubt that he would have gotten a sentence.

On Wikipedia, the contents of US Army field manual 5–31 are listed as "Boobytraps – Describes how regular demolition charges and materials can be used for victim-initiated explosive devices. This manual is no longer active, but is still frequently referenced." So more of a bomb placing than making manual, but apparently an unclassified (?) publication by the US government.

Another sort of instruction manual for making "munitions" covers PGP:

https://en.wikipedia.org/wiki/Pretty_Good_Privacy#Criminal_investigation

"The claimed principle was simple: export of munitions—guns, bombs, planes, and software—was (and remains) restricted; but the export of books is protected by the First Amendment. The question was never tested in court with respect to PGP. In cases addressing other encryption software, however, two federal appeals courts have established the rule that cryptographic software source code is speech protected by the First Amendment"

Personally, I think efforts to keep the lid on procedural knowledge are misguided (especially if it has spread somewhat already). Any fool can find instructions to make nitroglycerin on the web in minutes. (I guess. I currently have no fools around, so I can't verify it.) The hard parts are finding the ingredients and not losing your life when following the instructions.


Yeah, the law currently requires proving intent.

https://www.findlaw.com/legalblogs/criminal-defense/is-it-illegal-to-post-bomb-making-instructions-online/

But note the man had already uploaded ISIS-friendly videos, and this was the only one that resulted in a charge. The bomb one was the one not covered by the First Amendment.


Notably, child sexual abuse material is a bad *file*, not a bad *statement*. And yes, I realize you can steganographically encode files in text, or, uh, non-steganographically do that in Base64, but you probably can't do much file transfer over tweets. Most of the other horrors moderators report having to deal with are also images and videos. I realize that 'censorship for non-text, moderation for text' is not the best proposal, nor is 'post text for free, images for a fee per image', but they do look tempting sometimes.
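
For concreteness on the Base64 point: any file round-trips losslessly through plain printable text, which is why the file/statement boundary is leaky (a sketch using only the standard library):

```python
# Any binary file can be carried as printable text and recovered byte-for-byte.
import base64

payload = bytes(range(256))  # stand-in for an arbitrary binary file
as_text = base64.b64encode(payload).decode("ascii")  # printable, postable text
assert base64.b64decode(as_text) == payload  # exact round trip
```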


It is enough to share links to these files on upload sites; no need to upload the images themselves. Links are easily disguised as text, if you were to expand the censorship to links.


"Public value" as a criterion is not needed in this case - it's plain old illegal, as is bomb-making and a lot of other stuff. It doesn't even have to be debated or moderated, just passed onto the police.


Are bomb-making instructions illegal in the US? I don't think they are, but I do not claim to be a lawyer....


Sure, but if we're resorting to the law to settle the issue, then the law doesn't prevent the type of censorship that's being called out here.

If you want to oppose that type of censorship, you need to do so from a moral framework that doesn't just refer back to the law, because the law allows it. So your moral framework has to have a position on bomb-making instructions and cp, not just say 'and those are illegal'.


If all debates could be settled by "what is the current law?", there would hardly be any debates about anything.


One must differentiate between the act, and intentions to act, and the discussion of the act. It's illegal (in Australia) to even possess an image of child pornography (CP), let alone participate or enable. However, it's perfectly legal to discuss CP in the abstract (I'm doing it now) and whether the law or social mores are in alignment, etc. I could not legally send a bomb-making file to a bomb-maker who then makes or intends to make one, but I could (probably) discuss the details of making one and how hard it might be to obtain components, as part of a broader discussion about, say, public safety and terrorism, and how to make bomb components harder to obtain. However, the moderators of my online newspaper's comments would not tolerate these topics for a second, as the totality of both topics is dangerous to approach in any form (when does legal discussion slip into illegal discussion?) and best avoided.

Expand full comment

With child porn, we still have a sense of why it is bad because we acknowledge the suffering involved in producing it: real children were really abused. Whatever fantasies paedophiles have (that children are really sexual beings, able to consent, who would freely engage in sex with adults, and that the real harm comes from the social stigma, which confuses them, makes them ashamed and afraid, and inflicts psychological damage about normal, natural sexuality), the reality is that it is not consenting kids having fun sex with caring adults; it's produced for a market by abusers.

AI-generated 'real fake kids' porn is going to blur that distinction. After all, it's not a real child. It can be a happy fake fantasy about consenting fun for all. So where's the harm? Nobody was hurt making this.

And then once we accept that, then it's "so what is so bad about paedophilia anyway, or rather 'minor attracted persons' which is the term you *should* use since it's Scientific and The Science backs it up"? After all, they didn't *choose* their sexuality, they were born that way. And the porn they consume doesn't harm anyone. And they've never, ever even looked at a real child that way.

Besides, now that you've accepted that there is real and hurtful prejudice against MAPs, what *is* a minor anyway? For ordinary sex, you've accepted that age of consent laws can be bent; that it's silly to think that 18 can consent but 17 and 360 days can't; that we shouldn't prosecute a 17 year old for having sex with their 15 year old girlfriend or boyfriend.

So if John is 30 and Billy is 16, is it *really* statutory rape? If John was 18 and Billy was 16, it would be an acceptable exception! If Billy can consent at 16, what is the difference if John is 18 or 30?

If 16, why not 14?

Constant dropping wears away the stone.

Expand full comment

> Constant dropping wears away the stone.

So far in the direction of banning more and more things. Slippery slope is symmetrical. See AI Dungeon implosion, or explicit Harry Potter fanfics being theoretically illegal. Also illustrations (so now we have global internet, where you can break the law if you happen to visit some Japanese site if you're not in Japan).

Absurd: https://ansuz.sooke.bc.ca/entry/335

> Saturday the 15th, afternoon: Gargron the Mastodon developer and admin of mastodon.social creates a Github issue to discuss technological aspects of the ロリコン issue, mostly focused on the potential legal exposure for server admins whose servers may end up caching, and thus "possessing," material that is illegal to possess in their local jurisdiction. In postings there and on the Mastodon network, both in English and Japanese, the administrators of pawoo.net declare that they will not ban from their own servers material that is legal in Japan, but they will attempt to enforce a rule that "mature images" must be hidden by NSFW tags, and they will cooperate with other technical workers in attempts to keep "mature images" out of caches where they might create liability for third parties.

> I think that the word choice of the Pixiv admins calling this stuff "mature images" in their English-language communications is telling: Japanese people think what the English speakers are freaking out over is the possibility that children might see the images. They're "mature" images that ought to be for consenting adults only, is the objection to ロリコン that comes closest to making any kind of sense from a Japanese point of view.

> The idea that even consenting adults ought not to be allowed to see such images isn't on the Japanese radar, and would seem to be wacky moonbat nonsense, even though it is so obvious, and so obviously sensible, as to be unspoken on the English side.

Expand full comment

I do not see a slippery slope anywhere on this topic.

The consensus that child sexual abuse is bad is probably even stronger than the consensus that Hitler was bad. While some tried to co-opt the sexual revolution to decriminalize child abuse, those attempts failed completely. Today, from the mainstream to the political fringes, from Trump voters to woke activists, from Marxists to neo-Nazis, basically nobody suggests that child sexual abuse should be punished less harshly than it is. It is the one evil we can all agree on.

As a society, we have decided that the most emotionally satisfying way to deal with this is basically to lump pedophiles together with child sexual abusers and consumers of CSAM. Statistically, some politicians are pedophiles. But while many politicians nowadays are open about being LGBT and some might put a list of their fetishes on the web (?) or whatever, I do not recall any politician (outside explicitly pedophile parties and after 2000) stating that they are attracted to prepubescent humans. It would be social suicide; they might as well tattoo a swastika on their forehead.

I do not think that this creates the optimal incentive landscape for non-criminal pedophiles. Will they try to seek medical help, knowing full well that some activists would leak a list of pedophile patients in a heartbeat?

Banning images of children being abused was obviously a good thing. But extending the ban to anything which could possibly be used to arouse pedos, from any lewds of underage persons (which means that any 12 year old with a mobile can create the legal/digital equivalent of anthrax) to cartoon/AI-generated sex to child-like sex dolls, seems a bit excessive.

Expand full comment

Well how else am I supposed to learn how to make fireworks?

Expand full comment

And what about those who argue "That is a bad law, I do not consider myself bound by it, and I will not only break it, I will work to have that law overturned"?

Expand full comment

I don't think this is necessarily the solid example you think it is. I think current child pornography laws are actually really bad. See e.g. this article by Rick Falkvinge: https://falkvinge.net/2012/09/07/three-reasons-child-porn-must-be-re-legalized-in-the-coming-decade/

Granted, these are largely arguments why *current* such laws are bad -- i.e., they're overbroad (covering things that cannot reasonably be claimed to be harmful -- they've even been extended to *animated* child pornography), and them being strict liability is ridiculous. So notionally one could have child porn laws that didn't have these flaws and that weren't such a problem.

That said, one of the arguments made applies to any such laws. Namely, they're *unnecessary*, because if you are filming the rape of a child for money, you're already doing something illegal; child-porn laws are just one more case of people trying to take something that's already illegal and declaring it double-extra-illegal and catching a bunch of false positives in the process. These sorts of laws, where you try to outlaw something that's already illegal, are just usually bad ideas. There's really no reason for child porn to have a different legal status than films of any other horrific illegal act, e.g., snuff films.

Expand full comment

Child porn laws seem quite distinct from garden variety laws that apply to already illegal behavior, in that they ban behavior that would otherwise be legal - namely the possession / distribution of child porn which is distinct from child sexual abuse.

The argument you present only seems to apply to laws regarding the initial production.

As for the rationale for banning the possession or distribution of child porn, it is to discourage further child sexual abuse by making it less profitable.

Expand full comment

This. In theory, if you cannot produce something legally, you cannot distribute it. In practice, the person who produced it will hide their face, and the distributors will be like "I just found this somewhere online, I honestly have no idea who produced it".

Or it will be produced in some country where the production is legal. Or they will *pretend* it was produced in that country.

Or maybe the producer is willing to risk jail time. Or the producer lives in a country where it is technically illegal, but the police don't care, or cannot enforce the law (e.g. against a sufficiently powerful criminal organization). Legalized distribution would create an extra source of income for Mexican drug cartels.

Expand full comment

>There's really no reason for child porn to have a different legal status than films of any other horrific illegal act, e.g., snuff films.

I remember 'Faces of Death' being a popular franchise back in the day.

It's very possible to get video of illegal acts; even if the person who made the video goes to jail, once they have sent it to anyone else, that second person can make infinite copies and disperse it anywhere if there aren't any laws against doing so.

I'm pretty sure I've obtained almost none of the normal porn I've watched online from the original creator.

Expand full comment

There's a lot of talk about the difference between 8-year-olds and 17-year-olds, but they don't seem to be calling for a legal distinction between the two. Seems like that should have been a thing if the murder-and-jaywalking comparison was heartfelt.

They've got similar possession laws for endangered species. https://www.kwch.com/content/news/Taking-an-eagles-feather-could-get-you-a-100000-fine-and-a-year-in-jail-490671881.html

Is that relevant? Probably not. But it's a neat fact, and everyone should know it.

Expand full comment

If the goal is to reduce financial incentives for content producers, they should legalize pirated content and only go after people who bought or sold films of illegal acts.

Expand full comment

What if the content is provided for free, on a web page full of ads?

Expand full comment

So long as the piracy site isn't paying the producer, it's OK.

Expand full comment

The double illegality is because the rat bastards who do this shit will try to wiggle out with "ah, but it's only illegal if it's *not* consensual, and Susie consented freely! This wasn't rape, this was just ordinary porn!"

There has been a case like this in my own country several years back; a Big Brain judge decided statutory rape laws were wrong, why should we criminalise two 15 year olds having sex? So for convoluted reasons, the law was nullified for a very short period until the new law could be brought in by the legislature.

And sure enough, a guy convicted of raping a 12 year old had his lawyer do an appeal over "if the statutory rape law was unconstitutional so it was overturned, how could my client be convicted for a crime that didn't exist?"

Give the rat bastards even the glimmer of a loophole and they'll go for it. Keep it double-double illegal.

Expand full comment

More generally, it's reasonable to believe that certain content will be made *illegal* and that platforms will naturally ban such illegal content. So we support some "censorship" by government/lawmakers. In contrast, I doubt there are any cases where you would argue for platforms to ban *legal* content.

Expand full comment
founding

But wouldn't that be covered by existing law? So not subject to the subjective opinion of the business doing moderation. I think the second bullet point above only becomes interesting when we consider things that are allowed by current law, yet we would still want to ban.

Expand full comment

Yeah, but I think the point here is that the existence of CP stops us from having a decentralized/distributed social media platform, where moderation is impossible in principle because there's no central authority to do it and no central server to do it on.

Expand full comment

How is this different from the World Wide Web?

Expand full comment

There is some evidence that porn is a substitute for rape, not a complement, which is a somewhat analogous issue.

Expand full comment

It's hard to know how to extrapolate. Although many rapists or would-be rapists surely think that porn, fornication, and the like are immoral, I would imagine most people who end up watching porn rather than raping women can see that what they're doing hurts no one and is perfectly fine.

With child porn, it's possible the more analogous case would be to a rape fetishist watching rape snuff videos. It's not implausible that the outlet makes them less likely to rape a woman themselves, but it's not implausible either that it would condition them to be more blase about rape.

I know from a subjective perspective, it seems like the reason I don't consume child porn isn't that it's illegal, but that it's immoral (I could almost certainly get away with it) and that my commitment to morality has been a good thing. It's conceivable, I admit, that I've been hornier and more dangerous because of my choice, but it really feels like my resolution and commitment to good behavior have been a strength, not a vulnerability, and that immoral choices would have been moving down a dangerous path.

(This all isn't to doubt the general principle that pedophiles can benefit from finding sources of sexual release, as people attracted to women do, just some skepticism about applying that principle to actual child porn.)

Expand full comment
founding

I think in those cases a good middle ground is to use your own mind and the minds of others to aid in an outlet. Discussion, roleplay, art, written fiction, and AI can be particularly helpful in that department, since none of them require acts to be performed. Hopefully in the future whatever the equivalent of the holodeck is won't be censored to heck and will support simulation of the entire human (and inhuman) experience among consenting individuals.

Expand full comment

Is there a good reference for some people that have studied this? I was under the impression that there was no substitution effect, but this belief is purely from cultural osmosis.

Expand full comment

I read one article some years ago, which I have briefly described; I am not familiar with the whole literature.

Expand full comment

> I agree that child porn is one of the strongest (both in reality, and in public-relations-world) arguments for censorship, and that since it has no public value

CP is a broad brush which includes animated characters. Access to CP has also been shown to reduce recidivism in sex offenders, which seems publicly valuable, so I'm not sure that "no public value" is a strictly correct valuation of CP.

Maybe you mean "allowing it on *any* platform" has no public value, and that access to it should be restricted to those who are demonstrably struggling with attraction to children. Maybe that's reasonable. Then again, people who don't like CP won't actually look at it, and having access to it could prevent a first offence, so I'm not sure that's necessarily true either.

Expand full comment

The traditional (pre-1960s) way of squaring censorship of child porn (and other kinds of porn, for that matter) with support for freedom of speech was to say that "speech" had to have some kind of propositional content. "The government's policies are stupid and harmful" has propositional content, and so would fall under free speech protections; an image of a child getting raped doesn't, and so wouldn't.

Expand full comment

Non-offending pedophile here.

Child pornography should be illegal, if for no other reason than children can't consent to have images of them at a vulnerable time spread to others. Obviously child abuse should be illegal.

However, what about fictional/artificial material (drawings, computer renders, stories, AI-generated artwork) that depicts sexual acts with children? I definitely get the visceral argument that this stuff is gross. However, there are legitimate arguments that depictions without real children don't harm anyone and provide a valuable sexual outlet.

I can speak from experience. I'm a successful professional with a very active social life, who is out to several wonderful and supportive friends. Nonetheless, with no sexual outlet, it's easy to feel both lonely and sexually isolated. "Artificial" child pornography provides a real improvement to my quality of life. While I'm confident that I would never hurt children regardless, these materials certainly haven't made me any more likely to offend, and I think many people can see why having *some* outlet is much safer than having none.

I've written about this before, for example here:

https://livingwithpedophilia.wordpress.com/2020/09/30/is-artificial-child-pornography-a-good-outlet/

Under current law the legality of artificial child pornography is questionable at best. I would claim (but with somewhat shaky evidence) that its scarcity leads to greater consumption of real child pornography, and quite possibly to more child abuse. In other words, even saying that child porn should be censored needs more definition to be well-targeted.

Scott, I actually found your post quite clarifying personally because until recently I posted on Quora about my pedophilia. I provided views into the lives of non-offending pedophiles and shared advice for the thirteen-year-old kid who is just realizing he's a pedophile, and is trying to figure out how to live his life. Quite frankly, no one supports that kid, and without good support they're going to search online and find child pornography or worse. I had over a million views on Quora and had been writing there for over six years with zero concerns and an invitation to the Top Writer program, until a couple of weeks ago when they deleted my account. It was incredibly frustrating, both because of the work I'd put into making a good resource and the sense that I was really helping people. I hadn't violated their policies; it was a case of "we don't want you writing about pedophilia, even if you're advising people on how to productively address their attractions."

It's an interesting question. Is sharing non-sexual information about pedophiles, and advice for living your life morally, something that should be restricted? I think no, because of the huge benefits it provides to have good information out there. What your post did was give a name to it: all of my conversations had been civil, so you helped me understand that what I experienced was not moderation, but censorship. (That didn't make it any less frustrating, but it helped me process six years of work that was simply removed.)

Expand full comment

That was how early spam fighting worked on USENET: there were people/programs that did nothing but publish real-time filters you could implement for yourself or your servers.

Expand full comment

Why can’t the government prosecute the individuals sharing the illegal content? There’s no reason the enforcement needs to be centralized.

Expand full comment

Because that's hard, basically.

The people posting the objectionable material always have the option of using VPNs and other methods to disguise their identities and make themselves hard to find and prosecute.

Even if you find one, it's still a long time and a lot of resources to prosecute a single person, who might be 1/100,000th of the problem. You definitely don't have the resources to find and prosecute all 100,000, so this strategy always leaves the objectionable material out there circulating, causing further outrage and demands for action.

Politically, you can never 'win' with this strategy; like the War on Drugs, you can throw big press releases for individual busts, but the problem will always exist, causing outrage and harvesting clicks for pundits.

Whereas attacking the platform is a single, simple, legible target, and if you hit it hard enough you can theoretically 'win', driving the problem back to the fringes of the internet where most people and pundits don't see or hear about it. That stops the outrage and counts as a big win that scores political points.

Expand full comment

I remember how wild the internet was in the late 90s and early 2000s, and how openly visible porn was. It was in ads on otherwise reputable sites, including 1-3 clicks away from places like Wikipedia, following links available directly on the site.

I recall being bothered that I would never be able to share the internet with my children, because there was no way to even try to browse without coming into contact with violence, swearing, and porn.

Trying to go back to the good old days of the early internet may not be a great idea.

Expand full comment

Once it's shared on the hosting site, the hosting site is the one sharing the content.

Expand full comment

Financial scams also tend to fall into the category of “both sender and recipient want to share this information”.

Expand full comment

Obvious financial scams provide a valuable public service of separating fools from their money. If fools spent their money on useful things then they would use those things to inflict foolishness upon the rest of us.

Expand full comment

I agree there's some value in inoculating some people - or rather some people refuse to learn except the hard way. But we shouldn't be rewarding the scammers or giving them a ready profit stream, that only makes it easier for them to target the rest of us.

Expand full comment
author

I don't think this is true - they also separate nice elderly people who don't understand the modern world anymore from their money. I have met some of these people and it is pretty heartbreaking. The penalty for being financially unsavvy (even ridiculously financially unsavvy) shouldn't be penury. While I agree that censorship is bad and that we may want to go easy on financial scams to avoid it, I would stop short of saying victims deserve it.

Expand full comment

Not to mention most governments have laws against fraud for a reason… society tends to agree that it is wrong and isn’t protected speech.

Expand full comment

We don't have the capacity in the legal system to apply it in the case of social media.

Can you imagine the daily number of instances of libel on Twitter?

Expand full comment

I’m not saying the legal system needs to be involved. I’m saying we all seem to agree that obvious scams are not protected speech, and should be censored!

Expand full comment

I would imagine there are actually quite few. Libel isn't just saying something untrue about someone -- it has to cause them some harm. If it doesn't, they have no standing and can't sue. Do people lie and insult each other on Twitter? Sure. But is it libel? In the vast majority of cases, probably not.

Expand full comment

I spent the last five years of my mother's life as her caretaker, and when I arrived on the scene, the first thing I had to do was extricate her from various scams she'd gotten into.

I understand that there can be a hazy line between a bad but legitimate contract and a fraud, but unless we have the capacity to use the law in individual cases (which we don't with social media, which is the crux of the problem), then I'm for drawing a fairly bright line around this sort of speech.

Maybe have an opt-out for commercial speech, or put a wall between speech and commerce.

Expand full comment

I would argue that scams, especially foreign scams, drain money from the economy. Fools sending money to scammers in India aren't spending it on local services and goods.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

If a fool is passing me on the far-side shoulder in his motorcycle at high speed while there are driveways nearby, then I do not care that he bought his motorcycle directly from the local motorcycle shop who assembles their products on-premises from made-in-America components - I care that he had enough money to buy a motorcycle because he hadn't been scammed hard enough.

Expand full comment

These are two psychologically different kinds of excessive optimism and I don't think they have as much correlation in the population as your argument requires.

Expand full comment

A lot of those scams hit elderly people losing their faculties, who would otherwise pass that money on to their children when they die.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

You have described a particular category of fools.

Expand full comment

"Old people with dementia deserve to have all their money stolen" is not a take I expected to see today, but the internet contains multitudes.

Expand full comment

> If fools spent their money on useful things then they would use those things to inflict foolishness upon the rest of us.

Did you forget your own goalpost, or are you claiming that no one's children are useful?

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

*Some* people's children know when their parents have lost their faculties and take appropriate countermeasures.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

It’s not consensual if the recipient is being deceived, though.

Expand full comment

That would seem to justify 'moderation' of all untrue content, which is about 50 steps further down the road of 'censorship' than we're already at.

Expand full comment

Exactly. Even people in North Korea are allowed to say things that the authorities already believe.

Expand full comment

But then you need an objective standard for "deceptive" content. If I want to convince you that crypto is great and my new coin is the future of finance, but somebody else is convinced that all of crypto is an obvious scam, then who gets to decide if "the recipient is being deceived" in our communication? What if I want to convince you that the only way to reach Heaven is through Jesus?

Expand full comment

This is generally a problem with the Non-Aggression Principle. The idea of not initiating force or fraud sounds fantastic, until you realize that it is impossible to define "fraud" without defining "truth".

(Which, I suspect, is why most libertarians only focus on the part about force.)

Expand full comment
Comment deleted
Expand full comment

Is this the kind of thing that lawyers help with?

Expand full comment

The legal definition of fraud requires not just lying, but lying intentionally for the purpose of stealing someone's stuff. This is a tiny subset of all the false things people say online.

Expand full comment

Prediction markets for regret?

Expand full comment

On a per-comment basis? Doesn't seem very scalable...

Expand full comment

The reason why no social media app does this is that users are not the customer. Advertisers are the customer. It might be a better user experience to opt in or out of various forms of moderation, but the money comes from advertisers who simply do not want their ads to appear under somebody saying some heinous stuff. So heinous stuff gets moderated away for everyone.

Expand full comment
author

I actually have a little trouble understanding this - are people too dumb to understand that if Coca-Cola does an ad deal with Twitter, and 1% of ads are for Coke, and your friend Bob who you deliberately follow says "I hate Jews", and there's a Coke ad underneath that, this doesn't mean Coke hates Jews?

I think this could be partly solved by not having ads appear near things that offend *you* - so if I opt into eg anti-vaccine posts, I won't mind having ads near anti-vaccine posts.

Otherwise, I wonder if there are some products that are natural complements to offensive posts, whose makers you could strike ad deals with. Like ads for alternative medicine products next to anti-vax posts or something.

Expand full comment

Isn't that EXACTLY how it happened on YouTube?

Expand full comment
author

I actually don't know the story of what happened on YouTube - tell me more?

That having been said, I expect YouTube is a harder case than Twitter because ads are on specific videos, rather than in between tweets.

Expand full comment

Here's google responding to the companies, who were responding to the public.

https://www.theguardian.com/technology/2017/mar/25/google-youtube-advertising-extremist-content-att-verizon

"Google's bad week: YouTube loses millions as advertising row reaches US"

Expand full comment

YouTube found a partial compromise (not to say that’s it’s a perfect one) by demonetizing lots of videos but leaving them up.

Expand full comment

YouTube has had a few issues with this kind of thing, but the clear-cut case is what happened with Matt Wattson.

Long story short, Matt believed Youtube was doing nothing about content that pedophiles were crowding around (leaving timestamps in the comments of videos with kids in them, for example), so he contacted the advertisers directly and told them their ads were appearing on those videos.

Now, he wasn't wrong, the ads were showing up. But it was never proven that the content was monetized (basically, money from ads would go partially to the uploader), and his argument that Youtube was showing American ads on these videos was incorrect - YouTube shows ads to a person based on what that person's profile indicates, not based on the content.

But the mere hint that a company's ads were being shown next to pedophile-attracting content was enough to get several major brands to pull out.

I could go on, but it would be bad for my stress levels. You can read more about the 4 Apocalypses here: https://youtube.fandom.com/wiki/YouTube_Adpocalypse

TL;DR - People are misinformed about how advertising works with user-generated content, and companies will always look to minimize their risk of being even ASSOCIATED with "bad" content.

Expand full comment

I realize not many people would have much empathy, but I thought I'd share that pedophiles created websites both for embedding Youtube videos (with no native comments on the pages) and for easily rehosting Youtube videos, in order to discourage people from leaving creepy comments, which could be way worse than timestamps to juicy moments. (In the latter case, it also erased other traces along with comments.) Obviously this didn't quite work, but you can't say no one tried while Google was still sleeping.

Expand full comment

YouTube’s a tough case in addition because creators can make a living making videos on the platform, and then have that revenue completely stripped away by a single content moderation change made at the request of huge corporate advertisers. Essentially, YouTube has complete control over which types of content creator can make a living. You might have a highly successful gun hobbyist channel, which is suddenly worth nothing in the wake of a mass shooting, with corporations suddenly unwilling to run ads on your channel. Any random moderation change to the YouTube rules could fundamentally shape the entire YouTube landscape, ruin creators' careers, and decide what videos get made (by limiting the money given to “unsavory” or transgressive creators).

Hell, YouTube has become so squeaky clean that not even gaming channels for adults can afford to keep swear words, most having to employ editors to put sound effects over the words “fuck” or “shit” just to keep it clean enough for advertisers.

Expand full comment

> corporations suddenly unwilling to run ads on your channel

Oh, no. That's also done by employees internally. When Google co-sponsored CPAC a few years ago and their logo was next to the NRA as significant co-sponsors, there was a *huge* internal uproar.

Expand full comment

When I worked at Google in 2011-2013 I was working on the original checkbox that brand advertisers could check to avoid showing their ads on potentially offensive content (Brandrank). There was an AdSense version of that for text webpages, and while I was there we created a similar thing for classifying YouTube videos and letting advertisers opt out of potentially offensive ones. This was just moderation. There was a separate team that handled banning the really bad stuff. A few years after I left the company they pivoted into deleting a lot more stuff or proactively demonetizing it for all advertisers regardless of individual advertisers' preferences. The trigger for this pivot was basically big media outlets writing hit pieces highlighting the worst anecdotes of advertisers showing up on objectionable videos, causing some advertisers to panic and overreact, causing Google execs to panic and overreact and turn the censorship dial way up.

Right now they don't even allow individual advertisers to opt in to showing ads on demonetized content -- this would be censorship by your taxonomy. They're leaving zillions of dollars on the table by not letting willing advertisers advertise on willing channels, either because those channels have the wrong opinions, or because they think some advertisers are too dumb to understand the intricacies of an opt-in checkbox. I think this is bad in terms of both morality and business.

Expand full comment
Nov 4, 2022·edited Nov 4, 2022

> I actually don't know the story of what happened on YouTube - tell me more?

This: https://www.youtube.com/watch?v=nBQFls_elpY is an hour long (but IMO very good) 'documentary' about one of the Adpocalypses. In general: some random guy uploaded a video explaining how, according to them, YouTube enables pedophiles.

Mostly it was a misunderstanding (or uncharitably, playing stupid) about how YT recommends videos. The outrage was that by watching videos featuring children, you get more of these videos recommended. Which is bad, because pedophiles get stuff they want that way. Also they comment on these videos.

He advocated pressuring Brands to stop advertising on YouTube.

It went viral on /r/videos (it's actually still the #2 post of all time, with nearly 200K upvotes: "Youtube is Facilitating the Sexual Exploitation of Children, and it's Being Monetized (2019)"); the whole thing was picked up by the media, loads of articles were written, and brands were dropping their advertising on YouTube.

Again; I really recommend that 'documentary', it covers it rather thoroughly. Lots of very weird things happened.

Expand full comment

>"are people too dumb to understand that .... this doesn't mean Coke hates Jews?"

What happens is that someone screenshots it and says "Wow! Coke is running ads on this site that won't take down anti-semitism. Coke is funding anti-semitic hate speech!" And then it goes viral everywhere, way beyond just Twitter, with a reach far larger than the actual ad campaign.

Maybe 99% of people who see those viral posts don't change their buying habits when it comes to soda. But if just 1% does, that's a huge amount of blowback relative to the marginal gain of any single ad campaign, which is very very low. You aren't going to capture 1% of market share for Coke just by running ads on Twitter, whereas you certainly might lose 1% by such a criticism going viral on every platform.

Then you also risk a snowball effect where once you are considered "the hate speech site" by one advertiser, others get nervous and start considering jumping ship. So from a pure cost-benefit perspective, there's a strong risk aversion incentive to nip that idea in the bud before it gets out of control.

Expand full comment

Why can't people just be reasonable rather than complaining that other people might be angry? Nobody is actually angry in this case - people are just looking for an excuse to get clicks.

Expand full comment

It's really more of a game-theoretic behavior where a lot of parties are proceeding rationally.

If I hate Nazis, then I'll want to contrive a circumstance to harm them. The claim "Coke tolerates Nazis" is also partially true in this case, as Coke could choose only to advertise on sites without Nazis. And if I see this claim and get on the bandwagon, then I avoid social disapproval in my social circle and/or make my life slightly more interesting by avoiding Coke for a "moral" reason I get to explain.

The issue is that moderation is mediating the level of intolerance people have of those they despise.

Expand full comment

It seems like news media and tweeters got much more interested in making such screenshots go viral after the 2016 election. There were many years before that where someone could have taken such screenshots, but it didn't become a viral moral panic until circa 2017 for obvious reasons.

Expand full comment

"Screenshot risk" is the common term I've seen for this scenario. The nature of this risk is makes it harder to properly experiment around & assess (i.e. one bad news story is one too many) compared to usual ad campaigns w/ RCTs & sentiment analysis. Which leads both advertisers and platforms to be over-sensitive than what may be necessary while we wait for the Overton Window to shift back (if ever)

Expand full comment

Ads aren't about conscious choices, they're about subconscious awareness. You probably know Coke doesn't support anti-semitism, but if you keep seeing the two in the same room together, seeing Coke is going to start making you think about anti-semitism.

(See also this old political ad, whose goal I'm pretty sure was not to actually inform people of a topic, but to instead annoy people while saying 'Mark Begich' a hundred times so they'd blame him for the annoyance.

https://www.youtube.com/watch?v=imaCgYgd4EU)

Expand full comment
author

All right, new plan, Coke can pay Twitter to put Pepsi ads on anti-Semitic tweets!

Expand full comment

I'm pretty sure that's illegal interference with business practice on the part of Coke, but I am not a lawyer.

In general, if I think "there's a super easy solution to this bedeviling problem that no one but me has ever thought of, despite the massive amount of money that people must have already spent trying to solve it," my first reaction is to assume I'm missing something basic.

Advertising revenue as the pain point is the basic thing you were missing. I missed it too until I read the comment above, but I also knew I was missing something.

Expand full comment

I don't think that is the error Scott made here, if Scott even made an error. Scott's solution is fine assuming his postulates about the goals of the two sides of the debate are correct. The advertisement angle is not something new, and it is mostly wrong; Coke does not care if a can is next to a swastika in a screenshot somewhere. They care about Twitter mobs. Payment processing is the biggest 'censor' of the internet, and payment processors censor websites that Twitter gets mad about, despite the fact that nobody in the Twitter mob has ever even been to the website in question. The fundamental 'error' is in assuming people just don't want to see offensive content, instead of recognizing that people want to censor.

The reason I am not sure if this is an error is that the left mostly tries to avoid explicitly saying they just want to censor people. Scott might be saying something like, "hey, if you really mean what you say, and they really mean what they say, then shouldn't this work better for everyone?" But it won't, because the people who mob Coke over that picture are actually just trying to censor ideas, not responding to some compelling argument made by the screenshot.

Also the Pepsi thing was a joke.

Expand full comment

>the left mostly tries to avoid explicitly saying they just want to censor people

Do they? My impression is that the masks have long since come off, and deplatforming "nazis" is openly and loudly cheered.

Expand full comment

That sounds like a fun race to the bottom. I wonder if that’s how the Franchise Wars start

Expand full comment

I'm pretty sure we have an entire branch of law devoted to making that illegal precisely because it's an incredibly effective tactic, but IANAL.

Expand full comment

Haha. This would end up with Pepsi paying for their ads on anti-semitic tweets but with plausible deniability. Cool

If anything, they would only end up being subconsciously associated with antisemitism only by the antisemites who chose to read antisemitic tweets, so it's all good.

Expand full comment

I think in the age of Twitter shitstorms, Coke doing ads on (and thus funding) any site willing to host Bob's comment (under any settings) is just not feasible.

Whatever the ads gain Coke will be peanuts compared to the reputational damage when the screenshot of Bob's post with the Coke ad underneath trends on Twitter.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

> ...are people too dumb to understand that if Coca-Cola does an ad deal with Twitter, and 1% of ads are for Coke, and your friend Bob who you deliberately follow says "I hate Jews", and there's a Coke ad underneath that, this doesn't mean Coke hates Jews?

Yes, people are too dumb to understand this; and, in a way, so are you and I. Ads are not designed to work on a conscious level; almost everyone is savvy enough to ignore them by now. Instead, ads work by creating a vague association between whatever it is you're looking at; the product being advertised; and good feelings. Seeing a Coke ad next to an antisemitic message ruins that association. And this is not even counting the deliberate attacks that such an ad would provoke, as @quiet_NaN says in his comment. The bottom line is that a single Nazi-adjacent ad could cost Coke millions of dollars in revenue -- so of course they'd spend millions (at minimum) to prevent this from happening.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

"I actually have a little trouble understanding this - are people too dumb to understand that if Coca-Cola does an ad deal with Twitter, and 1% of ads are for Coke, and your friend Bob who you deliberately follow says "I hate Jews", and there's a Coke ad underneath that, this doesn't mean Coke hates Jews?"

Unfortunately, our brains work based on association. It doesn't matter how many times you logically tell yourself that Coke does not endorse antisemitism just because you saw their ad next to antisemitic content, your brain will associate the two and the negative connotation will transfer over to Coke. Advertisers know this very well. Most ads exploit this feature. Why do beer commercials show scantily clad women? Are most people too dumb to understand that just by buying a certain beer, this doesn't mean you will be surrounded by scantily clad women?

Expand full comment

They don't do scantily-clad women over here, it's cool hip people having fun, like this new ad for a new beer:

https://www.adworld.ie/2021/09/10/havas-rolls-out-new-campaign-for-stout-brand-islands-edge/

Personally, I think it's a terrible ad because it's trying too hard, but then again I am definitely not the target audience. Same with this one, which is trying to be funny:

https://www.adworld.ie/2022/10/14/kick-raises-a-glass-to-dublin-in-new-campaign-for-5-lamps-beer/

Mostly beer ads today seem to be all about the associations "having fun with your friends and great people" so that you learn to unconsciously associate "drinking this brand rather than that brand will mean I have a good time".

Even this ad is a bit too trendy, but at least it does acknowledge beer is for drinking, people go out to drink for the alcohol buzz, not necessarily to feel cool and trendy:

https://www.youtube.com/watch?v=ReIL5L8tps4

Expand full comment

They used to sell it with sex - I recall a TV ad for some lager brand about three decades ago in which a relatively diminutive young man instantly grew in stature by about a foot after a sip, and immediately engaged a couple of highly attentive ladies in conversation at the bar. It wasn't Hai Karate aftershave, but clearly it was the next best thing.

Expand full comment

This is not about the risk of Coke spontaneously developing bad associations by people seeing the ads - it is about campaigners against the content using demonetisation as a way to harm it. They purposely created the associations by publicising the fact of the ads appearing.

Nobody who was purposely watching the content would develop a negative opinion of advertisers because they were beside it, after all. And nobody who wasn't watching it would develop one either, as they would not see the advertisements.

Expand full comment

"Nobody who was purposely watching the content would develop a negative opinion of advertisers because they were beside it, after all. And nobody who wasn't watching it would develop one either, as they would not see the advertisements."

This is just plain wrong. Go into Google Scholar and type in "negative associations on brands."

Expand full comment
Nov 4, 2022·edited Nov 4, 2022

Maybe you could just give me the gist of your objection rather than asking me to search the literature?

We are talking about videos on YouTube, after all. Nobody watches them who has not chosen to. It is not as though a parade of those whom you find for whatever reason objectionable passed down Main Street, with a Coca Cola float in the midst of them.

Expand full comment

>and your friend Bob who you deliberately follow says "I hate Jews", and there's a Coke ad underneath that, this doesn't mean Coke hates Jews?

In consequentialist terms, it kind of does though?

Like, Coke could spend its ad dollars on any of a million different pieces of physical or virtual real estate, and whichever platform it chooses to send money to will become that much more stable and powerful.

If it chooses to send that ad money to a platform that moderates hate speech, there will be less hate speech in the world. If it chooses to send that ad money to a platform that markets itself as the #1 place for unmoderated hate speech, there will be more hate speech in the world.

Whether Coke *likes* hate speech is sort of beside the point - 'Coke' is an abstract concept filling in for a vague mix of objects and people and the relations between them; it's not actually a conscious agent that can like or dislike things.

In consequentialist terms, what matters is the effect Coke has on the world, or the effect you have on the world by supporting Coke. And yes, who they give their advertising money to *does* affect that.

Expand full comment

On the other side of this I worked on a recipe site with pretty food pics. One day we started to get ads for poison and traps with pictures of rats displayed next to photos of pasta Bolognese. It was both hilarious and unappetizing.

Expand full comment

Automatically placing ads based on *context* can make this worse. Now they seem relevant, even if that definitely wasn't the advertiser's intention.

I have seen somewhere a screenshot (maybe fake) of an article about Hitler, and next to it an advertisement for gas. This is the kind of thing that people will remember.

At least, no one will stop using gas even if they are disgusted by seeing the ad in such context. But I can imagine that companies producing some products for children or parents might be negatively affected if people saw their products advertised next to some text about "child attraction" in a way that creates a similarly strong mental picture.

Expand full comment

"are people too dumb to understand"

Yes.

Or they are motivated not to understand. Maybe they hate Coca-Cola company as being a huge capitalistic monolith oppressing the workers, or because it sells sugary pap that contributes to the obesity epidemic, or they prefer Pepsi. Kicking up a fuss about "Coke is anti-Semitic!" serves their purposes just fine, because any stick will do to beat the dog.

The best result for them is if this causes all right-thinking individuals to hate Coke, to boycott their products, and the company goes out of business. But even merely forcing Coca-Cola to expend time and money on proving they are not a bunch of Jew-hating Nazis and apologising and swearing to do their part in combating fascism and anti-Semitism is useful, because the broader result will be that somebody somewhere will continue to have a vague notion they picked up that Coke is the drink of choice of people who hate Jews, and that Coca-Cola endorses this, and the negative associations will continue to ripple outwards.

Expand full comment

"Any stick will do to beat the dog" is a fantastic metaphor for partisan attacks. I hope you don't mind if I steal it.

Expand full comment

Before I read your comment, I had already copied that great quote to my desktop, which I often do, and was working on a long rational explanation of how I would give credit later, if necessary, for such quote-grabs (one involving web searches, the genericity versus peculiarity of the quote, and my own memory), and wondering how this would continue to function at the end of my life if senility set in, or even afterwards if someone read my hard disk.

Expand full comment

For what it's worth, I keep quote grabs in a fortune file and record the author and link whenever I add to it.

Expand full comment

Is it that people think Coke hates Jews, or do they think that Coke is financially supporting a platform which is profiting from and publishing/publicizing antisemitism?

I mean, I'm sure in the old days, companies had to decide, do we advertise in Playboy, or not?

Expand full comment

To quote you "You say “Why are you doing this?” They say “Because every time you eat one of our lunches, you’ll associate the ice cold taste of Coca-Cola and the sweet warm chewy chocolate chip cookies with our company, and you’ll get positive feelings about it, and maybe those positive feelings will influence your prescription habits.”"

Just as much as pharmaceutical companies want you to associate their products with Coca-Cola, Coca-Cola does not want you to associate their product with Nazis. This doesn't strike me as more irrational than any other advertising.

Expand full comment

Just yesterday I saw a news article about a little girl attacked by dogs, and accompanying this article was a big ad for dog food. Presumably this was driven by a simple matching AI, and no one thinks that company endorses attacks on children. Still, the association may provoke a feeling of revulsion that that advertiser would not want.

Expand full comment
Nov 3, 2022·edited Nov 4, 2022

I don't think Bob would think that Coke hates Jews; it's more that pulling out your ads has become an expected signal, at least once a blue-tribe outcry has happened.

"Why do you allow/enable [insert bad thing]?" -- asks/accuses the activist of whomever they choose in the complex causal chain.

"Because it is not our purview to care about those things" -- answers no one in the causal chain, as it would be followed by news about how they are, at best, enablers of the bad thing.

Expand full comment

> I actually have a little trouble understanding this - are people too dumb to understand that if Coca-Cola does an ad deal with Twitter, and 1% of ads are for Coke, and your friend Bob who you deliberately follow says "I hate Jews", and there's a Coke ad underneath that, this doesn't mean Coke hates Jews?

Not stupid, just playing stupid IMO. Also, discussion about this on /r/ssc (in a thread about this very post): https://www.reddit.com/r/slatestarcodex/comments/ykpegd/moderation_is_different_from_censorship/iuvk5wn/

It's very weird that seemingly a very significant portion of the subreddit approves of censorship.

Expand full comment

Let’s say someone graffitis hate speech all across the public sidewalk directly in front of your small business. Do you want the city to clean it up? What if the city does clean it up and the same people come back and write more, over and over again, and the city refuses to clean it up? Would you renew your lease in those circumstances?

Expand full comment

But Elon buying the company changes the calculus somewhat. Product will have plenty more leeway to make radical design decisions that may negatively impact revenue in the short term.

I’m still bearish overall. But they have a chance to pivot in ways that no public company would ever do.

Expand full comment

Honestly, recent moves make me skeptical.

So, cutting employees in a rapid layoff + suggesting new fees tends to imply a desire to boost revenue short-term, even at the cost of strategic benefits long-term. (Users hate new fees, and losing people reduces capacity and causes short-term disruption)

Expand full comment

This is consistent with Substack not having much interest in trying to censor ideas--the subscribers are the customers, not the advertisers.

Expand full comment

On the flip side, if users *could* control their moderation (which I think they should) then many users would moderate away the ads, which would ruin the business.

Expand full comment

Ads would automatically be exempt. Most ads are managed on a different level than other content. Most companies love it if you tell them your favorite and least favorite ads.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

How does it work when Apple (which takes money from iPhone users) requires e.g. Telegram to implement restrictions? It makes some posts invisible in the iPhone app, but that can be circumvented by using the web version of Telegram. Ditto for Android, where you also have the option to install an APK from outside the Play Store.

Also, many kinds of content are prosecuted by the government, with police and courts. You can't say that advertisers are driving this!

Expand full comment

I have heard this but I keep randomly seeing videos on my timeline of people being killed by cars, or blown up by bombs in Ukraine, or being mauled by dogs, on all of these social media websites, and the advertisers surely know this, and they don't pull their ads. It would be kind of weird and suspicious if advertisers actually cared that much about very very specific kinds of speech, but not other things that are potentially even more offputting to consumers.

Expand full comment

This is why I think that an activist-response theory is more likely than a direct revulsion theory.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I think this is correct. But not necessarily a fatal objection.

It would be possible in principle to have multiple moderation frameworks and serve ads within them; for example in mainstreamland you get all the normal ads, and in conspiracyland I guess it’s ads for fallout shelters or something. Alex Jones was able to make lots of money running ads on his content.

If you fall into a moderation bucket that doesn’t pay for enough ads, you either need to be a paid user (Blue) or you don’t get to see the content. (Maybe you use microtransactions to pay for the individual moderated content.) Or maybe the producer pays.

If you go further and imagine a distributed network like IPFS, you need to pay someone to host your content (Filecoin), and “official Twitter” is only interested in hosting monetizable content. But other entities could step in and host if this was pluggable.

Basically I agree with your diagnostic that the ad-funded model causes a conflict here. But there is scope for innovation for sure. See also the Brave browser.

Expand full comment

The angle intentionally left out, but the standard elephant in the room, is the corrupting power of the moderator. Everything your outgroup says looks like an infohazard, and the more convincing it is (often because it's the more true), the more dangerous the infohazard seems.

Expand full comment

Yeah, who watches the watchmen?

Expand full comment

There isn't a great all-around fix for the agency problem, but it's still a very different world when the moderator of my information stream works for me (however imperfectly) than when the moderator works for Chairman Xi.

Expand full comment

The response to tyrants should be the same whether they are in government or not.

Expand full comment

A tag filtering system seems like it could be moderator free though right?

People would just tag and rate tags.

Expand full comment

Only works if the community is already a pretty strong hive-mind, though.

On a platform that was 50% 'snowflakes' and 50% 'people who want to trigger the snowflakes', any filtering system that relied on community ratings would be completely incoherent, as there are equal numbers of people trying to make them work vs trying to sabotage them.

Expand full comment

I think you need a web of trust for this to work. “Pick nine people who you think are reasonable”, and compute the tags/flags from them. (Or scale N up as needed, make trust somewhat transmittable, etc.)
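
A minimal sketch of that idea in Python (the data model, names, and threshold are illustrative assumptions, not a real system): each user computes a post's effective tags only from the people they personally chose to trust.

    from collections import Counter

    def compute_flags(post_id, trusted, flags, threshold=5):
        """Derive a post's effective tags from a user's hand-picked trustees.

        trusted   -- ids of the ~nine people this user considers reasonable
        flags     -- dict mapping (user_id, post_id) -> set of tags applied
        threshold -- how many trustees must apply a tag before it takes effect
        """
        counts = Counter()
        for user_id in trusted:
            for tag in flags.get((user_id, post_id), set()):
                counts[tag] += 1
        return {tag for tag, n in counts.items() if n >= threshold}

Making trust transmittable would then amount to extending `trusted` with trustees-of-trustees, presumably at reduced weight.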

Expand full comment

People abuse tags the same way they abuse up/down votes. If you leave it to the community to tag things they'll mark every single thing they dislike with the child abuse tag.

Expand full comment

Rather than tagging directly, you could ask people to predict the likelihood that some tag authority would agree with a post being tagged a given way. The actual tag weight would then be a weighted average of those predictions, with each prediction weighted by the predictor's past accuracy.

The tag authority could be selected using some Georgist mechanism.
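
A rough sketch of that weighting rule in Python (the function name, inputs, and 0-to-1 accuracy scale are all assumptions, not a known design):

    def tag_weight(predictions, accuracy):
        """Accuracy-weighted average of users' predictions.

        predictions -- dict: user_id -> predicted probability (0..1) that the
                       tag authority would endorse this tag on this post
        accuracy    -- dict: user_id -> that user's past prediction accuracy (0..1)
        """
        num = sum(accuracy[u] * p for u, p in predictions.items())
        den = sum(accuracy[u] for u in predictions)
        return num / den if den else 0.0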

Expand full comment

"actual tag weight would be weighted average of the prediction that accounts for past prediction accuracy" makes for most of the heavy lifting here and absent such an authority still makes it exploitable or the rating will end up just a ratio of the twitter opponents and supporters (which is of course different thing than weather it should be banned or not).

Expand full comment

>the rating will be a ratio

See my reply to this same comment, just below yours: https://astralcodexten.substack.com/p/moderation-is-different-from-censorship/comment/10199552

Expand full comment

This is most of the way there, except the requirement to have a tag authority isn't really viable at scale. Now you need some trusted authority to look at and tag every post on your site, or at least some significant fraction of them, which demands a great deal of effort that will have to be incentivised or compensated somehow. And if you have a trusted source tagging all or many of your posts, why do you need average users tagging posts too?

Instead of an authority, you compare the individual user's tag or vote to the aggregate of every other user's tag/vote, again weighted by past performance. Perhaps, to avoid people simply learning to predict the specific bias of a given site, you compare a user's tag/vote to a small bloc of other users' votes, the bloc size being small enough that its bias is statistically unlike the overall site average. When you can't predict the bias of the bloc you'll be compared to, the only consistent Schelling point is objective accuracy.
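
A hedged sketch of the bloc-comparison scoring, assuming binary votes and uniformly random blocs (all names invented):

```python
import random

def score_vote(user_vote: int, other_votes: list, bloc_size: int = 5) -> float:
    """Score one user's binary vote against a small random bloc drawn from
    *other* users' votes on the same post; the running average of these
    scores becomes the weight applied to that user's future votes."""
    bloc = random.sample(other_votes, min(bloc_size, len(other_votes)))
    bloc_majority = 1 if 2 * sum(bloc) >= len(bloc) else 0
    return 1.0 if user_vote == bloc_majority else 0.0
```

The intent being that, because the bloc is re-sampled every time, a user can't tune their votes to one predictable audience; tracking the honest consensus is the only strategy that scores well on average.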

Expand full comment

> When you can't predict the bias of the bloc you'll be compared to, the only consistent Schelling point is objective accuracy.

I think it may actually be "being a moderate"

Expand full comment

There is also the "friends and family" argument for censorship, which I'm not sure fits in the examples given. If my friends and family are seeing lots of lizard-people content, I don't want to completely turn a blind-eye to it. But if everyone is blocked from seeing it, then the problem goes away.

Expand full comment

I get the impression that the "social dynamics" angle is actually a larger part of this.

The problem isn't just strictly "people are saying things I don't like somewhere", or even "I may accidentally see something unpleasant", so much as "the social circles I reside in, and derive value from, may undermine my self-worth".

You can never solve problem 1. Problem 2 is typically rather easy to solve (another commenter mentioned a system like subreddits). The third problem is probably closest to the real one. Reddit doesn't (appear to me to) have this type of moderation concern to the same degree that Twitter & Facebook do, and that's because (in theory) everybody can enter into the same social circle on Twitter & Facebook, but they just don't on Reddit.

And if the problem is really the social dynamics, then you can't solve it on a comment level; you have to solve it on a network level.

Expand full comment

But also - if my friends and family are replying to that type of content, do I not see half of their posts, or do I see that content through their posts?

Most social media platforms aren't one sender transmitting information to one receiver. They're large ongoing decentralized conversations between millions of people. If I don't want to see hate speech, does that mean I don't see people denouncing hate speech because they are replying to the hate speech they are denouncing? Can I see half a conversation and not know what anyone is referring to, or not see any of the conversation, including the parts I like and want to see?

Expand full comment

Welp. My long list of reasonable policies that will never get implemented just got a little longer. :'(

Expand full comment

Moderation in this sense can de facto act as censorship if, for instance, the 'banned posts' channel consists of 99.9% pornbot spam and is therefore highly impractical to use.

A situation like this seems somewhat likely if we were to implement a minimum viable moderation product of this sort due to the 'seven zillion witches' problem as you laid out in https://slatestarcodex.com/2017/05/01/neutral-vs-conservative-the-eternal-struggle/

Expand full comment
author

I agree that there would be ways to deliberately screw it up if some outside force was making you do it and you wanted to ruin it. But I think there would also be trivial ways to fix it (eg have the multiple different sliders/filters, and one of them is "pornbot spam").
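
For concreteness, a minimal sketch of the multi-slider version (the categories, defaults, and scoring scheme are all made up for illustration, not a proposal for any real platform):

```python
from dataclasses import dataclass, field

@dataclass
class FilterSettings:
    # Per-category tolerance: 0.0 = hide anything flagged at all,
    # 1.0 = show everything. Categories and defaults are illustrative only.
    tolerances: dict = field(default_factory=lambda: {
        "pornbot_spam": 0.0,
        "harassment": 0.3,
        "politics": 1.0,
        "banned": 0.0,   # the single "see banned posts" toggle, generalized
    })

    def visible(self, post_scores: dict) -> bool:
        # post_scores: category -> classifier score in [0, 1].
        # Show the post only if every score is within the user's tolerance.
        return all(post_scores.get(cat, 0.0) <= tol
                   for cat, tol in self.tolerances.items())

# A user who opts into banned content but still filters spam:
settings = FilterSettings()
settings.tolerances["banned"] = 1.0
print(settings.visible({"banned": 0.9, "pornbot_spam": 0.0}))  # True
```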

Expand full comment

From a product standpoint, you probably just want to minimize cognitive load rather than maximize buttons.

Unless the goal is literally "I want to give people as many options as possible", it's a lot easier to McDonalds-ize the experience and have hidden algorithms make the decisions, as only an unprofitable subset actually wants to do "DnD Character Creation" for their social media experience.

I also think that this will prove rather tricky. Algorithmic configs are likely to be leaky. They're already leaky today on one standard setting with lots of human enforcement.

Expand full comment

From a product standpoint, the knobs and dials would be hidden away and the user would just see “see more of this” & “see less of this” next to posts, that would automatically turn the knobs and dials for you
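
Something like the following toy update rule could sit behind those two buttons (the topic scores and learning rate are assumptions for illustration, not any platform's actual algorithm):

```python
def adjust_preferences(prefs: dict, post_topics: dict, more: bool, rate: float = 0.2) -> dict:
    """Nudge hidden per-topic weights when the user clicks
    'see more of this' (more=True) or 'see less of this' (more=False).
    post_topics: topic -> how strongly the post exhibits that topic, in [0, 1]."""
    direction = 1.0 if more else -1.0
    for topic, strength in post_topics.items():
        new = prefs.get(topic, 0.5) + direction * rate * strength
        prefs[topic] = min(1.0, max(0.0, new))  # clamp to [0, 1]
    return prefs

prefs = adjust_preferences({}, {"politics": 0.9, "humor": 0.2}, more=False)
print(prefs)  # {'politics': 0.32, 'humor': 0.46}
```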

Expand full comment

I think that helps reduce the complexity of onboarding; however, it may make for a terrible starting point if there's a 4chan-like unmoderated experience to start with, where you have to keep dialing it back.

It doesn't help as much with how ineffective moderation is today, in that even the single line is challenging to enforce.

Expand full comment

Algorithmic configs are not the only approach. Imagine if you could empower certain trusted friends or paid (eg fact-checking) services to block content for you. We're prototyping research systems that let you do this.

Expand full comment
Comment deleted
Expand full comment

It's only a single point of failure if you only use one source of assessment.

Expand full comment
Comment deleted
Expand full comment

I guess I don't see how these services would be able to work without algorithms.

There was a period of time where even satire pages were receiving the Facebook "fact-check" problem notification.

And I very much suspect that nobody is going to want a tiny internet that has only been verified by trusted humans. I also really don't think most average humans actually want to mess with these configs of "which algorithms to use".

I mean, maybe the internet has been getting simpler and simpler, with fewer and fewer points of friction, because of "big corporations", but I really bet it's that those corporations are making educated guesses that people don't want the friction of deciding these things.

Expand full comment

In a sense, any computational platform is inherently using "algorithms", but there's a huge gap of understandability, and thus predictability and control, between e.g. "majority vote by people I trust" and the opaque ML algorithms used for ranking by facebook and youtube.

You say "tiny internet" but it's worth remembering that almost all the content people are reading was written by someone. Readers outnumber writers, so assessing all that content is a human-scale problem. I agree with you that my immediate friends won't suffice to assess the entire internet, but on the other hand they only need to assess the tiny bit of it that I am reading, so again tractable. Also, we gain far more scale if we can figure out how to *propagate* assessments through the trust network---e.g., maybe I get an assessment from someone who is trusted by someone I trust. There are also opportunities to make targeted use of algorithms: for example, if NLP can be used to determine that two news stories are conveying the same information (a common situation), then assessments of one can be copied to the other.

Expand full comment

"In a sense, any computational platform is inherently using "algorithms""

We both know that's quibbling, in the sense that my use of "algorithms" is responding to your comment on "algorithmic configs" and that the Travelling Salesman problem (or merge-sort) was not contextually included.

"You say "tiny internet" but it's worth remembering that almost all the content people are reading was written by someone."

Also, is it really worth remembering? It strikes me as relatively irrelevant, and that you're slow-walking a questionable idea rather than seriously trying to engage with it.

So, the core concept isn't # of readers, so much as the "trust network", and a trust network has advantages, but it probably doesn't solve our problem with social media platforms, as many people have a trust network that amplifies rage-bait content. The rage-bait content keeps these people engaged, but also unhappy, and pushing for removals from the platform. Which then potentially gets us back to our problem.

I mean, it's worth asking what our problem really is, but... I think most fake news already IS propagated by trust networks. I also think most arguments about banning really ARE about situations people are outraged by made very very public, rather than their 9 year old randomly stumbling onto Nazi Twitter. And I think most people banned are acting within their same social network, but doing something considered crude or abusive by the platform. (which is using algorithms to say what "crude and abusive" looks like, but is trying to minimize social disengagement)

And in none of these senses do trust networks solve the challenge. They even introduce their own, in the sense that every member of your social network now has to be assessed for their weight in your trust network. If this is ALL behind the scenes, then we're almost recreating the current algorithmic system.

Expand full comment

For twitter, the configuration problem seems easy to solve. Show people tweets of people they choose to follow, and don't show them tweets from people they don't choose to follow.

That's how twitter used to work, until an A/B test suggested it would increase engagement if we filled everyone's feeds with ragebait instead.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

From an engineering perspective this doesn’t seem trivial. Classification problems like these are hard to do better than, say, 90% accuracy on (or 89, or 96, or some other number that is impressive but not good enough). Even gmail’s spam filter routinely makes both type I and type II errors; it is probably ”good enough” insofar as anything ever will be, but that is a high bar. Additionally, you’ll start an arms race whereby spammers calibrate their content to leak into the popular non-default filters.

Expand full comment

IMO this still won't solve it, because it won't usually be as obvious as pornbot spam.

Everyone will spend time arguing about: OK, do Nazis belong in the same bucket as intentionally misgendering trans people? Your answer is to err in the direction of having different settings, but do you have different settings for Nazis vs Klansmen? I bet not. So you're still making a judgment about who is really Nazi-like.

And if you do have different sliders for Nazis vs Klansmen then you've made it a massive headache, and people will opt for the default, and then everyone argues about default settings...

Expand full comment

But if your filters are too fine-grained, you get bubbles, which is OK from freedom of speech perspective but hardly good for society as a whole.

Expand full comment

This is a good distinction.

Another factor is the need to drive clicks/engagement. More inflammatory content drives more engagement, which drives revenue, creating a strong incentive for the platforms to not offer such user-friendly opt-ins.

Expand full comment

It doesn't even need to be single filters (and the ensuing arguments about who gets to decide what's disinformation or antisemitism or whatever). "Fox News says fake" and "NPR fact-check: failed" and "ADL says antisemitism" and "Proud Boys endorses" and "counter to CDC guidelines" and so on could all co-exist in the same ecosystem (ideally with cites available). It's the Good Housekeeping seal of approval on steroids. It requires some sort of annotation system and a way for users to decide which annotations they want to know about or filter based on.

I prefer an Internet where platforms give users information and the responsibility to use it, as opposed to trying to make blanket decisions on users' behalf.
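
One hypothetical shape for that annotation layer, with per-user decisions about which seals to act on (the source names are just the illustrative ones above):

```python
# Each post carries third-party annotations; each user decides, per source,
# whether that source's verdict hides the post or merely labels it.
annotations = {"post/123": [("NPR fact-check", "failed"), ("ADL", "antisemitism")]}

my_policy = {"NPR fact-check": "label",   # show, with a visible badge
             "ADL": "hide"}               # filter out entirely

def render(post_id: str, body: str) -> str:
    notes = annotations.get(post_id, [])
    if any(my_policy.get(src) == "hide" for src, _ in notes):
        return "[hidden by your filters]"
    badges = [f"{src}: {verdict}" for src, verdict in notes
              if my_policy.get(src) == "label"]
    return f"{body} [{'; '.join(badges)}]" if badges else body

print(render("post/123", "original post text"))  # -> [hidden by your filters]
```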

Expand full comment

The key switch is when the internet became popular for most people.

Decision-making is something people associate cost with, and so the push for most experiences is to minimize the "cost" of action. Games get streamlined. (See Elder Scrolls Morrowind to Oblivion to Skyrim to see that in action) The same happens with user experiences. Instead of a thousand scattered forums, everything is flattened into a smaller number of social media sites, because it's easier to decide and convey.

Expand full comment

The idea that there aren't still a massive number of active forums is my most hated myth about the internet.

Expand full comment

To be honest, you're right. But the same is also true of gaming, for that matter. Top games have streamlined, whereas more sophisticated consumers who want experiences like the historical ones have a high volume of indie games to pick from.

But that also is why most of this discussion gets to a weird point. There are a lot of options. The issue is not that "people do not have options"; it's that they want the easy, default option to be the one most conducive to them. And this is a bit weirder, as the complaint really then cannot be "I can't have what I want", so much as "the social order isn't catering to me and I dislike that". The latter complaint actually can have validity, but it's not a freedom of speech issue, so much as a societal support/norms issue.

Expand full comment

This is exactly what we're trying at http://trustnet.csail.mit.edu/about

Expand full comment

People are not just worried about seeing "disinformation" or whatever you call it; they are worried about others seeing it. They'll want to see the stuff themselves so they can argue against it.

Or at least some will - others will filter out stuff they don't like and end up in a bubble where they're never exposed to dissenting ideas.

Expand full comment

Exactly right. The point is that each person should be able to choose for themselves whether they want to be exposed or bubbled. In fact, in our experiment people made different choices at different times---sometimes they wanted the restfulness of a bubble, and other times they wanted the challenge of a broader input. But they wanted to be able to choose the times. Natalie Ashton talks about this as "epistemic respite". https://www.logically.ai/articles/why-twitter-is-epistemically-better-than-facebook

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

This feels like basically a shared and streamlined version of the kill files of old: https://en.m.wikipedia.org/wiki/Kill_file

Expand full comment

The problem with kill files, and anything else that requires users to take initiative to curate their experiences, is that most users are either too dumb to figure them out or too lazy to bother. Personally I'm both.

Expand full comment

Instead, we get a world optimized for the stupid and the lazy.

Expand full comment

Reminds me of the far more analog but far more initiative-intensive "Enough Already" device.

https://www.youtube.com/watch?v=-SzB5OQUcOU

Expand full comment

I built a prototype about six years ago of a Chrome plug-in called The Grey Lady.  You could pull up an article on NYT.com and copy-edit it, and other users who followed you on TGL would be able to view your copy edited version of that article.

The idea was to be able to make this work on any website, and enable comments per copy-edited version and on the source article via TGL. Journalist friends at Reuters, Bloomberg etc. liked and hated it. There was zero monetization that we could figure out. I didn’t even bother trying to pitch other investors on it.

Expand full comment
Comment deleted
Expand full comment

We have a Chrome extension that lets you do this with news headlines. You edit the headline; your friends see the edited headline anywhere the original is mentioned on the web.

extension: https://chrome.google.com/webstore/detail/reheadline/iignpdlabbnnacdkchpnpljkhdlkblbh

paper: https://people.csail.mit.edu/farnazj/pdfs/Study_of_News_Headlines__CSCW_22.pdf

Expand full comment

I've wanted this for at least a decade. Installed!

Expand full comment

This is brilliant, and I had a similar thought, but monetization would have to be a subscription to the tool, with payment to editors and contributors and some distributed audit mechanism to keep people honest.

Expand full comment

Have you asked Substack for this feature for ACX?

Expand full comment

I'd be fascinated to see the analytics Substack gets out of the experience.

I imagine one effect is likely: the lower stakes would lead to an increase in bannable comments, and possibly also thereby to an increase in borderline, low-quality comments.

Expand full comment

A huge thing that some of the (close-to-being-censored) people complain about is being shadowbanned or reach-limited. This idea is cool but I wonder if some folks would still be mad about being moved to the Bad Stuff channel, as it would limit their reach.

Expand full comment

First thing I thought of, too.

Swastikas and rabbis are easy enough, but if mods decide that the essay quibbling about Holocaust numbers or whatever belongs to the Pretty Weak band as opposed to Smartly Open-Minded band where 70% of all readership is, then we're right back to its writer screaming about being silenced.

Expand full comment
author

I think the specific super-simple implementation I proposed in this post is better than status quo but I agree it's still suboptimal. I think there are lots of much fancier things you could do.

See https://slatestarcodex.com/2015/05/06/the-future-is-filters/ for some examples, although I admit that so far the prediction in the title has totally failed to come true.

Expand full comment

Shadow banning is morally reprehensible without strict guidelines on use. It is convenient and effective, sure, but it has a significant deleterious impact on the mental health of its targets.

Expand full comment

In general, making the system less predictable seems like a bad thing.

I already hate the fact that when I post something on Facebook, some algorithm decides whether it will be shown to my friends or not. Imagine what would happen if Substack tried something similar -- showing each Scott's post to ten randomly selected subscribers first, and if they do not sufficiently engage with the post, it will not even get shown to others -- Scott would probably move to a different platform.

Shadowbanning is a good weapon against spammers because, generally, anything that makes a system more difficult to use is useful when used against an adversary. The question is, what is your false positive rate? And how many users will worry about the possibility of being shadowbanned, even if they are not (yet)?

Expand full comment

Under this model, is your governance of the comment threads here moderation or censorship?

Certainly any number of posts that people get banned for have very high engagement.

Expand full comment
author

Combination of both.

I think I would implement this system if I could, but I don't control Substack (and even when I had my own WordPress, I wasn't technologically savvy enough to make it work).

That means that my only way of making sure people who don't want certain content don't get it is deleting it for everyone, which I regret.

I also sometimes delete things because people would freak out about it and this blog would lose reputation / ability to be cited in respectable places. I regret having to do this but I think on balance it's worthwhile, and I might continue to do it (to a lesser degree) even if the system I describe above were in place.

Expand full comment

Doesn't this commitment completely invalidate your main post? If you could wave a magic wand (or hire a bunch of cheap software engineers, which is kind of the same thing) and implement the perfect moderation system of your dreams that would allegedly benefit everyone... then you still wouldn't do it, because it wouldn't actually benefit you, your blog, or arguably the readers thereof. Unless I'm missing something?

Expand full comment
author
Nov 3, 2022·edited Nov 3, 2022Author

Not sure if we're interpreting this the same way - if I could wave the magic wand and implement this, then I would definitely do it.

Expand full comment

You said:

> I also sometimes delete things because people would freak out about it and this blog would lose reputation / ability to be cited in respectable places. I regret having to do this but I think on balance it's worthwhile, and I might continue to do it (to a lesser degree) even if the system I describe above were in place.

I interpreted this to mean, "I would still perma-delete certain posts even if this system were in place", but perhaps I was wrong? Certainly, merely tagging these posts as "banned" (and thus allowing opt-in users to read them) would not prevent the attacks that you are worried about -- would it?

Expand full comment

One distinction is that, unlike Twitter for example, no one would argue that Scott's blog is anything close to "the de facto town square", so Scott has somewhat less responsibility to fully support free speech norms.

Expand full comment

Right, I'm not talking about responsibility -- after all, it's Scott's blog, he can do what he wants with it. If he wants to ban all posts that do not include a cute poodle photo, he can do it. But my point is that his two stated goals -- roughly, "reduce censorship while maintaining moderation", and "censor posts that are likely to damage my reputation" -- are mutually incompatible.

Expand full comment

One added wrinkle with the distinction Scott is making is that online discussions, conferences, communities, etc., may have some kind of shared moderation they want to keep things working.

ACX comment threads are a shared communications channel between Scott and his subscribers, and presumably Scott wants to moderate them according to what he and his subscribers want to see. That might mean Scott decides he wants no more discussion of (say) ivermectin or hbd or the Ukraine war and bans the topics. It seems reasonable to let him do that and figure it's between him and his subscribers to hash it out.

Suppose I want to have an online academic conference on cryptography. I will want to moderate that to at least keep the discussions on cryptography, prevent spammers or trolls from showing up and disrupting the discussion, etc. This is the stated goal of the conference and its participants. I want to go watch presentations and participate in discussions about (say) multiparty computation, not be inundated with dick pics and ethnic slurs by some bored teenager or crazy person.

There's potentially some conflict about what moderation the organizers/participants want, but that's a conflict that ideally should be resolved within that community, right? You may be very unhappy that the evolutionary biology conference doesn't want to hear about the evidence for your creationist theory, and maybe they're all wrong, but it's ultimately a kind of collective decision the participants get to make whether or not to listen to you about it.

This is still a different situation than if the moderation decisions are being made externally, on behalf of some outside group or entity. The creationists can have their conference and kick out the evolutionists, too, that's fine, but neither creationists nor evolutionists should be able to forbid the other from having conferences.

Expand full comment

Your situation can be managed using the flagging paradigm. The online conference organizers flag certain content as "related to cryptography". People "attend" the conference by setting their tools to ignore content without that flag.

Expand full comment

Besides Bugmaster's point, I will say that I'm pretty unconvinced that facilitating the further disintegration of joint societal norms (and, indeed, the very notion of joint society) is a good path.

Now, personally, I'm sort of hoping that Twitter ends up either providing useful data on some of these theories on functionality, or simply collapsing altogether, as either one seems likely to be a net positive for the world, or at least my portion of it.

Expand full comment

On the one hand, I agree with you that Twitter specifically and social networks in general are a net negative for society. On the other hand, I don't think there's any way of stopping them, besides some sort of totalitarian control over media (which would be worse); post-scarcity society (which would be better); or Singularity (which would be... er... let me get back to you on that). Social media exists because all of the incentives align towards it, not because some solitary evil genius clicked the "create Twitter" button.

Expand full comment

Eh, I am unconvinced that Twitter is not uniquely bad, and I think network effects make it unlikely that any replacement will quickly or effectively take its place if it collapses.

Social media more broadly I tend to think of as strongly negative, but probably isn't going anywhere, despite Facebook/Meta's best efforts.

Instagram and TikTok have their own issues, but they don't have nearly the grip on the intelligentsia/media/politicians, and I don't think they are really set up to have such a grip.

Expand full comment

I predict that if Twitter were to shut down tomorrow, it would be replaced by something similar (and probably worse) in short order -- much like Facebook replaced MySpace, or Instagram is replacing Facebook, or TikTok is replacing, well, most of everything.

Expand full comment

Eh, most networks have been replaced via competition, not collapse. But more broadly, I think twitter due to its format and formation history has a unique and uniquely bad grip on our cultural imagination. But, well, we'll see.

Expand full comment

> Combination of both.

What part of what you do/allow/ban is "moderation"?

In your definition, "moderation" is about what people have to read, while "censorship" is about what they are allowed to post. I think it's a neat and useful distinction, but I guess one reason why the two are so often conflated is historical.

For most of the internet's existence, there was no easy technical way to let people only see some posts but not others. There was the example of kill files above (https://astralcodexten.substack.com/p/moderation-is-different-from-censorship/comment/10187873), but as far as I understand, they only serve to ignore specific users. Otherwise, whether on bulletin boards, IRCs, forums, or LJ, I think the only way of preventing a user from seeing some (type of) content was to prevent another user from publishing it. I think this is the reason why your non-politics open threads were, by this definition, censorship - you still can't do proper moderation, even with the resources your reach and Substack give you. But this situation might also lead to a strong status quo bias.

For example, last year, Instagram introduced a "Sensitive Content Control setting" with the three options "Allow," "Limit (Default)," and "Limit Even More". A lot of people, from gun advocates to LGBTQ activists, complained (https://www.theverge.com/2021/7/23/22590188/instagram-sensitive-content-filter-creator-post) that this limited their reach, and explicitly framed it as censorship, while others complained that those things (nudes vs. gun ads) were thrown into the same bin. However, Instagram said that the default setting hadn't actually changed, they'd just given people an option to have either less or more moderation if they liked. Granted, this might have been partly a communication issue, but it still shows that even with a change which on the face of it seems to be a Pareto improvement, the social media company can take a lot of flak. While in theory it should have made both the free-speech and avoid-harassment sides happier, it angered at least some of the people on both sides.

Expand full comment

It is the Reign of Terror, beneath which we all humbly bend our necks.

Expand full comment

I see the pleasing elegance of this system on paper, but I'm curious what it would look like in practice. If someone makes a "banworthy" post and gets marked as banned, their posts are invisible to the majority of the users of the website. Isn't that basically a real-life version of the "shadow banning" phenomenon that so many on Twitter have already complained about?

Expand full comment
author

I think one difference is that it would tell you that you were banned.

I think this is a Pareto improvement on the current system - while having less reach is probably unpleasant, it's strictly better than having zero reach.

Expand full comment

How many people on Twitter do you think are shadowbanned as described right now? I would be extremely surprised if it was even a double digit number.

Expand full comment

I’d gladly configure my setting to “opt-in to Banned posts,” even if that meant it showed up on my profile.

Visualizing a single tag of “Banned” misses the opportunity here, though. With a little variety of tags used for filtering, those tags can become (ie will become) their own communities.

Expand full comment

Based on the tactics used by social media companies in the last decade, the overwhelming majority of users do not use these configurations.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I proposed an idea a while back (one Dorsey apparently had before me but didn't implement) that somewhat draws on this distinction. Twitter (or wherever) should allow custom algorithm creation. Users can go to a marketplace and choose an algorithm. Do you want HalakhahMax, which promotes Jewish content, shuts down on shabbat, and filters out anything even vaguely anti-Semitic? Great! Do you want Breadgorithm, which only shows capitalist posts with appropriate socialist dunks, puts little corporate logos on all politicians, and only shows ads from left-friendly corporations? Fine! Go nuts! Twitter would only enforce what was legally required. Basically, a marketplace for moderation/content algorithms, with Twitter only handling the limits the government requires, like CP (which would be enforced over and above the algorithms).

This'd be a win-win if your goal was solely revenue. What algorithm a person chooses is basically a giant targeting sign for advertisers. The person who chooses the HalakhahMax algorithm is probably Jewish or at least friendly, and would be an ideal target for Israeli products or matza or whatever. (I'm being a bit silly but you get the idea.) And the person gets to customize their experience to a far greater degree. This could be incentivized pretty easily by Twitter doing revenue share with the algorithms too, which would incentivize the algorithm makers to make popular and high-value algorithms that appeal to advertisers. (I have a more complete treatment somewhere.)
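
Stripped down, the marketplace is just a pluggable interface with a legal floor the platform always applies first. A sketch under those assumptions (all names invented):

```python
from typing import Callable, Iterable, List

Post = dict  # stand-in for a real post type
FeedAlgorithm = Callable[[List[Post]], List[Post]]

def legal_floor(posts: Iterable[Post]) -> List[Post]:
    # The only filtering the platform itself enforces: legally required removals.
    return [p for p in posts if not p.get("illegal", False)]

def halakhah_max(posts: List[Post]) -> List[Post]:
    # An illustrative third-party marketplace algorithm.
    kept = [p for p in posts if not p.get("antisemitic", False)]
    return sorted(kept, key=lambda p: p.get("jewish_content", 0.0), reverse=True)

def build_feed(posts: Iterable[Post], algorithm: FeedAlgorithm) -> List[Post]:
    return algorithm(legal_floor(posts))
```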

The reason none of the social media companies do this, as far as I can tell, is that they're so based on the idea of an optimized algorithm to boost ad rates and curate discourse that the idea of opening it as a market never occurred to them. Algorithms are seen as closely guarded "special sauce" rather than something that could be a fungible value add.

But it ties into your point here: because it is a marketplace, an option, all the marketplace algorithms would be moderation and definitionally not censorship. If you're a raging Neo-Nazi and everyone opts into algorithms that block you then that legitimately isn't some central authority trying to take you down. It's that no one wants to hear your weird theories about Jews and is choosing an algorithm that excludes you. And if you insist that's effectively censorship then that's the liberal point about not having a right to a platform in its proper form.

Expand full comment

>If you're a raging Neo-Nazi and everyone opts into algorithms that block you then that legitimately isn't some central authority trying to take you down.

To be absolutely clear, you will get shut down on twitter et al for a lot, LOT less than literal neo-nazism. You may not have intended to imply otherwise, but framing it the way you did reinforces the false narrative that all right-wingers being banned from social media are "raging neo nazis" or anything close to it.

>And if you insist that's effectively censorship then that's the liberal point about not having a right to a platform in its proper form.

The entirely opportunistic liberal point*

If liberals were being banned from social media, the idea that these platforms are the private property of corporations to do with as they see fit would not exist to these people. Even this year, we've heard a chorus of these types of liberals demanding the government block Elon Musk from buying twitter.

Maybe conservatives would feel the same in reverse, but that's a lot different from acting like there's anything fundamental about this point as far as liberals are concerned.

Expand full comment

>If you're a raging Neo-Nazi and everyone opts into algorithms that block you then that legitimately isn't some central authority trying to take you down.

You'd be shut down for much less than this. In some of the art spheres I lurk in, lines are being drawn over reasons as benign as people's stances on AI-generated art.

>Which would incentivize the algorithm makers to make popular and high value algorithms to appeal to advertisers.

So, curious how updates to an algorithm would work in that instance. If filter creator A released a popular algorithm, people set and forget, and then the algorithm slowly becomes more "advertiser friendly" because creator A gets kickbacks from specific companies, is that at most an ethics violation?

Also as an implementation detail - what inputs would Twitter have to provide to even let a marketplace get started? Who owns the compute and storage for running every algorithm on the marketplace over all of the content on Twitter?

Expand full comment
Comment deleted
Expand full comment

Then users can switch to another filter producer if they care.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

"If you're a raging Neo-Nazi and everyone opts into algorithms that block you then that legitimately isn't some central authority trying to take you down. It's that no one wants to hear your weird theories about Jews and is choosing an algorithm that excludes you."

Isn't this how 'cancellation' on social media works in the first place, albeit more organically and slowly? Word spreads that a person is radioactive, no one wants anything to do with them, it snowballs into a vicious back-and-forth, hashtags are spawned, and they get a Netflix special... er, they suffer severe reputational and professional damage on flimsy grounds. There's coordination and hashtags, sure, but ultimately the mechanism is that enough people are seen to decide to ostracise a particular individual.

If the algorithms rely even somewhat on 'similar user behaviour/satisfaction' and the like, then at best we get a Schelling-ban for the neo-Nazi, which the neo-Nazi will indeed blame on the existence of the algorithms, because they definitionally make some decisions on their own.

Expand full comment

No. Cancellation occurs because activists are able to appeal to a central authority which users cannot easily work around. In an opt-in filter world, people could at least choose to be exposed to that content.

The alternative is "go create your own Facebook" or whatever, at which point the cancellers go after the payment processors so you can't even be user-supported.

Expand full comment

> The alternative is "go create your own Facebook" or whatever, at which point the cancellers go after the payment processors so you can't even be user-supported.

From Reddit comments (about Kiwifarms ofc):

> There is a certain unnamed forum going through this now, where when a service of some sort gets removed, they find a work around. The problem they are running into now is that they are at the point where their traffic is not getting passed by the backbone ISP's. Seeing how above and beyond that is for these sorts of things, it is very telling how little press this is getting.

> But hey, its happening to bad people, so it must not be a problem, right?

> I can't post the name of the site due to reddits doxing rules, and because of the escalation (and companies) involved, it hasn't been making much news. Neither Zayo or Voxility are going to be announcing taking such steps from the rooftops and if you're a lay person you are absolutely not going to know those companies names.

Expand full comment

I'm fairly sure Kiwifarms is still going, which perhaps acts as somewhat of a counterpoint.

Expand full comment

True. Quote from Kiwifarms owner's post from Monday:

> The last week was pure downtime because I seriously over-estimated how much I could rely on Tier 1 ISPs. I also made a stupid decision to wait for a hot copy of the database instead of taking the 12 hour old archive I had and just rolling back the same day. The 12 hours of content was not worth the 7 days of hard downtime. To add insult to injury, the static files posted in that time are not recovered yet because I am still waiting for remote hands to fix something on their end. If I could do it again, I'd not have waited.

> In my defense, even the screaming r***ds on Twitter overestimated ISPs. They've been complaining to hosts for months, even when those hosts don't care. The revelation that ISPs would easily and willingly give hand to censoring the Internet was a shock to all, and now that will become the new default strategy. Jumping between providers won't work if major players like Zayo and Voxility are conspiring to shut down individual communities. Worldstream, a major datacenter in Den Haag, instructed a large company to stop doing business with me. So it's not just the DDoS mitigation, it's the fiber optics, and the concrete warehouses who are actively capitulating to a few r***ds on the Internet.

> Now that I understand the true depth of the rot and corruption in the system, I will not rely on any single ISP again. It is not possible for them to force us off Tor. Even with DoS attacks, we can mostly mitigate those and keep refining it over time. Improving Tor's performance will become a top priority if our reliance on Tor becomes more important.

> The fight is to be on kiwifarms.net, on clearnet, accessible to the average user of the Internet with no special tools or requirements. I have fought a registration barrier to reading threads on this site for 9 years because I don't think people should be required to do anything just to see something. My position on that doesn't change now just due to circumstances.

> I am absolutely livid, and at the same time very sad. I first started using the Internet at about 9. I saw many websites that were just ideas propped up by individuals or two brothers, and they became massive cultural phenomena from basically nothing. Newgrounds, MySpace, Runescape, and 4chan all fit into this category. I realized very young that I wanted to try and be one of those people. Unfortunately, those people were my age when I was a kid, and they all got out at the right times. I'm now trying to prop up an antiquated format in a modern era where the average r**d has a Twitter account and gets to decide what kind of website is too offensive to stay up by sending emails on their f***ing iPhones. I feel like the Internet is an old friend I am watching die in real time thanks to a cohort of sex pests.

> My plan is to return to Clearnet today, and my new providers are very confident in their plan to keep it up. So, I am too.

Expand full comment

> (one Dorsey apparently had before me but didn't implement)

He sorta is implementing it. I'm not sure what its relation with Twitter is. They seem friendly with Musk. Maybe they have shared plans.

https://atproto.com/

https://blueskyweb.xyz/

Expand full comment

Fascinatingly, the trans community is already there.

Shinigami Eyes (https://shinigami-eyes.github.io/) is a Chrome/Firefox extension that allows a set of trusted users to mark users or websites as <insert bad thing here>, and then other users will see those users underlined in red. It’s a completely decentralized, opt-in moderation scheme that protects users without censorship.

It probably classifies some people in a way that would be objectionable to this community (I haven’t used it so I have no idea). But that’s fine! Who cares if the group that runs this extension doesn’t like you—they’re free to avoid you, and you’re free to keep posting.

Expand full comment

For Twitter, there's also Megablock (https://megablock.xyz/), which simply blocks a tweet's author and anyone who liked the tweet. Haven't used it myself - I scarcely use Twitter at all - but I imagine consistent use would result in pretty rapid silo creation.

Expand full comment
author

Yeah, I find that fascinating and basically a good idea (although I have a friend who thinks she was unfairly blocked by it and I feel bad for her)

Expand full comment

I think one of the major problems with this, like "block lists" is that both the block list adders and the block list users are incredibly unmotivated to do due diligence. There's also virtually no appeals process. Your average user will absolutely not bother to check if a block list is hitting innocents or being used by power users as a social coercion mechanism.

Expand full comment

Interesting that under their guidelines "being trans" is categorized under "not enough to mark as trans-friendly".

Expand full comment

This is to filter out critics of the trans activism movement who are themselves trans, such as Blaire White and Sophia Narwitz.

Expand full comment

I haven't used or looked at that extension, so I don't know if it already has this, but an obvious feature to add to it would be auto-downvoting those posts in addition to underlining them. That way, even people who haven't installed the extension get the benefit of not being exposed to the views of bad people!

Expand full comment

That's just asking to be exploited as a DDoS-style attack on posts you don't like (DDoE: Distributed Denial of Exposure, maybe?)

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

If all it is is "the users of this extension don't read your posts", then okay.

The problem, of course, is that if you're motivated enough to want an extension that marks out "this site is transphobic", it may be because you're overwhelmed by all the transphobia out there and you, as a trans person, just don't want to deal with more crap.

Or it may be that you're an activist whose next step will be "This site is transphobic! We must shut it down because of all the harm it is doing!"

And this needn't even be malicious intent; if you believe that Thing is indeed bad and harmful and does violence, why would you be indifferent about it if you could easily do something to fight it? We all have Thing we believe is bad and dangerous (be that malaria, AI risk, or the like) that we would/should fight if there was an easy way to do it.

"This handy doohickey means you need never see bad stuff again" is not the problem. The problem is "what comes after, when people start using it?" - is it "great, I need never see anything more about ivermectin" or is it "now we know where and who the bad stuff is, let's organise to get it taken down and that person kicked out of civilised society".

Expand full comment

How is this different from Coincidence Detector, the one that put triple parentheses around names?

Expand full comment

Trans people aren't politically toxic.

Expand full comment

It would be very much in the spirit of the discussion to act on the perception that something is or isn’t politically toxic, a term which of course can’t be precisely defined.

Expand full comment

Weird, considering that ""transphobic"" things are some of the most heavily moderated against on the major social media sites. It's not like everyone's talking shit about transgendered people and getting away with it, and I certainly haven't seen any trans advocates opposed to centralized shutting down of anti-trans posters.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I've pondered an analogous idea but instead of in the world of information/content, in the world of laws that constrain freedom in the name of protecting people -- e.g. drug criminalization, regulation of financial services, etc. What if we kept the system but those of us who want to trade our protection for freedom have a way of opting out?

And similarly, when I pitched this to a friend, as in this post, he suggested that although it might be complex, we could have the ability to opt out in different areas, e.g. health, financial, etc. As broad or specific as you want.

Expand full comment

Assuming setting up a bunch of filters were even possible (false positives and false negatives happen) or politically feasible (why isn't there a filter for X niche issue?), there's the question of what default settings you're going to give a new user.

The power of the default is immense, and let's be honest, the vast majority of people would not know or care. If you set all of them too high or all of them too low, how many users would leave instead of adjusting them? If you set it up in the registration flow, how many users wouldn't make it all the way through because of the friction? There is no benefit to a social media company in implementing this sort of function.

Expand full comment
author
Nov 3, 2022·edited Nov 3, 2022Author

The default could be exactly what it is now - this is why I start with the very simple version, which I think is a Pareto improvement.

Expand full comment

"See banned posts" reduces a wide spectrum of potential harms into one binary. When I imagine the improvement in terms of "access/Free speech," it's from seeing - "speech."

The proposed switch almost certainly gets reduced to just seeing wave after wave of spam. You can have massive volumes of spam, interspersed with disgusting imagery, graphic violence, and pornography, all the way up to CP that gets reported to NCMEC and terrorism cells that get reported to the FBI, but in terms of volume, it's going to be spam, which nobody really wants and which I think is universally disliked.

I think this is one of those cases where a simple version isn't a Pareto improvement - it's a "it's going to be worse in the short term till it gets better" situation in complexity.

Expand full comment
author

I still think it's a Pareto improvement - even if "banned posts" is really terrible and 99.9% of people don't want to see it, those people are no worse off, and the 0.1% of people who really do want that for some reason are better off.

And then very simple changes to that - eg separating out "spam" and "all other categories" would make the magnitude of the improvement much bigger.

Expand full comment

I think the people who hate Politician X the most (in both parties damn you) will be the ones who are unable to resist toggling on the "show banned posts" button. And they are also the loudest. Maybe we need two buttons - "hide banned posts" and "hide people who refuse to use the first button".

Expand full comment

Unsubscribing from posts and comments from userX is a wonderful thing that has made FB less painful, for me.

Expand full comment

I also agree 100% on that.

This is a problem with a shared social environment, not on the comment level, and likely not even a matter of "OMG, I suddenly saw something that surprised me". People find certain ideas disgusting.

Expand full comment

Isn’t this already happening, but just not in a single shared forum? Like, you already have the option of posting to (and reading from) totally uncensored forums, as well as forums that are censored in specific ways. It’s just messy, and inefficient.

One reason a single shared forum with the toggles you describe might not take off is that people often *like* to feel like they’re speaking to a specific community. Both for bad reasons, and for more sympathetic ones (like, you know what the norms are, or you expect discussions to be high quality given the other people there, or you’re less worried about context collapse).

One interesting experiment in this space is Radiopaper. The central artifact of the site is a public exchange between two people, but the exchange isn’t published until/unless the second person replies. (Other users can comment on existing conversations, but these comments aren’t published unless the commented-upon person replies or approves the comment.) It’s designed to minimize trolling and promote high quality discussions, but I also wonder if the ability to moderate who publicly engages with you (and not just what you see) will be attractive to people.

Expand full comment
author

I think the main way this isn't true is that the Internet is getting more winner-take-all, most people want to be on the sites where the most interesting people and discussions are, and so the policies of that site have a major effect. If I changed my moderation policies on ACX, it wouldn't be trivial to find an exactly identical site where you could have conversations about ACX posts (actually, this is Data Secrets Lox, but that's a weird edge case). Censorship policies on Twitter and Facebook matter because that's where most of the conversations will happen regardless of whether people approve of the policies or not.

Expand full comment

Do you think that if SCOTUS rules that social media sites are common carriers, that Twitter and Facebook would implement something like what you suggest to enable a neutral but not-terrible experience for their users?

Expand full comment

To be honest, they’d probably do it in a worse way, where there would be different moderation toggles on offer, but the most popular ones would have a reciprocity condition (“my posts can be seen only by people who agree to only see posts which are moderated by such-and-such group”). You could recreate the terribleness of the current, awkwardly censored environment, as long as that’s where the important conversations are happening.

Expand full comment
founding

Heavily agree with this. In practice banning bleggism from the standard platforms will cause alternatives specifically for bleggs and friends to pop up, and these alternatives turn very quickly into only discussions of bleggism instead of the more broad purpose the standard site had. This leaves a void for those of us who want to discuss knitting or skiing with a diverse audience that happens to include bleggs. (This is different from the "million witches" idea, because in this case I explicitly *want* to talk with witches, just about things other than witchcraft.)

Expand full comment

I don't think this works with your original China example. In that situation, you argued that the important thing was just freedom of information - the main thing that matters is that you can access criticism of Xi Jinping if you want to, it's not shut out of your information environment. But now you're arguing that you need to not only access that information, but bring it into spaces where the most interesting conversations are happening, and that doesn't work with a simple opt-in model. If you have to opt in to seeing criticism of Xi Jinping, then you can only talk about that with people who hate Xi Jinping, and the information gets kept out of all the interesting conversations with people who love the Party.

Expand full comment

> And it would make the avoid-harassment side happier, since they could set their filters to stronger than the default setting, and see even less harassment than they do now.

Is this what the “avoid-harassment side” is worried about? Or are they worried about *anyone* seeing the harassment? Would the simple knowledge that the harassment exists and is potentially visible be a problem for many people?

Expand full comment

Yeah I was going to say something similar - this doesn’t fit my model of how the anti-offensive-material crowd behaves.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I mean, if someone posts: "EC-2021 is really XXXX who lives at YYYY and works for ZZZZ, internet do your thing" below every post of mine, my not seeing it doesn't really solve the problem.

And it's interesting that Scott doesn't seem to think that? Was he fine with the NYT piece because he didn't have to go read it and would have had to opt in?

Expand full comment

Turn on "Replies only by people I follow." Boom, problem solved.

Expand full comment

Hrm... I think that's a useful distinction, though I think some activities that are generally accepted as "moderation" would fit your definition of censorship, unless "your customers" could also be read as "the customer base you're trying to have"

For instance, let's say you're trying to create a forum devoted to some particular topic. A bunch of people show up, enough that they may outweigh the small initial userbase, and start posting about a bunch of stuff that has no real connection to the focus topic, but ends up starting big discussions that don't go anywhere useful and sort of destroy the idea of the forum being a thing for that specific topic, thus killing the ability to attract users who want to discuss that particular topic.

If "your customers" could be read as "the customers you were trying to have" or something, then it fits your moderation definition, else it fits your censorship definition, but I think it would be generally understood to be a moderation activity?

Expand full comment

This essay addresses something often overlooked in policy debates generally: the power of good defaults, both in public policy and in society more generally.

The economist Nick Gruen has written about this well in relation to public policy at https://clubtroppo.com.au/2005/08/20/designed-defaults-introducing-the-backstop-state/ :

"Wherever possible, and before it resorts to coercion either through regulation or monetary incentives, the Backstop State will seek to assist its citizens by setting ‘designed defaults’. Citizens would remain free to make alternative arrangements. But they could also rest assured that, if they did not exercise this right to choose, they would fall back on a default option that reflected expert opinion about what was the most beneficial ‘default’ possible ... Of course we should retain the choice to take matters into our own hands. But if we do not choose to do so, it is efficient for experts to design a default which is as well suited to people’s circumstances as they can make it."

One part of the trick, of course, is to choose the appropriate defaults. Another part is to surface the alternatives in just the right way. But it can be done.

Expand full comment

I'm hoping someone with a better memory can fill in the details on this. Back in the days of NNTP/Usenet, you had a .kill file: a file that you maintained on your own computer that determined whose posts would get immediately deleted from your feed. This worked moderately well for a while, because email addresses were scarce, and spamming by creating a new email address for each post was impractical for most people.

Some time later (and this is where I get quite hazy) people started sharing their .kill files, and I vaguely remember a way of distributing .kill files.

I don't understand why platforms don't implement something similar today. If you are a Biden-is-an-amphibian person, then get your moderation done by people in your in-group; if you instead trust the Biden-is-a-lizard crowd, have others in that in-group provide your moderation.
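
For what it's worth, a modern shared .kill file could be as simple as this sketch (all addresses and lists are hypothetical):

```python
# A .kill file as a set of author identifiers, merged with lists shared
# by whichever in-group you choose to trust.
my_killfile = {"spammer@example.com"}
in_group_killfile = {"biden-is-a-frog@example.com"}

def filter_feed(posts: list, killfiles: list) -> list:
    blocked = set().union(*killfiles)
    return [p for p in posts if p["author"] not in blocked]

feed = filter_feed(
    [{"author": "friend@example.com", "text": "hi"},
     {"author": "biden-is-a-frog@example.com", "text": "wake up sheeple"}],
    [my_killfile, in_group_killfile])
print([p["author"] for p in feed])  # ['friend@example.com']
```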

Expand full comment

Centralization and scalability - you can add additional moderation to what Twitter already does, but if Twitter doesn't want to host something they find abhorrent then you can't *force* them to. And even if the central provider didn't filter anything (maybe we replace Twitter with some hypothetical data-warehouse service which is hosted in Sealand to avoid legal issues), the Biden-is-a-lizard crowd doesn't have enough staff to sort through 500 million tweets a day to find the misinformation about Biden being an amphibian. They probably can't even keep up with the basic spam filtering.

It's possible you could find a workable arrangement with this model - maybe you have some sort of mostly-neutral provider who only does spam filtering, then layer that with another provider who specializes in filtering porn and gore, then once it's small enough to handle you can send it to another provider who filters it down to your preferred political views, but it would take a decent bit of infrastructure and I'm not sure how any of these parties would make money from it.
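
As a sketch, the layered arrangement described here is just function composition; the provider names, post fields, and ordering are all assumptions:

```python
def spam_layer(posts):      # mostly-neutral base provider
    return [p for p in posts if not p.get("spam")]

def content_layer(posts):   # specialist provider for porn/gore
    return [p for p in posts if not p.get("nsfw")]

def politics_layer(posts):  # final, user-chosen ideological provider
    return [p for p in posts if p.get("leaning") != "disliked"]

def layered_feed(posts, layers=(spam_layer, content_layer, politics_layer)):
    for layer in layers:
        posts = layer(posts)  # each provider shrinks the stream for the next
    return posts
```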

Expand full comment

The First Amendment allows for some exceptions to free speech, such as direct calls for violence, obscenity, or noise regulations.

Expand full comment

> direct calls for violence

It isn't that broad. It needs to be incitement to imminent lawless action. So posting on a message board that everybody should go burn down city hall and kill the mayor would still be protected. Because, though it's a call for lawless action, it isn't imminent. You'd basically have to be standing on the sidewalk outside of city hall and handing out torches.

See: https://en.wikipedia.org/wiki/Imminent_lawless_action

Expand full comment

From 1994 in response to https://en.wikipedia.org/wiki/Serdar_Argic

"...at the time, there was a fear of the free use of third-party cancellations, as it was felt they could set a precedent for the cancellation of posts by anyone simply disagreeing with the messages. [...]

The Serdar Argic posts suddenly disappeared in April 1994, after Stefan Chakerian created a specific newsgroup (alt.cancel.bots) to carry only cancel messages specifically for any post from any machine downstream from the "anatolia" UUNET feed which carried Serdar Argic's messages. This dealt with the censorship complaints of direct cancellations, because carrying a newsgroup was always the option of the news feed, and no cancellations would propagate unless the news administrator intentionally carried the alt.cancel.bots group. If sites chose to carry the group, which most did, all of Serdar Argic's messages were removed from all newsgroups."

Expand full comment

I do feel that all these free speech v censorship debates miss the main point.

There is a genuine free speech argument around scientologists, Nazis, pro-anorexia sites but that doesn't seem to be the main issue.

Xi, Putin, and other censors seem most concerned about things which are either mainstream views or the truth.

There isn't really a compromise available in the vast majority of free speech debates where someone powerful wants to hide something and is willing to abuse the apparatus available to censor real information.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

These are exactly the same issues, if you're the sort of person censored by Western gatekeepers. Scientologists and Nazis are low-status people who are declared to be Wrong About Everything. Putin and Xi also declare their opponents to be Wrong About Everything. The only difference is that you more or less agree that the Western mainstream is right and dictators are wrong.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I think it's fine for a social media company to implement this type of plan, but I also think it's fine for it to moderate in any way it wishes, so long as its terms are clear. If Twitter did this and I were a Twitter user, I'd want to see all messages--I'm not worried about being offended and I'm curious--but if I saw that Twitter was a vehicle for activity I felt was legitimately illegal (sharing child porn), toxic to the political framework I value (spreading conspiracy theories designed specifically to undermine civic trust), or a debasement of social norms (normalizing routine discursive cruelty), I'd unsubscribe. Why would I want to pay money that supports and magnifies that type of behavior? If there are enough people like me, the platform can kiss its ambitions for growth goodbye.

I think there are legitimate grounds for censorship of online activities that are prima facie illegal (e.g., child porn; incitement to violent crime)--the government can require platforms to report and enforce bans on such activity. That's "censorship," but we've already sanctioned it by statute--we don't normally treat bans on child pornography as censorship.

A private company instituting its own bans is not censorship: it's a business model. I don't think the government should dictate what business models corporations adopt; I think users should, by opting in or out. If they demand that their social media ban certain behaviors, and unsubscribe if they don't, that's not censorship; that's freedom of speech and association.

Many people may not care that the social media they use for fun or convenience serve to amplify conspiracy theories or antisocial norms--they can use Scott's filters or not. But I think many customers will care, and if the filter buttons enable these negative effects, their business is likely to go to a service that bans those behaviors. The optimal result would be multiple social media platforms serving different audiences. If conspiracy theorists want to spread fear and loathing, they'll be free to do so, just as they can now on Parler and throughout the dark web. I don't see any reason Twitter or any other private company should feel the slightest ethical obligation to accommodate them if it finds their posts unethical.

PS: And if Scott wants to ban me for this post and erase it from the site, that's up to him. I promise not to complain about censorship or cancel culture (though I can't promise not to be surprised)!

Expand full comment

The challenge is that a multi-location internet has always existed, and there really isn't a large barrier.

Twitter isn't new technology. Parler didn't take coding geniuses to create. And any "winner takes all" aspect of the internet is really about how people want to connect to the larger societal community. If I'm honest, I really suspect that some part of the moderation debate is a disguised status debate.

Some bans are absurd, but most banning discussions are not actually about people getting removed from a website for saying "Republican". They're about perceived social transgressions, and whether those are violations of the social fabric, or acceptable acts.

Expand full comment

Well, the debate may be a status debate, but I don't really see what difference it makes. Status debates are as legitimate as any other social one-upmanship--they're often petty and socially unhealthy, but they've been part of social life for several millennia. There's no principled issue about them I can see; they have nothing to do with "free speech" or "censorship," and they don't require government regulation--government shouldn't be part of them at all.

In the long run, I think we'd do better to devote efforts to teaching young people media literacy and about apophenic tendencies we all share and need to keep under control so we don't get caught up in cultic thought and behavior. Hopefully, future generations can be forearmed against this net of troubles.

Expand full comment

The challenge of a status debate, is that status debates aren't questions about how to manage something using some technological feature. They're explicitly zero-sum.

So saying, "We can do X, Y, and Z, and remove the conflict!" may not provide much value if it doesn't capture the actual conflict, and if the actual conflict is intrinsically zero-sum.

No disagreement on the last paragraph. (TBH, I think my original comment was not an argument against so much as a riff in response, although I do think the ideal framing may be messier than "gov. regulation vs not". While I'm skeptical of the framing "gov. vs not gov." as I can imagine free regulated societies, and unfree unregulated ones, I don't have some clearly articulated vision. A hypothetical duopolistic world between Amazon & Walmart would not be very free, but it may be more transient and easily changed than the CCP, as an example.)

Expand full comment

I see your point about zero-sum, melee. I hadn't pictured the issue that way, perhaps because I don't post on Twitter.

BTW, Noah Smith has an interesting post on status issues this morning on Noahpinion. In the comments, someone suggests that status is itself a zero-sum game, and Noah responds, "High status is. Normal status can be extended to everyone!"

I think that interchange reveals an ambiguity in the way we use status: one concerns striving for recognized superiority, the other for recognized inclusion. If so, then those two function very differently in terms of creating social value, by which I don't mean social capital: I mean socially-based self-fulfillment, the currency of the latter being something like mutual empathy. I have formed non-hierarchical personal relationships with people in the real world that initially grew out of adversarial online efforts to top one another on comment boards. What had to happen first, though, was a point where attention to the argument and winning it (gaining status) shifted, for a reason extraneous to the argument, to interest in the person. So I wouldn't reduce all social media interaction to status competition--there's more going on, even if people don't build on that very often.

Expand full comment

I always hear about censorship as a debate over politics, which is not really about whether MY posts get censored, so much as whether "my group's" posts get censored.

As other posters pointed out, these same sorts of conversations also exist about "shadow-banning" where a poster is permitted to post, but their reach is limited.

The issue is that if this isn't about "my personal posts" but about "my tribe's posts" then there will always be a zero-sum aspect, as no matter how you configure things, the actual contest is about the high-end of status. "normal status" still exists if you instead post on any other corner of the internet. "high status" only exists if you're viewed and respected on a central hub social media site.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I don't actually think banning is generally about politics, melee; debates about banning generally are. Posts and posters are most often banned for elements of personal conduct (trolling, threats, libel) or for patterns of false statements that may have political intent, but that are problematic on social media because they are false.

In any case, when it comes to the political dimension, I think the issue isn't really status at all. When someone is banned from a central hub site, or a category like grossly false claims or invective is banned, the impact tends to fall very unevenly on the Right, and the issue is political discrimination on that basis. The problem doesn't seem to me "You're disrespecting my tribe!" so much as "You're disadvantaging my politics!" I just don't see how the category of status sheds light at that level, although I'd agree it certainly does contribute to understanding the personal level.

Expand full comment

All we ever wanted from social media sites was an infinite reverse-chronological list of the posts made by the people/organizations we follow.

We do not have that because it costs more than we are willing to pay for it ($0.00).

Expand full comment

Tumblr still does that!

Expand full comment

Didn't they ban porn?

Expand full comment

I thought all you wanted was a reverse-chronological news feed from people you follow. Now you want one with porn, too? :P

Anyway, yes, Tumblr banned porn, IIRC to avoid getting kicked off the Apple app store. They partially reversed this ban recently (as in, a few weeks ago) - they allow nudity with the proper flags, but not "sexually explicit material."

(That's sort of vague, but people are speculating it will end up like DeviantArt, where outright porn is not allowed but fetish art that doesn't focus on genitalia is still fine.)

Expand full comment

> IIRC to avoid getting kicked off the Apple app store

Twitter has porn, and it's been on the Apple App Store the entire time.

That's not the reason.

Expand full comment

> All we ever wanted from social media sites was an infinite reverse-chronological list of the posts made by the people/organizations we follow.

No comments?

Or perhaps comments could be implemented on top of this protocol. A comment is fundamentally a post written in response to (i.e. hyperlinking) another post.

From this perspective, if you write a post and I write a comment, my comment will only be visible to those people who follow me. So there would be no functionality "display all comments on this post", because the comment links the post, not the other way round. Or maybe the author of the post should have an option to link the selected comments back.
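
A sketch of that model (the field and type names are hypothetical) - a comment is just a post with a link set, and a timeline is built from who you follow rather than from threads:

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    in_reply_to: "Post | None" = None  # a comment is a post with this link set

@dataclass
class User:
    name: str
    follows: set = field(default_factory=set)

def timeline(user, all_posts):
    # Everything from followed authors, replies included. There is no
    # "all comments on this post" view, because links point from the
    # comment to the post, not the other way round.
    return [p for p in all_posts if p.author in user.follows]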

Expand full comment

"The current level of moderation is a compromise. It makes no one happy. Allowing more personalized settings would make the free speech side happier [...] And it would make the avoid-harassment side happier"

I agree that allowing more personalized settings would be a boon, but I'm not convinced that the vast majority of users, who, I suspect, occupy neither the free speech nor the avoid-harassment side, are dissatisfied with the status quo. If, e.g., someone uses Twitter merely to follow their favorite celebrities and talk about sports, what do they care that another, less normal member might get banned for jokingly threatening to murder someone, or because the moderation algorithms mistook their mockery of racism for the real thing?

Also, I think a case can be made for not allowing malefactors further opportunities, even if they are stuck on a blacklist. Imagine a social media site where Donald Trump was present, but unable to be viewed by default: He'd still exert a massive social-gravitational pull, with many users hopping the content-fence to view his latest posts, such that doing so would become a requisite for understanding the latest discourse-brouhaha. Similarly, it's not as if someone with tens of thousands of followers couldn't orchestrate harassment-campaigns from behind a blacklist. An outright ban is a blunt instrument, but it has the virtue of not allowing someone to stick around and find ways to game the system.

Expand full comment

This is pretty much why I've never found the "ignore" option in forum software particularly useful. Even if you no longer see posts from whomever you don't want to see, you'll still see people replying to that user, people referencing whatever that user's latest antics are in unrelated conversations, etc. Even if you can't see the onion, you'll still be aware of the onion-shaped void left in its place.

Expand full comment

I think one aspect of non-censorious moderation this misses is harassment in the form of "publishing private personal information that directly opens somebody up to harassment."

So, for example, if you post a nude picture of me with my personal address and phone number without my permission, I think that clearly falls on the side of "harassing me" even if I have a filter that keeps me from directly seeing your post, because the entire point is to enable people to harass me outside of the reach of the website's moderation policy, and because it isn't really expressing a "point of view" in any meaningful way that would make the value of this speech outweigh the harassment aspect.

As far as I know everyone who does moderation has to deal regularly with situations like this, and "censorship" doesn't seem like a useful word for it. (Obviously a policy like this *could* be used for censorship, if e.g. Biden decided that critical posts about his policies were personally harassing him and asked for them to be taken down. But the baseline feels fundamentally different.)

Expand full comment

What’s to stop Twitter from arguing that they already do this? It feels like there’s a case to be made that the existence of Twitter/Parler/Tumblr/etc creates a range of “flags” about different kind of speech that are allowed/forbidden, just at a higher level of friction. Now agreed, Twitter could reduce friction by replicating the diversity of the internet within their site, but at some point “degrees of friction” seems like it muddies the waters between “moderation” and “censorship”

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

Ignoring the personalization/moderation side for a second, one of the more practical issues with the "everything should always be up for debate for anyone willing to listen" position is that nothing ever has any finality. There is never any "okay, we are done for a while".

Similarly, the censorship debate is often argued as all or nothing, for all time.

Neither seems particularly optimal for society.

That is:

Take a thing that 99% of society agrees is *truly* horrible.

If you don't censor (we'll get to moderation in a second), every day you get to debate, from first principles, with the 78 million people (1% of 7.8 billion) who feel the other way. That's a lot.

This isn't particularly optimal in value for society.

Never debating at all doesn't seem particularly optimal either.

But this is where we end up most of the time (IMHO) - either full censorship, or no censorship, for all time.

Moderation doesn't fix this. The problem is in the arguments for censorship in the first place. As you say, they do actually have some value - whether you opt in or not, it's still not particularly valuable to society (for most reasonable definitions of value) to *constantly* argue over whether the "truly horrible thing" is horrible or not. Even if people think it is fun to do!

A more reasonable position (to me) would be to change how often we accept re-debate on things, or the degree of re-debate, or ...

Maybe we re-debate whether it's okay to run over a group of kids in the park for no reason only every 5 years. Or maybe we only do it on "say any crazy thing you want Tuesday".

Maybe it varies depending on what percent of society agrees, or all sorts of factors.

Yes, this would be hard to get right, require lots of balance, hard choices, etc. Same as anything else in the world. IMHO, we go for the all-or-nothing extreme positions because they are seductively easy to achieve - turn it all on or off. Not whether they actually achieve the best result.

Lots of the above view could be transformed into "what should the defaults of moderation be", but it wouldn't change the basic point - if the goal of all of this debate, moderation, and censorship is to have some valuable outcome for society, we should be arguing about how to achieve the most value for society - that seems super-unlikely to be achieved through 0% censorship, or 100% censorship, as much as anything in the middle is painful to achieve.

I will also point out - we formed representative governments and similar representative structures because direct democracy didn't scale in lots of ways, one of them being that direct debate on issues among the entire population was not just physically hard (and admittedly easy virtually now), but also not an effective way to get anywhere or decide anything even when you *did* get everyone there.

So it's also somewhat amusing to me that some folks seem to believe that direct debate among 7.8 billion people on every topic is a useful exercise in public discourse. If we are doing it for shits and giggles, sure, whatever. But as a mechanism of useful public debate and discourse? We already know it isn't. We've consistently rediscovered this in basically every society ever built, at every meaningful population size. Regardless of any level of censorship or moderation involved.

Expand full comment

The difference is network effects and monopoly power.

Expand full comment

What about the advertisers?

Your minimal viable product doesn't seem advertiser-friendly at all, so it would have to be a literal paid product and not an ad-based free platform.

Is moderation that’s conforming to advertiser demands censorship?

They’re exerting control over communication where both the sender & recipients consent

Expand full comment

The problem with your ideas around moderation is that this Filter you're discussing has to be decided by someone, and getting filtered is the same thing as getting censored. Look at all the recent controversies on YouTube over getting videos age restricted.

Even the *smallest* barrier to your posts being seen is an equivalent to the free speech corner, and thus it's just a really clever way for censors to pretend they aren't censoring.

Expand full comment
founding

I both agree and disagree with this. I agree that your message isn't being shown nearly as much, but moderation (as Scott calls it) simply reduces the *degree* of its reach, while censorship is a difference in *kind*: the message has to find a completely separate platform.

Age restriction lies somewhere in the middle; the fact that you can't view such a video without logging in (and ostensibly can't at all if you're not able to convince Google there's some iota of a chance you're 18+) makes it semi-censorious rather than just hidden from the default view. My idea of true moderation would allow for a single switch flip or URL parameter that disables all filters.

All in all, I generally believe that free speech (the social value, of which the First Amendment is simply the US gov's promise to stay out of the question altogether) is more a freedom to listen than a freedom to speak, and this article cuts straight to the crux of distinguishing the amount of burden a prospective listener faces. If I can flip a switch in my settings, it's relatively minimal. If I have to find out about thisdomaindoesnotactuallyexistqwertyuiopasdfghjklzxcvbnm.onion and download a separate browser, it's prohibitively expensive.
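
A minimal sketch of that "single switch flip" idea from above, assuming a hypothetical show_all parameter on the feed request:

def render_feed(posts, filters, show_all=False):
    # show_all=True (say, from a ?show_all=1 URL parameter) skips every
    # moderation layer: content is hidden by default, never unreachable.
    if show_all:
        return list(posts)
    for f in filters:
        posts = [p for p in posts if not f(p)]
    return posts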

Expand full comment

I don't like this distinction, and this isn't the argument I think should be made in the first place. Moderation is censorship. I cannot speak to the public on a platform in any manner I want, because some third party (the mods) has decided I may not violate their arbitrary rules.

The key point is the distribution of power. In a setting with very few global rules and many variations of local rules, individuals reign mostly supreme. Don't like how a locality runs their things and they won't change? Leave and make your own. This is like Reddit, where admins handle site-wide (but ideally limited) rules and violations of said rules, and volunteer moderators who have a stake in the group's success manage their own areas.

In contrast, a place where the global:local rule ratio is more equal (or just weighted more to the global side) is one that is engaging in censorship. This is equivalent to Facebook or Twitter, where one centralized ruleset governs everybody (leaving no one happy except those who agree with the status quo).

In my opinion, the argument should be "make Reddit-like segregation the norm".

Expand full comment

A counter-argument to this is that media fragmentation, which can be described as Reddit-like segregation, has plausibly played an important role in the destruction of bipartisan consensus and increasing polarization in the US. When everybody watched the same news, it created a shared narrative and incentivised avoiding spicy provocations of large fractions of the population, whereas these days everybody has their comfortable outgroup-bashing echo chamber.

Expand full comment

The problem is that it makes media capture a dangerous thing. If we have only a few news orgs and they are all pro-war because of their ideology, it doesn't matter how anti-war the country is, that viewpoint will never be expressed with the power of the pro-war one.

A million echo chambers is a better failure state than a zero-sum public space because no one group usually carries enough power to unilaterally enforce its laws. So when you try to get laws made, you quickly run into needing to convince others that your ideas are correct, as opposed to relying on authority granted by being in the Overton Window/status quo.

Expand full comment

While I agree that more user control over what I can and can't see would be an improvement over the current "Twitter can ban anyone for any reason" scenario, I think there are two issues with that. The first is that advertisers are often the ones who want control over where their ads appear, and what content it appears next to. This is partly why I think charging for a social network leads to a better network than an open one driven by ad revenue.

The second problem is one many in the comments here have listed, which is that deciding what hits the filter and what doesn’t is itself the problematic issue, not whether or not the ability to filter exists. Here, I shockingly would say that it would be better for the government to ensure a “right to post” in the bill of rights, and to then set restrictions that are sensible and can be revisited by an elected body... NOT random, nameless Silicon Valley bureaucrats with no accountability or interest in the public good.

Expand full comment

If the moderator works for you, there is an agency problem (maybe he moderates things in ways you don't want but you put up with it because his moderation is better than you can get otherwise). But that's still different from a moderator who works for someone else--the local government, the locally dominant church, some committee of information hygiene, etc.

To echo Scott, it seems like very often people switch between the moderator-works-for-you scenario (you don't want randos sending you dick pics) and the moderator-works-for-society scenario (covid misinformation, hate speech, and the latest Hunter Biden sex tape should not be accessible to you even if you want it).

Expand full comment

What if the moderator consisted of either A) a large panel of people with diverse opinions who must negotiate over decisions, or B) user-driven moderation with upvotes/downvotes like Reddit? It strikes me that in both those situations, the moderator doesn't necessarily work for one interest group, which seems good to me.

Expand full comment

'I never found a thought yet that I was afraid to think, and if I ever do find such a thought I'll go ahead and think it just for spite.' -Dr. Seuss or free quote from memory of Soren Kierkegaard?

Expand full comment

I guess Dr. Seuss didn't have anxiety, then.

Expand full comment

Fear and trembling when faced with green eggs and ham? Gotta admit Mr Either/Or does sound like a Seuss character. Lol

Expand full comment

I come to the site on the premise that "it's your blog and you can moderate or ban in whatever the heck way you choose". (Is "heck" OK on this site?) What I would love to see on ANY commentable site is a detailed, nay explicit, map of moderation protocols, rules and regulations, WITH LISTS, so commenters know where they stand. Our problem is that you hold all the moderation/banning power. What might seem perfectly reasonable to everybody on the site may get your goat, and earn a ban. Knowing just where the (your) line is would be useful for civil discourse.

I comment on our online national newspaper and, after 5 years, I still cannot work out what I cannot say without getting my comment rejected. In an article on fascism, sometimes you can say fascist or fascism in the comment; other times it's deep-sixed and I have to delete the word. My comment on an opinion article about someone's racist comments or proposals will usually, but not always, be canned if I mention that person's racist behaviour. Even replacing "racist" with "racial intolerance" doesn't always do the trick. It's a total lottery with no transparency. They also have auto-rejection for certain words, which I have laboriously compiled over time, but most are anodyne in the context. The funniest was an auto-reject for repeating the name of a South American river, the Rio Negro, which was in the article. The weirdest was my use of "hit" as in a hit song - the bot thought I implied violent action. Hint - do not use auto-reject bots as moderators.
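
For what it's worth, the failure mode described above is exactly what naive word-list matching produces. A sketch (the banned list here is just guessed from the rejections described):

import re

BANNED = {"hit", "negro", "racist"}  # hypothetical, inferred from the rejections above

def naive_reject(comment):
    # Flag any comment containing a banned word, with no sense of context.
    words = re.findall(r"[a-z]+", comment.lower())
    return any(w in BANNED for w in words)

naive_reject("Their new single is a real hit")   # True - false positive
naive_reject("The Rio Negro is in the article")  # True - false positive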

Expand full comment

It's pretty hard to write a Perl script that can figure out what the heck you're trying to say and distinguish between a constructive comment that nevertheless uses a bad word, and something vile. The only sure-fire way to do it is use a human being, and then you get this:

https://www.bbc.com/news/technology-57088382

Expand full comment

Twitter generates about 500 million tweets a day, so I think some amount of automation is inevitable. (And also makes it inevitable that they will get things wrong. Even if the system is 99.999% accurate they'll end up unjustly censoring 5,000 people a day.)

Also, no matter how explicit your moderation protocols, someone will find a way to rules-lawyer them. People who want to be offensive will walk right up to the very edge of the line, and then when the mods finally lose their patience they'll be like "Why did you ban me? That wasn't hate speech, I was just discussing the hypothetical consequences of removing minorities from our country."

I frequent a message board which has a very detailed, well-structured process for moderation decisions, and even a formal appeals process, but their rules still have a catch-all along the lines of "If you are being consistently disruptive, you will be banned even if we can't find a specific clause to ban you under."

(This doesn't really apply to your specific problem of wordlist filtering, that's just lazy design and/or being too cheap to hire actual human mods, but the general problem is basically impossible to solve at scale.)

Expand full comment

I've previously toyed with even stronger positions than this (e.g., monopolies that host content should be restricted to banning only speech which the government can ban under the first amendment. The argument being that anything else is a kind of abuse of monopoly power.)

However your reasoning here seems wrong to me:

"Moderation is the normal business activity of ensuring that your customers like using your product. If a customer doesn’t want to receive harassing messages, or to be exposed to disinformation, then a business can provide them the service of a harassment-and-disinformation-free platform."

The position you go on to outline effectively assumes that customers won't decide to leave, so long as they can't see the offending content.

A lot of users will simply refuse to interact with a product if it has speech which, in their opinion, exceeds a certain threshold of offensiveness - even if they can't see it. They may also not want to be on the same platform as people who enjoy viewing such content, for various reasons. Hence attracting users, and the real customers, advertisers, may require completely removing speech. Brands in particular would not be comfortable with the argument "yes we have Nazis on the site, but don't worry, we've sent them to a seedy-underbelly/dungeon". Under your definition, then, completely deleting content can count as moderation, as it is the "business activity of ensuring that your customers like using your product."

Expand full comment

You can of course argue that users and brands are irrational or immoral to draw this line in the sand, but that doesn't change the fact that catering to their irrational or immoral preferences is part of the "normal business activity of ensuring that your customers like using your product."

Expand full comment

For my part, despite the free speech concerns I've raised about monopoly, I would worry that allowing a space to Nazis on a site, even if they were somewhat digitally-sequestered, would bleed over into the culture of the platform in all sorts of ways.

Expand full comment

I think this is true at the extremes, but also that the actual content that gets moderated is often pretty far from the level that would lead lots of people to not want to associate with the site.

Expand full comment

This take seems right to me. Imagine if someone at work sent out an email with a racist joke that is covered with spoiler tags. People would still find it offensive, even if they didn't read the joke.

Expand full comment

I think the "advertisers might hate it" argument is a canard; some will, some won't. But a portion of the censorship has been encouraged or requested by government entities, which is in itself unconstitutional.

Also there's clearly something broken in the market mechanism. It's not clear what, to me. Is there hidden government influence? Is there some kind of collusion? I don't know. But when a lot of powerful firms, and without any obvious business reason, coordinate to freeze out a person or an entity from broad swathes of services or the public square, that's Not OK, regardless of how despicable that person is. Parler is mentioned; that was quite a flex.... but I recall the Kyle Rittenhouse trial, and how his defense crowd-funding got kicked off of all the major sites (GoFundMe, etc.) and then the random Christian one that took him on got dropped by PayPal and had some kind of difficulty with Discover Bank? Whatever your opinion of the case (and before it was even decided!) in what realm of functioning society is that sort of thing OK?

Expand full comment

A good point, well made.

But it does make me curious. If Substack allowed commenters to block each other, would you consider "moderation" of your own comment section completely unnecessary, even ethically undesirable?

I'm not trolling you, either. I don't know what I would do. I think it's a very difficult question to answer when it gets closer to home, and your name starts being associated with the conversation, even indirectly. (And I don't think the people who run FB or Twitter are entirely unconcerned with this point.)

When I look at places that resolutely refuse to moderate at all, like the comment section at Reason, I'm impressed by how often it descends to the sewer and one just doesn't want to bother wading through the shit soup to find the occasional bright gem -- and I have a very high tolerance for wading through shit, as well as a very thick skin.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022
Comment deleted
Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I always thought Slashdot had hit upon a useful combination. Everybody in the community has, on rare occasion, mod points he can use to moderate comments. More often, a larger number of people from the community "meta-moderate," in the sense that they decide whether a given moderation was good or not. How you are meta-moderated affects the probability that you are given mod points in the future. (Both moderators and meta-moderators are anonymous.)

Expand full comment
Comment deleted
Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I don't know, I don't read reddit (and it's been a while since I read Slashdot, although I guess having a 6 digit user number was something ha ha). Can you up/down vote indefinitely? Slashdot only allowed a range of -1 to +5, and only -1 was invisible by default, and you could always ask to see what was hidden. Maybe the very limited range was helpful. I feel a little like this is where digg.com went off the rails, in allowing a massive piling on.

Also remember it was only a small set of users who got to moderate any one page, not everybody. And a significantly larger crowd of users would be judging how well you moderated. So if you seemed like a vengeful dickhead you might never be given mod points again. Plus the opportunity to moderate was sufficiently rare that you wouldn't really waste them piling on something you didn't like. Whenever I moderated, I would spend my few points looking for some overlooked interesting point and boost it.

Expand full comment

It’s a good idea and method for smaller communities that already share certain norms and values, but on Twitter most users would simply use “moderator points” to ratio people they don’t like.

Expand full comment

Well, you could do that, but remember you don't get many mod points, and you don't get them often. So a pile-on is sort of pointless: once a post gets -1 mod points, it can't go any lower, so there's no point to piling on. And similarly once it gets to +5 it can't go any higher, so there's no point to saying yeah me too. The most bang for your buck is finding something at 0 you think is worth seeing and boosting it to +1 or +2, and hoping someone else agrees and takes it to +5.

But yes it might rely on a community that is largely self-selected and interested in a high signal-to-noise ratio. I'm working off of memories 10+ years old here.
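
A sketch of the clamped scoring just described, under those same remembered details (the -1 to +5 range is from the comment above; the rest is assumption):

def moderate(score, delta):
    # Apply one mod point and clamp to the Slashdot-style range.
    return max(-1, min(5, score + delta))

def visible_by_default(score):
    return score >= 0  # -1 posts are hidden by default but can be requested

score = 0
score = moderate(score, -1)  # -1
score = moderate(score, -1)  # still -1: piling on past the floor does nothing
score = moderate(score, +1)  # back to 0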

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

Potentially even simpler and more elegant is for the platform to let third parties provide opt-in filters (moderation services) that users can subscribe to. This would let the platform completely avoid being the arbiter of what is acceptable, except as required by law. And it enables quality moderation without the platform having to eat the costs or pass them on to users.

Quality, politically-neutral filter offerings might charge a fee or run extra ads, but I can easily imagine various non-profits and political groups providing good "free" filters in hopes of nudging public discourse in their desired directions.

In the end, what probably happens is that very few people actually use more than the most basic free anti-spam filters. That still looks like a win to me, as it becomes harder for people to use moderation as a weapon and an excuse for taking offense. Among other things, this could help blunt cancel culture.

Expand full comment

I'm actually working on building a client to do that. I should really get some blog posts written to explain it but that is what I am working on at the UMass Amherst Initiative for Digital Public Infrastructure right now. I'm hoping to get a demo up and running in the not too distant future.

Expand full comment

The problem is that in order to click that "see banned posts" button, you have to *want* to click the button. And in order to want to click the button, you have to know that there are banned posts worth reading. And you wouldn't know that, because you haven't clicked the button. And this describes 99.9% of the users, in my experience.

Expand full comment

Let me see if I can present an intellectually honest version of the mainstream response to this idea. It would go like this: sure, optional filters are all well and good for NSFW content and other relatively minor use-cases. They're already used for such purposes and additional uses would be fine.

However, the CENTRAL issue with social media, the elephant in the room, is that as everyone now knows, if it isn't heavily filtered then populist authoritarians will use it to spread lies and conspiracy theories and seize power all over the world. And at least in the US, the populist authoritarians are telegraphing more and more openly that the next time they take power will be the *last* time -- i.e., that as soon as they're back in power they'll do as Curtis Yarvin advocates and stop the machinery of democracy itself, as did the Bolsheviks in 1917 or the Nazis in 1933. Right now, then, the whole survival of Enlightenment civilization depends on Silicon Valley, which still has some leaders smart and public-spirited enough to understand all this, putting its thumb on the scale and using its power to prevent the lies and conspiracy theories from getting the sort of traction that they unfortunately would in a "free marketplace of ideas." In the language of this post, what's needed is not merely moderation but censorship. Indeed, the stakes have become so clear that if Silicon Valley *won't* censor, the suspicion arises that it's because it secretly wants the populist authoritarians to win.

This is an ugly, cynical, and depressing theory -- there are excellent reasons why it's almost never stated so openly! Alas, I also give it at least ~25% probability that, a decade from now, we'll look back and say, damn, the theory was true.

Expand full comment
author
Nov 3, 2022·edited Nov 3, 2022

I think my response to this case is - if you believe it, why defend democracy?

Like, it sounds like the claim is that, when the common people aren't protected by censors, they naturally tend towards electing evil people. If the censors succeed in protecting the common people, then the common people will instead support whoever the censors tell them to - if you have progressive censors, they'll support progressives, if you have theocratic censors, they'll support theocrats, etc. But then all the work (of deciding the right form of government) is being done by the censors, not the populace. So why bother letting the common people vote at all, other than that it makes us feel good to falsely imagine that "democracy" is still in the loop somehow.

I really do feel like this kind of argument reduces to "I openly believe we need to destroy democracy, because if our enemies get in charge, they might destroy democracy". I think this is sort of where the populists are coming from too - given that the liberal elites are openly plotting to destroy democracy (eg by censoring every opinion they don't like, making it impossible to get a job if you speak out against them, burying articles that critique one party just before the election, etc), they need to take strong action to prevent that (eg breaking up tech companies, breaking up universities, some kind of Yarvin-esque anti-elite reign of terror, etc).

Everyone here has valid concerns, but the only way to avoid destroying democracy is to not advocate for the destruction of democracy. I think what a lot of people are doing now is a kind of brinksmanship of "well, maybe we'll destroy democracy a little and see if our enemies are bold enough to advocate for destroying democracy a little more", and I think they definitely are. I also think people are doing some kind of bargaining for "well, democracy's going to get destroyed anyway, so maybe we can at least keep our side in control of the post-democratic order", whereas I don't think this is true at all.

I think one very likely way for democracy to die is by a leader whipping up panic that they need to take strong action or else their enemies will destroy democracy, and I would prefer to have strong defenses against that.

Expand full comment

But Aaronson just seemed to suggest the censorship of "lies and conspiracy theories", which is much less extreme than your interpretation that "the common people will instead support whoever the censors tell them to", i.e. in the absence of lies and conspiracy theories the people will not necessarily support the candidates or policies of the elite censors, so at least in principle the approach is consistent with maintaining democracy.

Anyway, I'm not at all sympathetic to Aaronson's argument, but it just feels like your response is not quite on target.

Expand full comment
author

Thanks, this is a fair point, and I should write a post responding to it.

The short version is: I don't think "lies and conspiracy theories" are a natural, neutral category. For example, if you look at Trump-won-the-election stuff, what you find is mostly people who know a little-but-not-very-much statistics, pointing out that some precinct's vote count was very weird according to some test. When you look at good debunkings, it usually looks like "that test was applied incorrectly" or "in 1000 precincts, one of them should look like that by chance". The false claims aren't lies (I'm not trying to make any claims about whether the people pushing them are being honest or not), they're true facts being interpreted unskillfully or out of context. I think this is much more typical than someone completely making things up.

But "facts being interpreted out of context" isn't a natural easily-censored category - there's an infinite amount of context for any possible claim. For example, I believe that any attempt to talk about how only 20% of Google programmers are women (or whatever the real statistic is) in a way suggesting Google might be sexist is out of context and misleading unless you mention that only 20% of interviewees were women, or only 20% of comp sci graduates are women, or whatever the relevant statistic is now. Is this exactly as misleading and out of context as saying that some precinct only has an 0.001% chance of votes naturally turning out that way if you run XYZ test (where the context is that that test is statistically inappropriate)? I would say yes. But then we would have to censor all the "OMG not enough women in tech!" stories. But at that point, you're censoring wide swathes of the mainstream media, and a censor who chooses to preferentially censor some things but not others has almost limitless power to bias the narrative.

I think what actually happens is that we're much harsher on "Trump must have won the election because XYZ bullsh*t statistical claim" than we are on other equally bad statistical claims, because "we" are much more concerned about the negative results of people believing that. But this reduces to the previous problem of "censors censor things they believe it would be bad for the public to know", not a value-neutral "let's censor lies and conspiracy theories".

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

Yeah, I agree that the notions of "lies" and "conspiracies" are vague and ripe for abuse, though I suppose Aaronson could reply that the technological development of social media is such a threat to democracy that we have no choice but to weaken our traditional free speech norms and hope that the wise silicon valley censors will draw these lines in a somewhat reasonable way, i.e. that some abuse/censorship may be the price we have to pay to preserve democracy. One concrete example in favor of this pessimistic view is the role that Facebook and viral content seem to have played in facilitating mass violence against the Rohingya, which is a technological impact that Mill may not have considered when writing On Liberty.

In general, I wonder if it's not really possible to rebut Aaronson without directly tackling his claims about the threat to democracy from social media, i.e. presumably if some new communication tech really combined with free speech to create a sufficiently large threat to democracy, then we would need to consider weakening our free speech norms.

As far as counter-arguments to Aaronson, perhaps one that could be made is that including factchecks and context alongside "misinformation" is plausibly as good or better at protecting democracy than fully censoring the information, while avoiding the worst downsides of the latter. Also, I'm just generally skeptical that free speech on Twitter really is an existential threat to democracy (even w/o factchecks), and arguably the risk of abuse-of-power by the censors is an even greater threat to democracy, but it's such a speculative concern that it seems difficult to offer decisive counter-arguments.

Expand full comment

I actually think we should be going in the opposite direction you're suggesting and go after the radicals who promoted this garbage in the first place and remove them from all positions of power and influence as well.

The reality is that the whole "The power structure is run by a bunch of racist white men" narrative is just a reformulation of "The power structure is run by a bunch of rich Jewish moneylenders".

The purpose of this is to undermine trust in actual experts and corrupt institutions, which is why these groups have been promoted by Russia and other bad actors.

The fact that there are so many of these people just shows that this propaganda campaign has gotten far along. It needs to be reversed and these people need to be removed from power.

Go after the leaders bilking money out of people, go after the people who have foreign contacts with Russia/China/gangs, and use it as a means of systematically destroying these groups and removing their membership from power.

This sort of thing is what we did to the communists and to the Klan, and it worked. You just hurt them, make it impossible for them to hold any position of power or influence, and just make them be wretched outcasts from decent society and economically ruin them.

This is how you destroy these organizations and make people not want to be associated with them.

Expand full comment

This seems a little too strong. Even the most committed democrats already concede that democracy needs to be a little bit destroyed to function well.

Maybe democracy has a hormetic aspect, and some information filters (which social media is said to have eroded) are just as necessary to keep it in the optimal zone as, say, the American electoral college, or senators having to be 30 years old, etc.

Expand full comment

Agree. That's "defensive democracy", although I think that "democracy is destroyed" only if your original definition of democracy was something like pure majoritarianism. Otherwise, it can be considered preserved...

Expand full comment

Yeah, the cynic in me thinks that all of the crying around “democracy being destroyed” by pundits and politicians is less about actually preserving the limited democratic system we have and more about laying the rhetorical groundwork for a new, more advantageous-for-the-party-that-idealizes-it version of “democracy.”

On the Democratic side, it’s about abolishing the electoral college, statehood for DC and Puerto Rico, etc.

And on the Republican side, sadly (I say this as a conservative), it seems to be about keeping the person of Donald Trump in power...

Expand full comment

In addition to the thoughtful replies already here, I think one central part of the honest mainstream response would be: well, we’re still not advocating GOVERNMENT censorship. It’s just that, for all previous American history, you had either newspaper editors or radio or TV broadcasters acting as the unofficial censors of what could easily be disseminated to a mass audience. And there were problems with that, but it never led to the election of a Trump — a candidate who more-or-less explicitly runs against the entire Constitutional system and wins, the exact failure mode that the Founders obsessed about and designed the entire system to try to make less likely. Ergo, we’ve learned empirically that we need *someone* in Silicon Valley to play the role that newspaper editors and TV broadcasters used to.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

As far as the defense that this is non-governmental censorship, free speech defenders have long recognized substantial threat from private restrictions on freedom, e.g. Mill wrote that "Protection, therefore, against the tyranny of the magistrate is not enough: there needs protection also against the tyranny of the prevailing opinion and feeling; against the tendency of society to impose, by other means than civil penalties, its own ideas and practices as rules of conduct on those who dissent from them"

Also, the distinction between private and public censorship has eroded in the social media context, as the government pressures companies to censor vaguely defined "misinformation" and "hate" speech, while dangling the threat of removing section 230 protections, which would likely drive them out of business. There is a legitimate threat that we could gradually shift towards a scenario with de facto government censorship, mediated by increasingly influential social media companies, circumventing the 1st amendment.

You argue that we have always had some degree of private sector censorship, which is true, but I think this misses key factors that differ in the modern context; in particular, it is much more difficult for conservatives to create competing platforms in opposition to the current social media oligopoly, due to network effects, which were much less of an issue in traditional media. So in the pre-social-media era there was less of a barrier to creating comparable competing conservative sources, from the WSJ, to National Review, to Firing Line, to conservative talk radio and Fox News. And in the broadcast context, where there were some legitimate free speech concerns, regulations were passed to combat these risks, e.g. equal time rules, the fairness doctrine, etc. As a point of comparison, there was no significant media effort to censor Nixon's ability to speak after Watergate, whereas Trump was perma-banned from Twitter on dubious grounds despite being broadly popular with mainstream Republicans (not that this is the worst example of Twitter censorship).

Earlier you raised the example of Curtis Yarvin's authoritarianism, apparently as evidence for an anti-Democratic Republican agenda, but he is a fringe figure and hardly relevant to understanding the goals of mainstream Republicans. Also, the notion that Trump ran against the entire constitutional system would seem to be "misinformation" as far as I understand, though I'm sympathetic with the idea that his later stolen-election lies have undermined democracy to some degree, just as Democrats undermine liberal democracy with calls to pack the courts and restrict free speech, or with their misleading and over-the-top claims about the threat from white supremacy, "whiteness", or an exaggerated epidemic of racist police violence, misinformation which has spurred riots, exacerbated racial tensions and undermined our social fabric. We've seen how this kind of left-wing authoritarianism can lead to the collapse of democracy (e.g. Venezuela), and the best protection is surely not the current social media approach of offering nearly free rein to left-wing authoritarians, while selectively blocking conservative opposing view-points.

Anyway, you raise some interesting points, and I'd be curious to read any further deep dives from you or ACX on this issue.

Expand full comment

This comment is my favorite thing you’ve ever written.

Expand full comment

A lot of the value of democracy is that it restrains and balances power. I think the argument in your second paragraph falls apart if you state it in a less exaggerated way. I believe it would be better for our democracy if twitter declined to host election conspiracy theories, but I only believe this because Twitter's power is very limited. There are other sites. If Twitter goes overboard with censorship there'll be a backlash and it will lose market share. The fact that Twitter does not have the power to put people in jail, and is instead constrained by market forces and influenced by popular opinion, does actually matter. Arguing over social media moderation is part of the push and pull of our democracy; it is not outside it.

Expand full comment

Some Germans use Aaronson's argument to argue for censorship, only they claim to fight Nazism, not safeguard American democracy. Goebbels supposedly once said that granting the Nazi party freedom of speech was mere stupidity by its opponents, concluding that they had no reason to complain after it was taken away from them. This factoid is used in support of the view that Nazis should be banned whenever possible.

Limiting freedom of speech may prevent Nazis from doing the same later, but the word "Nazi" is about as ill-defined as "fascism". (Also, successfully labeling someone a Nazi is much easier if the target is of low social standing.) Similarly, the concept of protecting democracy can be used to fight just about any policy one disagrees with, e.g. on transgender legislation.

Trump's intention to extend his presidency shows that he should have never been a candidate for it. But how is it chained to his political ideas? His ongoing appeal seems to be based on his unfortunate but popular character traits, not accomplishments in office.

Expand full comment

Democracy only works within a certain Overton window, and we should work to maintain our society within that Overton window so that democracy can function.

As such, it is in fact entirely rational to try and support that Overton Window by suppressing those outside of it. If it moves, it needs to be an intelligent movement, not a movement because Russians are spreading misinformation on Twitter.

People rely on experts to form meaningful opinions about reality; judging by literacy data, only about 15% or so of the population is actually capable of forming truly informed opinions, but over half are capable of being educated on what informed opinions are and are therefore fair game for persuasion.

As such, people who try to destroy the system by which the public is informed of reality are in fact outright dangerous to society because they are harming society by preventing the majority of the population from forming educated opinions.

Having some sort of "safe space" where the actual smart people can discuss more out-there ideas is useful, but it isn't something that is going to be a great money-making platform. If you wanted something like this to exist, you'd have to do something to hide it - like, for instance, use very sophisticated language to make it impenetrable to the public, and make it deathly boring for normal people.

This is basically what Academia is supposed to be.

Expand full comment

That doesn't really seem like a mainstream theory, if by that you mean some majority swathe of Americans. I can readily see it among young urban techies, but that's a rather small fraction of Americans per se. I think if the God Emperor were to say abruptly "OK, that's it! All social media sites and interaction are hereby ended! The Internet can only be used for ordering shit from Amazon, filing your taxes, streaming music and/or movies, sending e-mail to your grandkids, and looking up alternate medicine theories with which to piss off your GP" then about 200 million Americans, plus or minus a bit, would just shrug their shoulders and say um well OK whatever.

I mean, I would. I'm mildly amused by writing on blogs (obviously), but I've never used Twitter, Facebook, Instagram, reddit, or whatever else gets people swearing at each other, and if the whole kit and caboodle vanished in a puff of greasy smoke tomorrow and simultaneously the price of gas went down by $1 a gallon, I'd remember it as a great day (because of the gas).

Expand full comment

Right, by “the mainstream,” I meant the NYT, WaPo, Slate, New Republic, universities, foundations—the institutions that were once mainstream and still regard themselves that way, even as their actual constituency has shrunk to urban enclaves. From the mainstream perspective, this provides the same occasion for existential panic as the Taliban retaking all of Afghanistan except Kabul, with enthusiastic support from the non-urban populace. In such a situation, which is more “democratic”: to surrender to the Taliban, or to continue to fight for “democracy” as you’d previously understood it?

Expand full comment

Ah I see, sorry. Yes, I agree with you on that. And as their constituencies shrink, they become naturally more biased towards the desires of whoever is left, their core. As the puddle evaporates, the salt (or whatever) gets more concentrated.

Expand full comment

> As the puddle evaporates, the salt (or whatever) gets more concentrated.

Yep, "Evaporative Cooling of Group Beliefs" describes this dynamic.

https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporative-cooling-of-group-beliefs

> Evaporative cooling sets up a potential energy barrier around a collection of hot atoms. Thermal energy is essentially statistical in nature—not all atoms are moving at the exact same speed. The kinetic energy of any given atom varies as the atoms collide with each other. If you set up a potential energy barrier that’s just a little higher than the average thermal energy, the workings of chance will give an occasional atom a kinetic energy high enough to escape the trap. When an unusually fast atom escapes, it takes with it an unusually large amount of kinetic energy, and the average energy decreases. The group becomes substantially cooler than the potential energy barrier around it.

> In Festinger, Riecken, and Schachter’s classic When Prophecy Fails, one of the cult members walked out the door immediately after the flying saucer failed to land. Who gets fed up and leaves first? An average cult member? Or a relatively skeptical member, who previously might have been acting as a voice of moderation, a brake on the more fanatic members?

> After the members with the highest kinetic energy escape, the remaining discussions will be between the extreme fanatics on one end and the slightly less extreme fanatics on the other end, with the group consensus somewhere in the “middle.”

> And what would be the analogy to collapsing to form a Bose-Einstein condensate? Well, there’s no real need to stretch the analogy that far. But you may recall that I used a fission chain reaction analogy for the affective death spiral; when a group ejects all its voices of moderation, then all the people encouraging each other, and suppressing dissents, may internally increase in average fanaticism.

Expand full comment
Nov 4, 2022·edited Nov 4, 2022

> This is an ugly, cynical, and depressing theory -- there are excellent reasons why it's almost never stated so openly!

Mainly, it undermines liberal democracy itself. "We have to protect our democracy, because our opponents are evil and if they win then they won't allow us to win again - so we must not allow them to win."

In any case, if the supposedly liberal side were actually worried about autocracy, they'd be making the system more democratic (for example, moving towards 'liquid democracy'). Left as it is, the system clearly just loses legitimacy over time. Maybe crap representative democracy simply can't work in a world with the Internet.

Frankly, it seems to me that the US left is currently more authoritarian than the US right. While the latter might bullshit about vote results, I don't think a win would enable them to hold power indefinitely. They'd have to somehow deal with the "permanent bureaucracy".

Expand full comment

This seems to be a variation of "If people vote for politicians not of the Democratic Party, then that's the end of Democracy!" which has been going around. The irony is pretty thick there, objecting to Democratic results by casting them as the end of Democracy, but I think you've captured the current "mainstream" gestalt correctly.

In contrast, believing the idea is completely wrong, I'm happy to bet anyone $1,000 that if Republicans control the House, Senate, and Presidency in the 2024 election, the normal elections in 2026 and 2028 will continue as usual, and thus "Democracy" will survive that event.

Expand full comment

I'd be open to taking the other side of that bet with you. But can we operationalize it to: P = "Republicans control the House, Senate, and Presidency in 2024," Q = "mainstream center-left institutions in America (NYT, WaPo, Democratic Party, etc.) all accept all major 2026 and 2028 election results as being legitimate, just like they reluctantly accepted the 2016 result," you win if P and Q, I win if P and not(Q)?

Expand full comment
Nov 6, 2022·edited Nov 6, 2022

The problem I see with your Q is that it's smuggling in the added risk that your identified mainstream center-left institutions simply shift farther along the partisan scale to the left as the more center part of their audience evaporates over time. I.e. having predicted an occurrence, they decide to pretend it's occurred.

I'd be open to a Q constructed more objectively, say, the number of Democratic Party registered voters who cast a ballot which is counted as part of the totals in the 2026 and 2028 election doesn't substantially change.

But perhaps if you define what you mean by "the end of Democracy" more specifically, that would lead to a better empirical measurement of whether it's actually occurred or not. My initial reaction was more along the lines that it meant the elections wouldn't even happen, or if they did, they'd be an obvious dictator-like sham with 90% votes and/or single party candidates.

Expand full comment

Back when I read Slashdot regularly (as a lurker) it was vanishingly rare to see anything get outright deleted, and incredibly common to see posts get modded down to -1 where you couldn't see them in the default view. Sometimes I looked beneath the filter, and almost invariably agreed with the mods that these posts were not worth reading.

Of course, that was a different world from the one we live in now: pre-Great Awokening. It's chilling to look back on old threads and realize how offensive a lot of upmodded content was - and remember how I didn't find it offensive at the time.

Expand full comment

Chilling? You’re more scared of bad ideas now? I’d say that’s an argument against censorship. There’s value in knowing how other people think. There’s value in not being offended by every stupid idea you come across.

Expand full comment

I guess nasty misogynistic jokes count as "ideas" but I don't think anything of value is being lost by getting rid of them.

Expand full comment

A sense of humor? A sense of proportion? The useful ability to distinguish between "misogynistic" in the sense of Jack the Ripper and in the sense of Ralph Kramden?

Expand full comment

Why make people opt in, though? Why not allow people to block users they don't want to see? That's the best option; maximum potential for communication, and shelter for snowflakes.

Specifically, in the twitter context, unless a person takes their account private/followers only, I think blocked accounts should still be able to view and respond to the blocking party; just because the account owner wants to block someone should not force that choice on others.

Expand full comment
author

I think the main reason is that if there are ten million trolls who will respond with expletives and insults to anything I post, I don't want to have to block all of them individually. I may want to just come on the site one day and be pretty confident that none of my interactions will involve these people.

Expand full comment

Perhaps you misunderstand ... I'm not saying you have to block them all individually - even though that's an additional good tool. I'm saying the platforms could do their ratings and let people opt in to blocking various 'offensiveness levels' ... but the blocks should be off by default. Maximum communication potential, with plenty of choices for turning down the noise ...
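To make that concrete, here's a minimal sketch of the off-by-default scheme I mean (all names and the 0-10 rating scale are hypothetical, purely for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    offensiveness: int  # 0 = innocuous .. 10 = maximally offensive, rated by the platform

@dataclass
class Settings:
    max_offensiveness: int = 10  # default: block nothing at all
    blocked_authors: set = field(default_factory=set)  # individual blocks remain an extra tool

def visible_feed(posts, settings):
    """Return only the posts this user has chosen to see."""
    return [p for p in posts
            if p.offensiveness <= settings.max_offensiveness
            and p.author not in settings.blocked_authors]
```

Anyone who wants less noise lowers max_offensiveness; everyone else keeps maximum communication.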

Expand full comment
author

I see. In that case I think we somewhat agree, except that I'm trying to make a minimalist case here that's actually a Pareto improvement, even to the point where nobody who doesn't want to has to click an extra box. I agree that there's room for debate on whether the box should by default be on vs. off, as long as I get the point across that having the box at all is (potentially) a Pareto improvement.

Expand full comment

👍 I appreciate your thinking on this; you're highlighting some issues that require a bit more empathy than I'm instinctually willing to grant to the people freaking out most about it.

Expand full comment

One sufficiently bad interaction can have the weight of thousands of good ones, and by the time you know you want to block someone, the damage has already been done. In a fully "opt-in" system, a few dedicated bad apples are enough to make interacting with the system at all a minefield for everyone.

Expand full comment

1. A few weeks before the 2020 election, Twitter prevented users from sharing a political article from the New York Post. People who wanted to see it could not. Was this action moderation or censorship?

2. On many occasions, federal agents have asked social networks to ban certain arguments related to COVID-19, and networks have done so. Is this moderation or censorship?

Expand full comment
author

I think by the description above both of those are clearly censorship, is there some other side to this that I'm missing?

Expand full comment

Ok, good. I agree. Twitter sometimes serves a box that says “Show additional replies, including those that may contain offensive content.” It's poorly executed, but this seems similar to your proposal.

Expand full comment

This seems like such a no-brainer to me. I hope your platform can incept it into the public conversation. Nobody would even be mad if Twitter did this. My personal preference is that the users themselves have the power to form polities and do their own moderation, but this is at least an easy step in that direction. The reason I think it has to go to the users themselves is that, if you don't, you're always going to have a really small body trying to police a much larger one (way beyond what seems reasonable), or you're going to have to train some neural net or something on the moderation. In either case people will still chafe, because there is no adjudication.

My twitter product roadmap, in case there is anything interesting here, is around the idea of vesting the users themselves with the rights of a kind of digital citizenship.

https://extelligence.substack.com/p/my-twitter-product-roadmap

Expand full comment

Hacker News has a system like this, that uses user downvotes as moderation -- first your post starts to gray out as it goes negative, and finally it becomes "dead" and not shown on the comments page. But there's a "showdead" option in the settings, and it really doesn't feel like censorship after that because you can see everything if you want to. Plus, there is a "vouch" option so users in good standing can help rescue things that were unfairly moderated.
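For anyone unfamiliar, a rough sketch of that flow (simplified, with an invented threshold; not HN's actual code):

```python
DEAD_THRESHOLD = -4  # illustrative cutoff only

def render_state(score, vouched, showdead):
    """How a comment displays, given its vote score and the viewer's settings."""
    if score <= DEAD_THRESHOLD and not vouched:
        # removed from the default view, but still one settings toggle away,
        # which is why it doesn't feel like censorship
        return "shown-as-[dead]" if showdead else "hidden"
    if score < 0:
        return "grayed-out"  # fades progressively as the score goes negative
    return "normal"
```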

Expand full comment

>If you wanted to get fancy, you could have a bunch of filters - harassing content, sexually explicit content, conspiracy theories - and let people toggle which ones they wanted to see vs. avoid.

The problem is that all of those categories are EXTREMELY subjective.

1. Who, precisely, gets to decide if a particular piece of content is "harassing"? A majority of users of the site? A majority of the employees of the federal government agency that is covertly meeting with the leadership of the site to determine policy?

2. Sexually explicit is straightforward enough, in the sense that I know it when I see it but cannot describe it.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

Our research is exploring ways to put your argument into practice. Squadbox (https://homes.cs.washington.edu/~axz/squadbox.html , https://github.com/amyxzhang/squadbox ) is a platform that lets every individual email recipient decide what kind of email is harassment, and rely on their friends to block (only) that. Trustnet (http://trustnet.csail.mit.edu/about , https://people.csail.mit.edu/farnazj/pdfs/Leveraging_Structured_Trusted_Peer_Assessments_CSCW_22.pdf) is a platform and browser extension ( https://chrome.google.com/webstore/detail/trustnet/nphapibbiamgbhamgmfgdeiiekddoejo ) that lets every individual (i) decide who they trust, (ii) assess information for accuracy, and (iii) block information that has been assessed as inaccurate by sources they trust. Both tools give the user access to useful content flags but let the user decide for themselves what to do about the flags.
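For a sense of the mechanics, here is a toy sketch of the Trustnet-style filtering logic (invented function names; not the project's actual API):

```python
def trusted_accuracy(post_id, my_trusted, assessments):
    """assessments: iterable of (post_id, assessor, is_accurate) tuples."""
    votes = [ok for pid, who, ok in assessments
             if pid == post_id and who in my_trusted]
    if not votes:
        return None  # nobody this user trusts has weighed in
    return sum(votes) / len(votes)  # fraction of trusted assessors calling it accurate

def keep(post_id, my_trusted, assessments, threshold=0.5):
    """A post stays visible unless sources this user trusts call it inaccurate."""
    score = trusted_accuracy(post_id, my_trusted, assessments)
    return score is None or score >= threshold
```

The key property is that the trust list and the threshold belong to the individual reader, not to the platform.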

Expand full comment

In a similar vein, at the Initiative for Digital Public Infrastructure at UMass Amherst, we are developing an update to our existing social media aggregator called Gobo. One of the intended features of the new version is "Third Party Scoring Services", services that allow users to send them posts and get back scores which can be used by our system for sorting and filtering. This could be something that looks at trustworthiness of media or fact checking, or this could be some machine learning service that determines whether a posted image contains a dog.

The idea is twofold:

1) Allow users to aggregate their social media and thereby use multiple social networks within one client

2) Allow users to have more control over how their feed is filtered and sorted by specifying or modifying the features that are used to score posts

And my research goal is to create a tool to allow users to audit the performance of whatever third party scoring services they are using as well as the algorithm being used on the site.
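As a sketch of how those pieces might plug together (the shape here is hypothetical, not Gobo's actual interfaces):

```python
def score_post(post, enabled_scorers):
    """Each scorer is a user-enabled callable mapping a post to a float in [0, 1]."""
    return {name: scorer(post) for name, scorer in enabled_scorers.items()}

def build_feed(posts, enabled_scorers, rules):
    """Filter and sort an aggregated feed using user-chosen scorers and rules."""
    feed = []
    for post in posts:
        scores = score_post(post, enabled_scorers)
        if all(rule(scores) for rule in rules):  # user-defined filters
            feed.append((scores, post))
    # user-defined ordering, e.g. most trustworthy first
    feed.sort(key=lambda item: item[0].get("trustworthiness", 0.0), reverse=True)
    return [post for _, post in feed]

# Example rule: hide posts a fact-checking scorer rates below 0.2;
# the user can remove or change this rule at any time.
rules = [lambda scores: scores.get("fact_check", 1.0) >= 0.2]
```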

Expand full comment

Scott, what you're calling "moderation" simply isn't what anyone in the history of social media platforms has called "moderation." You're using the word in a pretty idiosyncratic way.

Which doesn't mean that I don't like it! I think some version of this could be the best solution.

But what you're talking about is a capability for "fine-grained muting," not "moderation." The argument you're actually making is that, with sufficiently fine-grained muting, there is no need for censorship.

Here's the thing though. If you consider the predominant arguments in the mainstream media about, e.g., the impact of social media on the 2016 election, it's quite clear that the force of those arguments is about the impact even on those who are *willing* to see certain content, not simply on the experience of those who *aren't*. This is the first and third of your (rejected) arguments for censorship, even if American journalists naturally don't use the c-word.

Expand full comment

I think this distinction misses a layer. There are many forms of speech that I think ought to be permitted legally, but ought to be banned by social media.

A good example is misgendering. I think every social media site ought to ruthlessly ban anyone who engages in misgendering. But I don't think governments ought to ban that speech. If people want to privately email each other transphobic stuff, I think banning that would be overreach. But it should not be allowed to exist on a site viewable to the public.

Social media, to me, is not base level speech. It is "polite society." If I want to scream slurs, that ought to be allowed legally, but it ought to leave me disinvited to any dinner parties. And IMO, social media ought to be treated as analogous to the dinner party.

It takes a lot of resources and hard work to make social media available to me. Posting ought to be a privilege I earn through good behavior, not a fundamental right. If a company deploys their resources to allow speech to be publicly visible on their platform, they are in fact advocating that speech, and it is both permissible and necessary for them to filter out content they believe to be morally wrong. Failure to do so is dereliction of duty. If you want to be a silent content delivery system, you shouldn't make that content available to the public - it should be like email or SMS where the content is only visible to the specific intended recipient(s).

Expand full comment
founding

So you wouldn't have allowed the Skokie march?

Expand full comment

I think my principled position is that the march should have been permitted so long as it stuck to public infrastructure, but it would also be entirely legitimate to boycott out of existence any company that chose to employ a documented marcher.

Expand full comment

Should it have been legitimate to boycott out of existence any company that chose to employ a documented homosexual, back in the day when homosexual acts were still illegal and society deemed holders of such views to be dangerous sick perverts?

Expand full comment

"boycott out of existence any company that chose to employ a documented marcher"

That's an awful lot of collateral damage to punish one person for having the wrong opinion. And the generalized meta principle of "boycott out of existence any company that chooses to employ someone who has an offensive opinion" doesn't fail gracefully in cases where you are wrong about the object level of which opinions should be considered offensive.

Signal-boosting this related post: https://slatestarcodex.com/2014/02/23/in-favor-of-niceness-community-and-civilization/

Expand full comment

People should be free to define their own identity and others should feel free to negotiate whether they want to accept it as it is presented. If you don't need validation, then you don't care. And if you do need validation, then you have to accept that not everyone will see the world with you in it as you see it yourself. I'm not defending bullying or deliberate attacks of any kind, but misgendering could happen by accident or through cultural misalignment (e.g. not understanding the gender of a name in another language). If someone decided to exist as if we lived in the 1600s, and got upset each time new technology was flashed in front of their eyes, they would have to carefully negotiate their interactions or risk constant acts of invalidation. Or alternatively, they could project their desired self and embrace the societal response.

Expand full comment

> A good example is misgendering. I think every social media site ought to ruthlessly ban anyone who engages in misgendering. But I don't think governments ought to ban that speech.

I genuinely don't understand this. I believe social media platforms should allow users to post anything that is legal.

> If people want to privately email each other transphobic stuff, I think banning that would be overreach. But it should not be allowed to exist on a site viewable to the public.

Ok, I'm going to post transphobic content on my private newsletter with 9 billion subscribers. Is that allowed? If so, why do you set the constraint as "must be non-discoverable by people who fail to opt in"?

> If a company deploys their resources to allow speech to be publicly visible on their platform, they are in fact advocating that speech ... Failure to do so is dereliction of duty.

I don't know how anyone could possibly believe this. You basically believe that social media shouldn't exist. Social media websites do not, and cannot, operate like newspaper editors.

Expand full comment

I think the dinner party metaphor is useful. The host isn't responsible for every single thing any guest says. But if I attend, and someone is loudly and clearly saying a bunch of offensive stuff, and the host obviously hears it and lets it slide, it's fair for me to assume the host is more or less okay with that offensive stuff, and I will judge the host accordingly. And if I continue to attend, I can expect other people to judge me based on my continued attendance.

Opinions may vary on what speech crosses the line - misgendering is something I picked for my own views. But if you continue to attend dinner parties with people who loudly proclaim certain viewpoints, you should expect to be judged for it, and it shouldn't be a surprise if you are penalized for it in other parts of life.

I do think actual "public squares" ought to be content-neutral. But I think that ought to apply to ISPs and hosting sites, not social media sites.

Also, I don't think social media should be forced by law to remove content. But I think applying social pressure is legitimate speech in its own right, and there is nothing wrong with a social media site deciding to remove content in order to protect its own reputation, just as a dinner host may ask a disruptive guest to leave.

Expand full comment

I will allow that if the point of the party is to allow for controversial speech, it's okay for everyone to agree to be chill about it. But if you don't want your guests to be judged on the things they say, you shouldn't publish the transcripts of the conversation as a public record anyone can go read, and you should carefully vet the guest list, not throw it open to anyone who shows up.

Expand full comment

> But if you don't want your guests to be judged on the things they say

I don't? I think it's perfectly fine for people to be judged on what they say...

Expand full comment

The thing is, companies like Reddit and Twitter are more comparable to a national government than to a dinner party host. They have more users than a lot of countries have citizens. A very large portion of all worldwide public discourse is controlled by a handful of Californian tech companies.

If you agree that the government should not censor certain views, then it would seem reasonable to say by analogous reasoning that Reddit should not censor those views either. They can put limitations on where and how people express those views -- just like, if someone is shouting anti-trans bigotry (or even socially-accepted liberal views for that matter) at the top of their voice in a crowded public street, probably a police officer will walk up and ask them to quiet down.

But if Reddit decides to block your views site-wide, you are being censored as much as if e.g. you're an Italian citizen being censored by the Italian government. (Italy chosen as an example because its population is roughly the same size as the Reddit userbase.)

Expand full comment

The problem I have with state censorship isn't scale. The problem is that state censorship is backed up by the implicit threat of incarceration, maiming, or death if you fail to comply. As the possessor of a monopoly on violence, the state ought to be uniquely constrained in ways other institutions don't have to be, regardless of scale.

If you're banned from Twitter, the worst thing that happens is you have to log off and take a nice walk outside. I don't think there needs to be a presumptive right to post.

Expand full comment

For some people, posting on Twitter is more than an idle timewaster -- if you're in the marketing / freelance journalist / influencer sphere, it can be an important part of your career.

Would it be OK if the government promised not to incarcerate or maim anybody for voicing the wrong views, but merely used their power to prevent you getting a job? Something like China's "social credit" system, which AFAIK stops short of actually imprisoning people when their credit score gets too low, but which can definitely make them into second-class citizens in all kinds of ways?

And of course there's the argument that freedom of speech is about the speech, not about the rights of the person doing the speaking. You can't find out the truth if one side of the argument is not allowed to be brought up for consideration.

There was a time when the word "misgendering" did not exist and the idea that gender can be separate from biology was considered absurd; the reason we are no longer in that world is that the people who disagreed with it had the right and the ability to state their case. Surely the same thing will happen in the future to some viewpoints we consider absurd and offensive today. Unless we decide that we already know all the truths worth knowing, and nobody should be allowed to publicly disagree with the consensus again.

Expand full comment

Nah. You are the one who should leave the dinner party if you are offended.

Expand full comment

"But if you continue to attend dinner parties with people who loudly proclaim certain viewpoints, you should expect to be judged for it, and it shouldn't be a surprise if you are penalized for it in other parts of life."

I'm a religious believer. If I attend dinner parties where some person or persons loudly and clearly say a bunch of offensive stuff about religion, and the host lets it slide, should I judge the host? The other guests? Myself, for continuing to go to dinner parties held by that host?

You've got a set of things you clearly believe to be "bad, wrong, false, untrue, and evil" which you want to be discouraged. But someone else's set of "bad, wrong, false, untrue and evil" may contain things you think are "good, true, and right". Do I judge you for attending parties where "trans rights are human rights" is an acceptable statement to voice? Should you expect to be judged and penalised for doing that, and is that okay?

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

Yes, I fully expect to be judged on my actions. It would not be "okay" in the sense that you are wrong on the object level and I am right, but *if* you were right on the object level then judging and penalizing me would be appropriate.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

The entire argument around misgendering is that one side says "There are two sexes, based on biology, and gender arises out of that. You can call yourself a woman, but if you were born a man, you're a man, and insisting that I refer to this person as 'she' when my own two eyes tell me that's a 'he' is denying reality" and the other side says "I am a woman, I was always a woman, if I have a beard and a penis I am still a woman".

So depending on which side you take, what is misgendering? Calling a man "she" and "her" is misgendering. Are you cracking down on that? Or do you instead mean "I accept the trans arguments, so I define misgendering as calling a trans woman 'he/him'"?

I think we all know that what is meant by "social media ought to ruthlessly ban anyone who engages in misgendering" is "ban anyone who refuses to say the line that this clearly male person is a woman"

https://assets.bwbx.io/images/users/iqjWHBFdfxIU/ipAye8NEKS.4/v1/1200x-1.jpg

Expand full comment

I think the issue is more nuanced than that. That is to say, yes, "misgendering" currently does mean exactly what you say; however, there is discussion to be had about gender as it applies to freedom of personal expression.

I feel that, in an ideal world, if I were to experience a strong preference for using "bunself" pronouns and identifying as a bunny otherkin, people would respect that to the best of their ability; certainly, the same would apply to identifying as whatever gender I chose. Identifying as a woman (while being biologically male) would be treated on par with getting an Elvis tattoo: "weird flex but ok". However, in the same ideal world, I could not retaliate against anyone who told me, "no dude, you look like a fat human male, not a bunny". I could make a personal effort to disassociate from such people, but I could not cause them to lose their jobs or be put in jail.

Now, obviously, ideal worlds do not and cannot exist; still, I think it's a decent goal to strive towards.

Expand full comment

One quick thought about misgendering:

I used to think that it was a trivial thing. It's just a selection of words used to describe a person. The words may betray an attitude that disagrees with the social expression of the person who is being described. But words are not violence, they are descriptors.

Then I remembered a lot of outrage directed at a public figure some years back. Often, that outrage took the form of statements that were approximately "This Person is not a Real Woman!" [1]

This particular public figure presented as female. However, certain members of the news-and-commentary world kept on questioning whether this public figure was a Real Woman.

Were they misgendering this person? Was this form of discourse a vile abomination, or just edgy?

I didn't know the concept of misgendering at the time, but I did know that this kind of language felt like an attack on the person. It felt wrong, and evil.

This set of events happened in 2008, so misgendering wasn't a thing on most people's list of concerns.

If anyone wants to discuss this set of events in the modern web, can it be done without running afoul of rules about misgendering?

---------------

[1] The public figure in question was the Governor of Alaska, and had been selected as a candidate for Vice President of the United States.

If you recognize the person I'm talking about, does that change your opinion of the sequence of events?

Expand full comment

As a trans person - come on, we aren't *that* fragile. This level of red line is maybe appropriate for much-stronger slurs or slur-like actions, "fighting words" which really do have a high signal-to-noise ratio in communicating ill intent (and possibly incipient potential for actual harm). But a minor social faux pas like misgendering...something which happens on the regular in real life, from random strangers who don't follow elite progressive speech norms of prefacing every interaction with What Are Your Pronouns? That's not so simple, and requires more context.

It's one thing if someone's preferred gender is loudly and visibly advertised, such that misgendering them is an intentional act which necessarily communicates disagreement. (I think this is part of the performative/self-defensive act of populating one's bios with pronouns.)* Quite another for ambiguous edge cases, where even well-meaning people can make mistakes. For most of us that "pass" well, that tends to be the optimistic target area...there are sharply diminishing returns to passability past that zone, largely constrained by genetics and money. So even among the upper crust of "stealth"-capable trans people, we rely on a bit of good-faith give-and-take, because the little hints and cues will almost always be there. This goes double for anyone remotely famous - far beyond changing one's individual documentation, there's a trail of historical evidence referring to a different person. So everyone is asked to do a polite two-step about any previous mentions of e.g. "Bruce" Jenner, rather than a revisionist scouring of all such extant references to amend them to "Caitlyn". (Which is the logical conclusion of all-inclusive protections..."Right To Forget" on steroids.)

Note that for __individual trans users and the ally-minded__, if they'd prefer dinner parties brooking no such dissent along these lines, I agree that's a valuable good. Huge net utility gain! But, like Scott, I'd prefer a filter-archipelago type solution where people can create their own sub-dinner parties...rather than a one-party-fits-all model, with top-down arbitration on what is and isn't allowed at The Communal Dinner Party. Because that sort of "reverse/shadow social engineering" just doesn't seem to be effective, and it certainly doesn't Do The Work of changing hearts and minds. It also leads to an uncomfortable dependence on corporations/state power/whatever Authority to enforce a socially-welcoming environment, which can be capriciously revoked at any time. Illusory victory through the eradication of all transphobia in Official Spaces is a dangerous failure mode when it isn't backed by actual bottom-up majority support.

*In that kind of clear-cut case, I think there are *still* concerns that expressing such disagreement ought to be allowed in at least its milder forms...putting thumbs on scales of ongoing-hotly-contested issues is, well, heavy-handed and comes with lotsa knock-on effects. Especially when it goes against the prevailing median opinion! "Republicans use Twitter too", and all that. Matt Yglesias mentions this re: Twitter trans policy specifically in his recent post about New Boss Musk: https://www.slowboring.com/i/81292136/content-moderation-changes-will-probably-be-modest

Incidentally, I find that Babylon Bee headline pretty funny myself, and would prefer a world where it's possible to give sacred cows a lighthearted ribbing for satirical purposes. It's exactly the same way that Dave Chappelle made me sad with his anti-trans jokes - but, damn it, the man's still really funny. Tradeoffs abound in all things, and I think the enormous marshalling of activist time, energy, and money to 100% sanctify certain protected classes has gone far past Pareto-optimal (fighting the fights worth fighting, for the potential gains at stake, given inevitable backlash).

Expand full comment
founding

I think we are missing some separation between illegal content (regulated by the government and lawmakers) and legal content that the platform decides to prohibit. Child porn, financial scams, etc. are illegal by law, so the question should be what *legal* content should not be allowed. But then we are back in the grey area of what should/shouldn't be moderated.

Expand full comment

This is a little bit like saying 'sure we use child slaves to harvest 90% of our cocoa beans in Nigeria, but if that really bugs you then we'll send you personally the chocolate harvested in Venezuela, where we don't use actual slaves. So you should now have no problem buying from us and supporting our brand.'

You say:

>Moderation is the normal business activity of ensuring that your customers like using your product.

But many customers don't *just* care about the product as they experience it, they also care about whether they are supporting a company that does harm to the world, or does good in the world. This preference may be weakly expressed in cases where that harm is carefully obscured by being overseas and happening to people without good publicists; but it certainly seems to be a very strong preference for many customers of social media websites.

If customers don't like using a product that they believe is harming the world and society, then mitigating what they see as harmful about it is ensuring they like the product, which is your definition of moderation.

You may well say 'that's a stupid/invalid preference for those customers to have,' but the great thing about free markets is that you don't get to decide what preferences customers have, customers get to express their own actual preferences and spend their money/time accordingly.

So I really don't think 'moderation is doing what your customers want' is going to gain you any ground here. Many customers are very clear about wanting the type of moderation you are trying to call censorship, and would not generally be happy using a product that works the way you outline.

If that weren't the case, companies could make a lot of money by moderating less. I don't think every social media company in the world is too stupid to think of the filtering idea; I think they know their customers don't want that, and it would lose them money.

Expand full comment

Consumers' vocalized preferences don't necessarily correlate with their revealed preferences (i.e. what service they actually use), and it's the latter that corporations optimize for. People have been calling Twitter a hellsite for years, but they've been doing it... on Twitter.

If Twitter followed Scott's suggestions, they'd lose X people over moral principles, but they'd gain not only the Y people who are currently being censored off the platform but also some Z people who just want their personal experience to be better. And maybe I'm being cynical, but I think that last group is by far the largest.

Expand full comment

You're basically arguing that you know more about Twitter's business model and how it can make money than its own CEO does, and that they'd be better off hiring you to do the job instead.

Which may very well be true today, but probably wasn't true a month ago.

But either way: you're making a contingent argument about which moderation regime would happen to make them more money. That's relevant to their business model, but it's not relevant to the general moral/ideological distinction being discussed here, and it's not determinative of the general case of the business model where the answer may change platform to platform and policy to policy.

Expand full comment

Social media companies face pressure from governments, infrastructure providers, advertisers, employees ...

Expand full comment

Aside from advertisers (who are their actual customers), I doubt those actually have much impact on their moderation policies in terms of the political stuff this post is mostly aimed at.

Expand full comment

> advertisers (who are their actual customers),

I'm sure that feels clever to say, but if that's your view—users might want more freedom of speech but advertisers, the "actual customers", do not—why didn't you put that in your top level comment?

Isn't it worth distinguishing between what kind of social media platform people want to use, and what kind of platform can be funded, can get access to infrastructure ...?

> political stuff this post is mostly aimed at

This post is about the concepts of moderation and censorship, not about "political stuff".

Expand full comment

Translation: we're all busybodies, more interested in what other people are doing than what we are? Or maybe just a majority? Or maybe just an annoying but effective minority?

Expand full comment

Organizations like Twitter are more like infrastructure than they are content creators. In your analogy, they aren't the ones harvesting cocoa beans.

So what we hear is more like you saying "If any cocoa beans harvested by child slaves have ever ridden in your company's cargo ships, or passed through a particular port, then we want to shut your company and that port down!" when that company isn't responsible for the products it carries - it's just a common carrier - and the way your suggestion ends is that every fractured political disagreement group has to set up its own entire new set of infrastructure, from transportation to communication to finance, just to be able to communicate with each other.

Before you get too excited about that idea, perhaps consider that many of the ideas and philosophies you likely value were once not in power, and could thus be similarly censored.

Just because those currently in power tend to agree with you doesn't mean it will forever stay that way.

Expand full comment

A lot of platforms censor content in order to curate their customers. Consider a movie theatre that shows both pornographic and regular movies. Even if they moderate it, so that only those buying tickets to the porn are exposed to it, people would still not take their kids there because the overall vibe has shifted.

4chan might be the best example of this dynamic on the internet. There are specific boards where anything goes and if you stay away from them you can have a more sedate experience. However the mere existence of those boards drives away the normies and sets the overall tone of the site.

Expand full comment

Twitter is not like 4chan or a movie theater. It's more like the telephone network. Does the fact that some bad people use telephones drive other people away?

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

A telephone network doesn't let you listen to random people's conversations. It also doesn't record everything a person has ever said on the phone and preserve it for public comment for the rest of time. Like, a telephone seems to be the worst possible metaphor for Twitter.

Expand full comment

Yeah? Like, worse than a movie theater?

Expand full comment

At least a movie theater metaphor would acknowledge that tweets have an audience rather than a single recipient.

Expand full comment

You can think of moderation without censorship as a matter of freedom of listening or freedom of reading.

Expand full comment
founding

Generally I feel that the social value of free speech should be seen as more about the freedom to *listen* than the freedom to *speak*. The freedom to speak is something nearly everyone agrees shouldn't be entirely absolute in its reach (malware linkspam probably being near the top of the list of things that should be removed from the default view). The freedom to listen, on the other hand, *does* have genuine arguments for being absolute, and I generally agree with these arguments on the principled level if sometimes not necessarily the object level of the things in question (after all, a censorship button that says "only for use on misinformed child bombers" is still, in the end, a censorship button... the promise that it will only be used in this way is only worth the ornate embossing it is printed on, and its mere existence still serves as just as much of an excuse for governments and others to request its usage in response to surveillance).

I like this post because it cuts straight to the crux of distinguishing the amount of burden a prospective listener faces to enter the Matrix -- from a simple flip of a switch on the "moderation" near-extreme to downloading an entirely different browser on the "censorship" near-extreme. The "myth of consensual communication"-ness of web 2.0 is one of a few reasons I've found myself increasingly yelling at the Cloud in recent years, and hoping that more decentralized and privacy-preserving systems become more popular to allow people to choose the level of redaction they wish to see.

Expand full comment

When it comes to social platforms, what is also missing is the freedom to *NOT* listen to or see certain content. https://repository.law.miami.edu/fac_articles/296/

Expand full comment
founding

I generally eschew that section of the 2x2 because there are so many places (such as kids going to school) where we understandably violate it as it is, so it's difficult to hold even weakly on principle unlike the other three quadrants. That said, I do think that browsers should make it easier to make extensions that block things like the trending tab, but beyond that I'm not too concerned if a user sees something they're offended by.

Expand full comment

You may not be concerned, but if the offended user is, they ought to have a way to fix it for themselves.

Expand full comment

How is that freedom missing from social media? You can block people.

Expand full comment

two issues:

1) blocking people only works *after* they have impacted you once; when people are mobbed it's that first offense (by many people) that is the problem

2) in our research on harassment ( https://homes.cs.washington.edu/~axz/pub_details.html?id=squadbox ) we found cases where a harassment victim *needs to stay in contact with their harasser*. For example, someone co-raising a child with their harassing ex. So, blocking at the person level is not fine-grained enough.
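A sketch of what finer granularity could look like (hypothetical names; Squadbox's actual pipeline routes suspect mail to friend moderators):

```python
def deliver(message, sender, necessary_contacts, is_harassing):
    """Filter at the message level, so needed contacts aren't blocked wholesale.

    is_harassing: any classifier callable the recipient (or their friends) chooses.
    """
    if sender in necessary_contacts:
        # can't block the person outright, so hold only the abusive messages
        return "hold-for-review" if is_harassing(message) else "inbox"
    return "reject" if is_harassing(message) else "inbox"
```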

Expand full comment

I guess my question was: If you treated twitter as a public square in the first amendment sense, would it violate the freedom not to listen? I don't think the freedom not to listen implies some responsibility on the government's part to protect people against harassment, which obviously happens on public squares.

Expand full comment

In fact there are laws protecting people from harassment in public squares. https://stopstreetharassment.org/strategies/sshlaw-bestadvocacyordinance/

Expand full comment

Are these laws required by the first amendment?

Expand full comment

An effort to distinguish moderation from censorship by pointing to the involvement of third parties misses that there is genuine confusion about which parties even *are* involved in any given exchange. Reputational concerns due to proximity are real; there is no cheap signal that lets you ignore that no matter how hard you might wish it to be so.

Even ignoring advertisers and the fact that if you aren't paying money you *definitely* aren't the customer, any social media host themselves has to worry about reputational concerns. The layman's publisher/platform distinction is not only legally bogus, it's a bad match to real user experience - there's a reason you avoid 8kun, dear reader, and it isn't because you've carefully siloed your impression of the site off from your impression of the conversation there. Avoiding that stink takes action on behalf of the host, and it is their prerogative to do so.

> The current level of moderation is a compromise. It makes no one happy.

The level of moderation on any given privately-owned site is entirely within the power and responsibility of that site's owners and controllers, barring some legally-regulated edge cases. If the level of moderation employed by a site owner is making that site owner unhappy, they have screwed up† as a straightforward matter of execution. If the level of moderation on someone else's site makes you unhappy, you can try and persuade the owner to adopt your position and/or you can leave.

†(Are there practical limitations on the "level of moderation"? *Absolutely* there are, and I'm sympathetic to the real resource investments required. But that's a distinct argument that has implications up and down this rhetorical branch, and it's disingenuous to specifically introduce it now.)

It is unforgivably sloppy to try to categorize "the current level of moderation" as a global phenomenon (Online? In the media? In "media"? In the English-speaking zeitgeist?) divorced from the owners of that moderation. There isn't a single example of actual bad moderators given in this post! I know there's a bias to attribute negative behavior to systems rather than individuals, but making the argument abstractly is merely trying to launder virtue out of zero data.

Expand full comment

> There isn't a single example of actual bad moderators given in this post!

Thinking about it more, the lack of examples is worse than I thought - it belies the critical state v. private actor distinction. There is *one* counterfactual example given in the post addressing a hypothetical improved version of the Chinese government, but the rest refers to the generic "social media" which is private almost by default.

My preferred definition of 'censorship' heavily leans on the fundamental differences between state and private actors, and that distinction is one of the things I think libertarianism genuinely does do better than competing philosophies. Seeing the blithe equivocation here... ugh. Not great.

Expand full comment

I agree that the opt-in scheme you describe is not censorship. If you allow the users to control what is shown, that is user-chosen curation. That's not censorship any more than it is for me to not subscribe to certain authors.

However, most social media companies do *not* have the "minimum viable product" you describe. They enforce their filters, so the curation is de facto censorship.

Free speech is not for the writer, it is for the reader. Most companies fall short of the MVP standard, and in so doing they restrict what people are able to read. That's censorship.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

There are two important reasons to do "censorship" (as you call it here).

The first, which is my primary motive for moderating ('censoring'), is that it is a key part of maintaining a culture. Yes, some content is actively harmful to very broad norms (e.g. don't goad people into suicide) and the receiver does not want to see it. But if you want to uphold a more specific culture, then often you have to disincentivize and sometimes remove content that the sender and receiver are both net positive on seeing.

If I run a forum on tennis, and I have a user who likes tennis but is also really into environmentalism, they may start posting about environmentalism in tennis, posting about which brands are made in ways that don't use fossil fuels, and other similar tips. Suppose it finds traction, with many other users discussing this too, and then starts to snowball into discussions about environmentalism in other sports, then in other hobbies, then into the broader environmentalism activism space, with people getting really into it, until a point where there's more daily discussion of environmentalism than tennis. I think it's okay for those who run the space to say "We're not against environmentalism, but this is a space about tennis." and do things like delete user accounts who show up just to talk about environmentalism, or to not count karma accrued on posts about environmentalism, or to announce a site-wide ban on environmentalism content for 3 months. Ideally, much earlier in the process than the point where the site becomes primarily environmentalism content.

This is censorship (as you define it), and it also seems necessary to me for the functioning of walled-gardens that can maintain the integrity of their focus as they grow, and as more memetically fit ideas start to find their ways into the minds of their users and potential-users. I believe one of them must be chosen ('no removing content that sender+receiver consents to' or 'functional subcultures'), and I choose the latter.

The second reason is that *99% of users don't use personalized settings*. If I recall correctly, one of the reasons that Facebook doesn't give you lots of settings for what ads to see is that they actually did that once, but nobody used it. My own experience of LessWrong is similar; we put filter-tags on the frontpage and most people do not use them (even I barely use them, and I was one of the people who thought they were a good idea). So the addition of settings doesn't change 99% of people's experience. You made the general point yourself, more eloquently than me, 13 years ago ( https://www.lesswrong.com/posts/reitXJgJXFzKpdKyd/beware-trivial-inconveniences ), using basically the same example, but to the opposite conclusion.

If key information is hidden behind settings, most of the people you hope to get that info, will not get that info. So I am not so sure that simply doing 'moderation' is so cleanly separated from doing 'censorship', and that enacting the former is not to a significant extent enacting the latter.

(This is as per your definitions, I haven't thought about whether there are other definitions that more cleanly separate them and still capture most of what we're trying to talk about with censorship and moderation.)

Expand full comment

Quibbling over Scott's terms might obscure the distinction being made. I think you are making a separate distinction, between the sort of moderation appropriate for a focused forum and Scott's suggested roll-your-own. Both of these are distinct from censoring an entire platform. Yes, moderators should be able to escort naughty users to the door in ACX, but maybe not kick them off of Twitter. I guess that depends on whether you think Trump was kicked off of Twitter because people who didn't want to see his tweets didn't know how to block them (or unfollow him?) or because they thought he should be prevented from speaking to the people who did want to hear.

Expand full comment

I wasn't thinking about Twitter when I wrote the above. I just want to defend the line of "Just because an interaction is consensual doesn't mean I am indifferent to having it in my subculture, and I may use various controls/powers to restrict or prevent things". A focused forum is the example, but I think this applies to any subculture (e.g. a local tennis club).

Is Twitter a subculture, or is it the whole culture? I think Twitter likes to think it's the whole culture, but I don't actually think it is, though it is way bigger than a single subculture (it contains 100s or 1000s of subcultures). I personally would find it easier to manage the reddit situation where lots of subreddits form and you encourage them to develop their own norms and moderation, and the site as a whole then only needs to care about very broad rules/dynamics.

Expand full comment

Twitter is much too big to curate like a forum. Every topic that is at all appropriate is appropriate there, and millions of persons discuss everything imaginable. But twitter is at least one of the most prominent places where this censorship/moderation issue has arisen, and one that was recently purchased by someone who at least claims to want it to be less intrusive. I took it to be the paradigm of what Scott was trying to address.

Expand full comment

I suggested this exact feature on reddit 8 years ago.

https://www.reddit.com/r/worldpolitics/comments/1z6kev/why_reddit_moderators_are_censoring_glenn/cfr1hol/

There's the thread if you're interested in how it was received by the users at the time. (Generally positive.)

Expand full comment

So I'm actually gonna point out a big problem with this idea.

Adjacent communities bleed into each other.

If you set up a community with a Technology section and a Cute Kitties section, it biases both of those sections a little bit towards each other. You'll end up with more catlovers in the Technology section than you would have otherwise, and more tech people in the Cat section than you would have otherwise.

If you then add a Racial Slurs section, it's going to have much the same effect.

What you're proposing here isn't adding a Racial Slurs section. It's adding a way that people can fling racial slurs *directly at cat-lovers without the cat lovers being able to see it*. That's not just an adjacent community, that's an interlaced community, where no matter how pleasant what you're looking at is, you're one button-press away from people flaming it.

Reddit, IMO, has problems due to hosting a wide variety of communities dedicated solely to hating things (no, I don't mean necessarily racial hate; /r/fuckcars counts). The people who post in these communities naturally spread out and post in other subreddits, but they (inevitably) take their personality along. By catering to these hate communities you're naturally slightly increasing the amount of hatred in other communities as well - just take a look at /r/urbanhell to see the crossover.

I do not think your idea would work out well, unfortunately. It may reduce censorship, but I think you're going to have a really hard time *building a good community* on it.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I think that this discussion would've been better if there was shared understanding about what sort of platforms could implement these changes. On one hand, there are small-ish communities that are united by shared interests, like subreddits, LW, ACX, or old forums. On the other, there's "the town square", winner-take-all monopolies like Facebook and Twitter. It's plausible that changes that could be destructively bad for the first category might improve the second one on net.

Expand full comment

I think my argument is that, if you're making these changes, you are attempting to become the town square, kind of by definition. Small-ish communities survive at least in part by their willingness to boot people who don't match the community; get rid of that willingness and it'll change.

Expand full comment

Thank you--this is really well put.

I'd also add that for most platforms, moderation is really costly. As a result, they'll prefer strategies where each moderation decision has a high impact.

Noah Smith has talked about how neo-nazi accounts mob people on twitter. Essentially, a few accounts stick around for a while, and retweet people they think should be mobbed. Then, a ton of people with burner accounts reply to the retweeted tweet with a bunch of threatening or harassing messages.

Banning the retweet accounts is a somewhat effective strategy to stop other users from experiencing harassment by neo-nazis, because if the bans come fast, it will take time for neo-nazis to make new accounts and find each other. Filing all the accounts that do this under "Level 10 Antisemitism and Racism" wouldn't accomplish that, because the neo-nazi accounts could see and talk to each other, so could coordinate the creation of new, clean accounts.

Expand full comment

I'm glad I'm not the only one uncomfortable with /r/fuckcars. A community based solely on hating something is the best possible way to breed undesirable types who will sour discussions and are generally just nuisances to be around. I think /r/fuckcars is only accepted by the Anti-Evil Operations because urban planning isn't a protected class of thought and they think cars are an acceptable target.

Expand full comment

"That it’s a social good to avert the spread of false ideas (and maybe even some true ideas that people can’t handle)."

Does anyone have an example of some true idea that people can't handle? I am suspicious of these types of claims. We have this notion that ideas are mind viruses, which makes some sense as a metaphor. But once you start taking that claim literally, you quickly get into the world of the highly speculative.

PS - I mean actual ideas, and not specific pieces of information that could be used to do harm, like how to make a nuclear weapon or the home addresses of celebrities and politicians.

Expand full comment

Bryan Caplan believes that hereditarianism about intelligence is true, but extremely often inspires its adherents to hold immoral views--to see people of lower IQ as fundamentally inferior human beings, disregard their interests, and even justify violating their rights.

My own experiences with hereditarians, and in particular HBD types--often in Scott's own comment sections--have been similar to Caplan's.

(I hasten to add that I consider Scott himself, along with Freddie DeBoer, to be one of the handful of hereditarians who very much *doesn't* manifest this tendency.)

https://www.econlib.org/archives/2017/04/iq_with_conscie.html

https://www.econlib.org/archives/2017/04/iq_with_conscie_1.html

Expand full comment

> but extremely often inspires its adherents to hold immoral views--to see people of lower IQ as fundamentally inferior human beings, disregard their interests, and even justify violating their rights.

I think it goes the other way around. I doubt many people learned about the lower IQ of a group and then started hating it.

That these beliefs are connected with racism is mostly the result of the left wing completely shutting down the idea, so that you can't express "this group of people, with lower average IQ, is disadvantaged for this reason - and so should be assisted" - which is IMO a more obvious conclusion than the "they're inferior, let's violate their rights" reasoning.

Expand full comment

I agree that there's a bit of a "community of three principled civil libertarians and a zillion witches" dynamic going on here, where only people who *already* hold fringe right-wing political views (and so have nothing further to fear from mainstream outrage) are willing to publicly profess IQ realism/HBD.

But I sadly have to disagree with you that claims of innate inferiority most obviously and naturally lead to a compassionate desire to help the afflicted group. It seems like human cultures have very frequently justified their oppressive social hierarchies on the grounds of the innate inferiority of those at the bottom, and basically *never* appealed to the innate intellectual inferiority of the lower classes to justify economic redistribution a la Freddie DeBoer.

This suggests that--while I agree it's completely *possible* to draw egalitarian political conclusions from hereditarianism--doing so is far from intuitive for human beings.

Expand full comment

There are a lot of people who mostly seem to care about what other users get to see. I think most people agree on "avoid harassment"; the real disagreement is around mis/dis/malinformation.

Expand full comment

The true problem with censorship is when it silences certain ideas. Child porn, as you mentioned, is not an idea; it's a red herring, as nobody is truly arguing in favor of allowing it. The philosophical position that no ideas should be censored has been debated for centuries, and it has a name: freedom of speech.

The problem is that today nobody really knows what freedom of speech actually is. The fact that moderation and censorship have been conflated is one problem, but so is the fact that the philosophical position has been conflated with the laws (the First Amendment). It shows when people claim that freedom of speech is a right.

Freedom of speech was meant to safeguard heliocentrism; it wasn't meant to be a right of Galileo.

Expand full comment

Always worth pointing out that the laws of most countries are not nearly as permissive when it comes to speech, and an internet platform needs to follow the laws of the countries where it is used or it will be banned from that jurisdiction. And I'm not talking about repressive regimes here: expressing and promoting Nazi ideology is literally illegal in Germany (for rather obvious reasons), so giving people the option to be Nazis has to be geofenced.

Expand full comment

One problem with user-customizable fine-grained filters on a forum like Reddit or Twitter is that everybody gets to see a different subset of the conversation.

Alice posts an argument in favor of anarcho-capitalism. Bob argues against it, but I don't see his argument because he also posts in #anchovies and I have a filter to block all people who like anchovies. Other people respond directly to Alice, but I can't make sense of what they're saying because they take it for granted that Bob's earlier point is part of the conversation. I respond to one of them without being aware that I just copied, almost word for word, something Bob wrote earlier. Etc.

(The anchovies example is silly, of course, but the idea of globally blocking users for having expressed bad opinions elsewhere is not. There are browser extensions etc. which will hide posts from users who post in fora associated with certain hot-button topics, and they will hide those users even in other fora on unrelated topics.)
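
To make the failure mode concrete, here's a toy sketch (the data model and names are hypothetical, not any real platform's API) of how two users running different global filters end up seeing incompatible versions of the same thread:

```python
# Toy model: per-user global filters fragment one thread into divergent views.

posts = [
    {"id": 1, "author": "alice", "reply_to": None, "text": "Argument for ancap..."},
    {"id": 2, "author": "bob",   "reply_to": 1,    "text": "Counterargument..."},
    {"id": 3, "author": "carol", "reply_to": 2,    "text": "Reply assuming Bob's point..."},
]

# Each user globally blocks authors based on behavior elsewhere,
# e.g. "block everyone who posts in #anchovies".
blocked_by = {
    "you":      {"bob"},
    "observer": set(),
}

def visible_thread(user):
    """The thread as this user sees it after their filters run."""
    return [p["id"] for p in posts if p["author"] not in blocked_by[user]]

print(visible_thread("you"))       # [1, 3] - post 3 answers a post you can't see
print(visible_thread("observer"))  # [1, 2, 3]
```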

Expand full comment

For very small communities—chatroom, comment sections, forum, subreddits—it might make sense for everyone to be able to see and reply to all comments. But on twitter? You're not going to see most tweets anyway, blocking is already necessary anyway, unless you want a censorship regime so strict that it entirely replaces blocking. In fact, reddit also has a mute feature, that doesn't seem much of a problem.

Expand full comment

Sounds like you want an “am I missing something here?” button on your moderation configuration. Or maybe threads that have missing pieces can put an icon in the Swiss cheese gap, so you can tell Bob said something in reply to Alice, but he is caught in your filter. That would depend on whether your filter acts on posters or individual posts, I guess. Maybe the icon could indicate whether these actual words were filtered, or the person, and allow you to peek if you want.
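
Something like this toy sketch of the "Swiss cheese gap" idea (all names and structure hypothetical): filtered posts leave a visible stub instead of vanishing, the stub says whether the poster or the specific post was filtered, and the reader can peek:

```python
# Toy model: render stubs for filtered posts instead of silently dropping them.

posts = [
    {"id": 1, "author": "alice", "text": "Original argument...",  "filtered": None},
    {"id": 2, "author": "bob",   "text": "Bob's counterpoint...", "filtered": "poster"},
    {"id": 3, "author": "carol", "text": "Reply assuming #2...",  "filtered": None},
]

def render(thread, peek=False):
    for p in thread:
        if p["filtered"] and not peek:
            # The gap is visible, the content is not.
            print(f'[#{p["id"]} hidden: {p["filtered"]} matched your filter - peek?]')
        else:
            print(f'#{p["id"]} {p["author"]}: {p["text"]}')

render(posts)             # thread with a marked gap
render(posts, peek=True)  # the reader opts in and sees everything
```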

Expand full comment

That's a perspective I probably haven't thought to apply before. But in my opinion, it comes down to authority. To wit: when does someone have the authority to tell you what to do or not do? The authority to restrict information from you is merely a derivative of that question.

The anarchist philosopher Robert Paul Wolff argued that on no grounds can someone have authority over another person, because it conflicts with personal autonomy. He had some persuasive arguments too.

But it is clearly not how most people see it. For example, responsible parents censor, not merely moderate, certain kinds of content from their children all the time. I would argue they have the authority to do so.

Back in the day, the Catholic Church used to keep an extensive list of banned books - it didn't work, since the fastest way to get people to read something is to ban it. But I see this as a parallel to the issues we face today: did they have the authority to do that? That's fuzzier.

Take the covid mandates and some of the terrible information about the pandemic provided by people with zero expertise. Do platforms have the authority to restrict people from seeing that?

It's more of a matrix: there's information you want to see that is good for you (that's easy). There's information you don't want to see that isn't good for you (that's moderation). But what about information you want to see that isn't good for you (fascist propaganda), and information you don't want to see that is actually good for you (a reasoned perspective from your political enemies)? Should platforms expose people to that information? More importantly, do they have the legitimate authority to do so, beyond "this is my platform and I make the rules"? It is an important question. Like all important questions, it has no easy answers.

Expand full comment

Another hint for telling moderation from censorship: who decides what's good for you? You? Then it's moderation - people put in place an infrastructure and spend some effort to help you pre-decide what's good for you, ideally with as much accuracy and as little effort as possible...

Somebody else decides what's good for you? It's censorship....

Basically, since adulthood (or, more precisely, since the end of childhood, which happens a little bit earlier), I have had very little sympathy for anybody pretending to know better than me what's good for me. In good faith, knowing me well, and admitting it's just advice - maybe, but even then I will be suspicious. Otherwise, no way...

Expand full comment

Yup. You are right. But only an omniscient person would know what's good for him or her in every conceivable situation. Capitalism succeeds because it allows everyone to specialize at what they are good at. Another way of putting it: it allows everyone to be deeply ignorant about almost everything except a tiny slice of the world. So the question doesn't really go away: what should be done when you want information that isn't good for you? I'm pretty sure that everyone would like to hear that the solution to their health issues is a bunch of fruit supplements and a big dollop of ice cream. There is a reason medical practitioners aren't allowed to tell you that completely untrue information, even if it would make everyone feel better. The uncomfortable truth is that the world cannot exist without censorship. What we must debate is how to go about that censorship.

Expand full comment

Well, specializing in a way that supposedly lets you know what's good for somebody else better than he does is inherently problematic. I'd argue it should not happen: even for matters complex enough that an uneducated opinion is largely useless, experts should only advise, with the final decision resting with the actual guy - the one who will bear the main consequences of the choice.

Medicine, which you also chose, is a very good example, so let's look: the patient has the final decision, once he has been informed... or at least it used to be like that... Lately, for some things, there has been a clear shift (yes, covid, I'm looking at you, but it's not the only case).

There is a very clear change in the West: from the prevalence of individual choice to the prevalence of "collective" choice. This is already something I do not like, but add to it that it's not really the collective choice (hence my quotes), but the choice of elected representatives or technocrats, which at best reflects the average choice of citizens quite indirectly.

And this holds not only for international affairs or technical matters (where referendums would probably not work) and societal matters (where referendums should be used, but are not), but also for private-life matters.

This gets closer to the Chinese way, except that the public face(s) credited with the decisions change every 4-5 years...

Expand full comment

I'm not sure I fully understand what you are getting at, but I think the gist is that the buck should stop with the person about whom the decision is being made. I still think there's a debate to be had over that, but it's beyond doubt a reasonable compromise.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

There are multiple equilibria that could arise from this solution. Whether 99% or 75% or 10% of people choose to see all-banned posts - if your social group is dominated by those choosing to see them, you'll feel compelled to join them, whether that group is your friends, your school peers, or your business arena. It's less of a consensual choice than you imply. Many would prefer a hard ban that their peers can't co-opt them out of.

Expand full comment

I get the impression this post was written for a certain person - someone who has strong free-speech beliefs and has recently come into possession of a giant moderation & censorship machine. It's quite an interesting proposal and I hope that person considers it - especially the potential for allowing 3rd parties to create moderation lists, like how spam is dealt with, which would spare the platform a lot of the ongoing effort of maintaining perfectly politically-neutral censorship over worldwide discourse.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

Are you sure that the "we just want people to have a way of protecting themselves from harassment" position actually has a sizeable amount of support, compared to the good ol' anti-free-flow-of-information position?

When you find yourself struggling to understand the motivations behind someone's actions, take a look at the consequences - especially if they continue using the same strategy, even though it doesn't seem to get them closer to their stated goals.

If there is a broad consensus that people should have more control over what kind of information they want to expose themselves to, if we are in agreement that it is each person's personal right and responsibility to make that choice, and we're willing to pay the social price for the irresponsible uses that informational self-determination allows for (not talking about the examples you gave toward the end of your post - those things, for me, fall squarely into the category of "clearly definable criminal actions that have to be illegal for everyone for the rule of law to function", and if you have a problem with that you would have to argue about your misgivings with the law), then the system you propose should be very desirable. But.

Don't a lot of people simply believe 1) that this is *not* a price worth paying, and 2) that, to begin with, there's no reason to give people "informational self-determination", insofar as experts and trustworthy institutions can do a much better job of determining by what information we should let ourselves be determined?

I agree with your proposed solution because I think of self-determination as a fundamental value for humans. For me, the argument really is as simple as that - it doesn't matter if people abuse their freedoms and the consequences are undesirable, because a world where people don't get to make their own choices for their own reasons is not a world I value, regardless of how much it would feel like a utopia if you were living in it.

But if you look at the fight against fake news, against untrustworthy sources, at no-platformings and cancellings, at all the discussions that people are deathly afraid of having take place, isn't it at least plausible that there's a much more straightforwardly anti-liberal mentality underlying all of this? That the people pushing for censorship aren't simply confused about how to get what they really want (for you and me to protect ourselves from information we think we want to be protected from), that, basically, they just don't hold personal self-determination as a fundamental value? Lots of people in big tech, journalism, academia, etc. believe that, if they can control the flow of information competently, the world will be a better place for it, so that's what they're trying to do.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I think there's a fourth "argument for censorship": the people owning the platform might have their own personal boundaries to what they're willing to host. Maybe they're Jewish and don't want anti-semitic content on their servers even if in principle they didn't think the *government* ought to ban it from *all* social media platforms. (This is of course just the "bakers don't want to bake a gay wedding cake" issue at scale.)

Users are also not the only clients social media platforms have to satisfy in market terms; they're also bound to the wishes of advertisers, who have their own preferences about what they want their ads run alongside with. e.g. Tumblr doesn't have any moral opposition to porn, but its advertising partners didn't want their ads to run next to porn, so…

Expand full comment

Scale matters. The "gay wedding cake" story would be very different if it was about a national franchise of bakeries which had a near-monopoly on wedding cakes. Or two or three such franchises, but their headquarters are all located within the same small area in Texas and their owners and senior managers all go to the same BBQs and rodeos and hunting trips.

I'm not arguing that companies shouldn't be allowed to set their own rules for their own platforms. But when a large portion of all worldwide public discourse is subject to the cultural values of a small region in California, we have a different issue than if it's just a single small-town bakery.

Expand full comment

For sure. (Though some would argue the issue there is that monopoly existing at all. Bring Back Web 1.0 grumlegrumle.) I wasn't necessarily saying #4 is something I agree with/a slam-dunk, but Scott doesn't fully agree with the three arguments he listed, either. I think #4 is about as important as those.

Expand full comment

Just to state the obvious: The moderation on ACX is a very, very good thing. Even I abstain from aggressive commenting. Here. Mostly. That's good. Compare comment section gone crazy on: mru :D

Expand full comment

Note that by the definition Scott proposes above ACX has censorship, not moderation.

Expand full comment

Forgive my poor English.

I think Scott here implicitly pointed to an uncomfortable truth: traditionally, the default in policy and political thought leant heavily toward "censorship" as you listed in points 1-3. The point was that we *need* these kinds of restrictions to have a well-ordered and functioning society.

That position was seen as self-evident, and I think it was not entirely wrong. (Actually, it might be mostly true for any pre-liberal-democracy polity.) The question was not why censorship might be necessary but how it was even possible to get rid of it without fostering disaster.

It was liberals who argued that, beyond the rights people were naturally entitled to, we didn't need these restrictions to have a tolerable and functioning social and political order - or at least not to the extent most authorities believed.

THAT WAS A NOVEL AND UNORTHODOX idea back then, and behind it was a whole set of new understandings of how social and political institutions work and organize themselves. If we look carefully into the classical texts written by the proponents of freedom of speech, what they were arguing was essentially that the PUBLIC interest was best served by reducing authorities' restrictions on speech, and that everything bad that came with this was a rather small price for societies as a whole to pay.

That is to say, the harassment problem that bothers people as private citizens was not at the center stage of the debates historically, just as it really is not today. The hyper-individualist (or rather narcissistic) vibe of modern America makes everything seem to be about particular persons, while in truth it is not.

And modern liberal democracy (with its brand of lightly censored publication/communication ecology) is still a very young thing compared to traditional modes of governance. And the understanding of the 1st Amendment as something granting unrestricted free speech arrived very late even by the standards of liberal democracy.

Historically, even liberal societies were *not* running on the idea of unbounded speech that we now hold so dear.

The "good old days" many lament as being destroyed by social media are anything but old. That order worked by establishing de facto "gatekeepers" to contain the spillovers of theoretically unlimited speech, without the need for FORMAL regulation. What made this possible was the simple fact that the cost of producing AND distributing ideas was high enough that elites with access to capital, higher education, social status, and professional reputation had a much greater say in the shaping of public opinion than ordinary people. Laypersons could only channel their influence through the elite pipelines. Raw feelings and preferences were refined and moderated along the way.

To put it another way: we used to have an oligopolistic but competitive market of ideas while marketing it as a free market. The liberal arguments against a government-regulated ecology of speech were valid largely because the natural course of things did not run to its logical conclusion. And it served most people well.

Until the drastic reduction of those costs hugely disrupted the status quo. Suddenly the "free market of ideas" became a reality where it used to be a helpful fiction. The authorities, the old guard, and a good portion of commoners are not happy with that - with good reason.

It is not that social media presents some *new* challenge to an established "traditional" liberal order; rather, an old debate has been revived into public consciousness in a new technological environment.

Expand full comment

You seem to ignore the rather ubiquitous and obvious counter-argument to this:

> But my point is: nobody is debating these arguments now, because they don’t have to. Proponents of censorship have decided it’s easier to conflate censorship and moderation, and then argue for moderation.

Which is that many, many people conflate in the other direction, i.e., claim they're being "censored" when all that has happened is that comments which are offensive, dangerous, in violation of terms of service, or damaging to the product's image/quality have been moderated.

Expand full comment

But they are actually right; the entire point of this article is that every single removal of material that you're not 100% sure every single user of your service would hate constitutes censorship. This is demonstrably and trivially true in the case of Reddit et al.: they remove things that even the *majority* of their users would not like removed, not just a few.

"Moderation" is when you simply attach a big red flag on the post saying something along the lines of "Our Moderators Think You Would Hate This, Show Anyway [Y/N] ?". If you *truly* think the things you hate are really unpopular then you should have 0 problems with this, eh?

It's essentially a bet. Would you bet me 10000000000000000000000000-to-1 in my favor that the sun won't rise tomorrow? If the sun rises, I lose and give you 1 unit of money; if it doesn't, you lose and give me the other number in money. You *should* have no problem with this if you're sure the sun will rise tomorrow (and that I have the money to pay). Same thing for the wrongthink that you think is universally hated: you should have absolutely no problem with allowing it under a big red opt-in flag.
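
A minimal sketch of what that opt-in flag might look like in code (everything here is hypothetical, just illustrating the flag-instead-of-removal idea): the moderators' judgment becomes a click-through warning, not a deletion, so the bet that the content is universally hated actually gets tested.

```python
# Toy model: moderation as a warning label with an opt-in, not a removal.

flagged = {"post-123": "Our Moderators Think You Would Hate This."}

def show(post_id, body, show_anyway=False):
    warning = flagged.get(post_id)
    if warning and not show_anyway:
        return f"[!] {warning} Show Anyway [Y/N]?"
    return body

print(show("post-123", "Some allegedly universally-hated wrongthink"))
print(show("post-123", "Some allegedly universally-hated wrongthink", show_anyway=True))
```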

Expand full comment

How are there no comments here about the fediverse/Mastodon? The way picking an instance works, and how instances federate/mute/block each other, is, like, exactly this.

Expand full comment

Posts like these make me perversely hope Substack keeps being banned in China. Although I suppose under the described system, I could just toggle the "see anti-CCP posts" button for the Classic Scott Experience. Assuming they'd still get written in the first place...

I think one of the biggest values of theoretically being able to see all content all the time is easier quantification. Like, part of the problem with curating a fake public square is that participants get a hugely distorted idea of what actual median people think. Since those are the voices not participating at all, or banned if they do. Lotsa people *want* that sort of echo chamber, obviously. But when inferences about the territory start getting made from intentionally-misleading maps of that sort, well, then you get Twitter. Not every individual has to see all the shitty stuff...if only researchers bother to dive into that abyss, it's still better than having no idea at all what gets through post survivorship bias.

Expand full comment

One scenario that is not mentioned is sharing data about 3rd parties - doxxing, stalking, revenge porn. Merely not seeing such material probably won't be enough for many people, and in many countries sharing such information could create liability for platforms.

But looking at the current landscape makes me wonder about one more thing: how much of the censorship-vs-moderation issue is caused by the scale of current platforms. The amount of spam on certain platforms would make "don't show" features unusable, and the sheer amount of content forces a push toward automated moderation/censorship, which causes its own set of issues.

Expand full comment

Chinese has a built-in workaround, at least for now. The word for “The West” and the President’s proper name are homophones. Criticizing The West for its corruption and state capitalism is a free action. Obviously that changes with the next emperor..er…President…

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

Before I talk about the territory, I'd like to challenge your map. I don't think that that's what those two words mean, and I think that inaccurately using a word with strong negative affect for the thing you want to oppose is cheating.

I think that the distinction between moderation and censorship in standard English usage is that if you're one private actor among many saying "you can't say that on my platform" you're doing moderation, if you're a state saying "you can't say that at all" you're doing censorship, and if you're a private actor so large that your platform is a basically a monopoly it's a messy grey area (Google is approaching this point; Twitter definitely isn't).

As evidence that you know this, let me cite the fact that the things you've repeatedly referred to as "moderation" policies here and on SSC fit your definition of censorship, not moderation.

With that out of the way I'd also like to highlight a really important advantage of the kind of moderation you call censorship over the thing no-one actually does that you think would qualify as moderation: community curation. If your goal is to provide a platform for social media, where people - including strangers - can interact, and you want those interactions to be as likely as possible to be positive, then cracking down on posts and posters likely to provoke negative interactions is a really important tool. If you're a publisher (in the old-fashioned literal publisher-of-books/newspapers/magazines sense, not in the context of the platform-vs-publisher brouhaha) whose function is purely to let people put information out there, not to facilitate two-way communication, that may not matter, but for social media sites actively driving away people who have negative interactions with strangers is a strong positive good, because they're going to change and toxify the culture of your site /even if you let people choose to avoid seeing them/.

Conversely, I think that the three arguments for the kind of moderation you describe as censorship are less strong than they might be if advanced for genuine censorship, because of the distinction between "it doesn't happen here" and "it doesn't happen". But in practice, while saying "not here" doesn't stop those kinds of speech, empirically it probably does reduce them, so I agree those arguments aren't null and void, and the fact that there are overwhelmingly strong arguments /against/ genuine censorship that don't apply to TKOMYDAC often makes them a good halfway house.

Let me refer you to your own previously-expressed admiration for "archipelago" type community formation. For that, you definitely need TKOMYDAC, not the new kind you're proposing.

Expand full comment

Thanks for this. I was looking for a response that tackled this distinction head-on. The original article refers to censorship as something done by "people in power", which is a blurry formulation that includes both "state actors" and "people with bigger megaphones than I have."

I would submit a much clearer definition of censorship: it's something that only state actors can do. In a US context, the government's intercessions into absolute free speech are fairly circumscribed (laws against CSAM and DMCA/copyright enforcement being the two biggest categories). Literally everyone else is exercising their First Amendment rights in various ways that intersect. "Moderation," under this definition, is just another form of that. I literally don't know what it means to say that two parties A and B who want to communicate with one another are blocked by non-state-actor third-party C from doing so -- A and B can just pick another venue for their conversation.

Expand full comment
Comment deleted
Expand full comment

The entire reason to especially dislike censorship is the legal consequences. Our host here can ban me for any reason or no reason, but that's the extent of his power -- he can't throw me in jail. That distinction dwarfs any other consideration. If I can't post here, I can take my ideas elsewhere.

Ceteris paribus we should prefer the least amount of moderation needed to allow for a free exchange of ideas, though a private forum can make whatever moderation decisions it chooses (and users can react accordingly). None of that is usefully described as censorship.

Expand full comment

You're one of several people pushing back on the use of the word "censorship" here, and I picked yours to respond to mostly because you seem to have spent the most effort both elaborating on it, and pushing on an alternative.

Let's examine the word censorship, though. If you're a state actor and you stop communication, that's censorship.

Okay. What qualifies as a state actor for the purposes of censorship?

Okay, federal government clearly counts. I think a state pretty clearly counts.

What about counties? What about cities?

These are clearly also state actors, but as we get more granular, we start to observe an interesting phenomenon - namely, it becomes increasingly cheap and easy to just - move outside the political boundaries in question. We reach a really weird inflection point with very small cities, which are state actors, and very large HOAs, which aren't.

Monowi, Nebraska, might be the smallest city in the US at 135 acres; the Association of Poinciana Villages claims to be the second-largest HOA at 47,000 acres.

For a given restriction on speech, which entity has the clearer claim on the word "censorship"?

(Although, given the fact that Monowi is an authoritarian dictatorship in which the word of a single person rules the city, and nobody else may vote, maybe Monowi. That's a joke, mind; it has a single legal resident, per the internet.)

That is - "state actor" doesn't actually capture what we care about.

Nor does "Authority to use violence to enforce its rules", both because of the obvious indirect causes - private entities can call the police to enforce their rules - and also the maybe less obvious direct causes - in many states property owners can use violence to enforce their property rights.

Scott is not, here, -ignoring- a definition of censorship, but rather, -creating- a definition of censorship, which attempts to capture more of what he, and many readers, actually care about. The point isn't to sneak in connotations using a heavily-loaded word, it is to adjust the word to capture the cases which actually bring these connotations in in the first place.

To explain:

You say Twitter isn't that important - not like Google. Okay, let's grant that Twitter does not actually have a monopoly on communication, which is fairly obvious and uncontroversial.

I'd say it is, however, part of an oligopoly, whose consensus rules have, over the last few years, systematically attempted to strip particular viewpoints out of the public digital discourse. Even if we grant that Twitter is not a monopoly, and cannot control discourse, the abstract entity composed of the set of digital social media giants -does- have this kind of power.

With respect to that gestalt entity - yes, these rules do constitute censorship, per your grant of monopoly positioning creating a state-like entity for the purposes of the use of the word censorship.

In line with the monopoly conception, we can observe that there are many people over the last few months who have been very upset that the people you say weren't being censored will now be heard by the public. Regardless of whether or not you personally feel that Twitter preventing certain viewpoints from being heard amounts to censorship, on account of insufficient market power, a whole lot of people, both for and against said policies, clearly disagree with you.

On the gestalt entity thing, this is analogous to collective-oriented frameworks routinely used by left-wing thinkers; "systemic racism", in which individually non-racist things can collectively become a racist gestalt; or "stochastic terrorism", in which individually non-terrorist acts can become terrorist; for two examples. Characteristics absent from individual items in a set can nonetheless emerge from the set itself - and people can be opposed to an element of a set for contributing to that systemic emergent characteristic, even if the individual element does not represent a meaningful instance of that characteristic.

Alternatively, we can describe it in terms of Kantian universalizability, albeit not necessarily of a moralistic nature:

https://slatestarcodex.com/2014/05/16/you-kant-dismiss-universalizability/

And hey, I think there's a neat conceptual generalization sitting there, if anybody cares to explore it. (I have too many projects already, personally.)

Expand full comment

You didn’t address the actual behind the scenes rationale for censorship and propaganda uncovered in the Intercept “Truth Cops” piece:

“Jen Easterly, Biden’s appointed director of CISA, swiftly made it clear that she would continue to shift resources in the agency to combat the spread of dangerous forms of information on social media. “One could argue we’re in the business of critical infrastructure, and the most critical infrastructure is our cognitive infrastructure, so building that resilience to misinformation and disinformation, I think, is incredibly important,” said Easterly, speaking at a conference in November 2021.”

According to this minister of truth no one has ever heard of, Uncle Sam owns our thoughts and feelings and has a duty to maintain that “cognitive infrastructure.” Our minds are apparently a public utility. So it necessarily follows that DHS gets to decide what does and does not enter them. Under the concept of “cognitive infrastructure,” there can be no objection to censorship. There can be only submission.

Expand full comment
Comment deleted
Expand full comment

They never stopped. They just got better at it.

Expand full comment

Although they seem to have slipped back to being too obvious lately... Or maybe their ambitions are so high it's no longer possible to be subtle about them...

Expand full comment

Or maybe they know there’ll be no pushback. They don’t need to go to the trouble of running a program like Operation Mockingbird anymore because now they can just put their spies on TV and in the newsrooms, have them say “trust me, I’m a spy,” and reasonably anticipate a critical mass of the American people will accept everything they say as gospel. Why even bother being covert anymore?

DHS clearly wasn’t concerned about covering their tracks while flagrantly violating the First Amendment. They just did it, and are continuing to do it, because they can and no one will stop them.

Expand full comment

You have a very good point. I am European, so I'm not really following the latest internal US scandals. But I am still baffled that stuff like Echelon and the spying on Merkel's phone triggered almost zero reaction...

Expand full comment

If you're referring to the Intercept piece on this topic, which is where your quote comes from, you may want to review this: https://www.techdirt.com/2022/11/02/bullshit-reporting-the-intercepts-story-about-government-policing-disinfo-is-absolute-garbage/ as the piece appears to be total bullshit. For a brief summary, see Mike Masnick's twitter thread here: https://twitter.com/mmasnick/status/1587864857398767617

Expand full comment

Shocker. Someone’s “debunking” the Intercept piece. But the documents speak for themselves. Basically the “debunking” amounts to “some of the documents were already public, and I agree with what DHS was doing anyway.”

Notably, there is no challenge to the authenticity of the documents and quotes that form the core of the piece. In other words, there was nothing wrong with the reporting. Are you suggesting the quote I provided is fake? It is not. And the quote is Orwellian on its face. I don't need anyone to interpret it for me or defend its substance or provide "missing context", thanks.

These days, the difference between a “conspiracy theorist” mired in “misinformation” and a good informed citizen is that the conspiracy theorist objects to flagrant authoritarianism and the informed citizen supports it. This “debunking” is nothing other than DHS cheerleading. There was nothing wrong with the Intercept reporting, and the reporters involved have impeccable track records. The hit job is garbage.

Expand full comment

No, I'm not saying the quote was made up, I'm saying that the piece elides context and avoids discussing what actual recommendations/actions were proposed:

"It includes four specific recommendations for how to deal with mis- and disinformation and none of them involve suppressing it. They all seem to be about responding to and countering such information by things like “broad public awareness campaigns,” “enhancing information literacy,” “providing informational resources,” “providing education frameworks,” “boosting authoritative sources,” and “rapid communication.” See a pattern? All of this is about providing information, which makes sense. Nothing about suppressing. The report even notes that there are conflicting studies on the usefulness of “prebunking/debunking” misinformation, and suggests that CISA pay attention to where that research goes before going too hard on any program."

Expand full comment

Copying a comment I made on Reddit about this:

Masnick's argument is not only inaccurate in places, but fundamentally misses the reason to care about this by treating every takedown he makes as equally important.

CISA isn't controversial? MDM is public? Who gives a fuck about any of that, it's not the important point of the story.

"They all seem to be about responding to and countering such information by things like “broad public awareness campaigns,” “enhancing information literacy,” “providing informational resources,” “providing education frameworks,” “boosting authoritative sources,” and “rapid communication.” See a pattern? All of this is about providing information, which makes sense. Nothing about suppressing"

None of this might be notable except for "boosting authoritative sources". By definition, this is suppressing speech - what may have naturally come to the top is now subject to an "authoritativeness" factor. DHS uses the example of election officials, but it's not hard to see that this would mean "mainstream and largely left-wing media sources" are the gold standard while anything that isn't sits in an uneasy position unless it also confirms what those sources say.

We've already seen how this played out with YouTube, where individuals were restricted from talking about current events if they wanted to keep their monetization status, but news companies were free to publish their own stuff without issue. Or just look at Twitter, which issued the "this may be false" warning on certain claims about the election, and implied they had taken a stance on what the objective truth of the matter was.

Then there's the big one - Masnick's refusal (or perhaps, inability) to understand why this is problematic.

"But the big companies, for fairly obvious and sensible reasons, also set up specialized versions of that reporting system for government officials so that reports don’t get lost in the flow."

This is precisely the point of concern! Governments cannot be trusted to not rely on their own power to pressure non-governmental entities into doing their work for them, there are laws about this. A system in which the government can make uniquely marked requests is one in which it can start asking "Hey, why didn't this come down?" with the kind of power dynamic every piece of anti-sex feminist literature can only dream about. Hell, Greenwald already made the point a year ago that there's a non-negligible chance Apple and Google scrambled to drop Parler from their app stores once it was clear they'd be overseen in government committees by Democrats.

For that matter, Masnick literally confirms what I'm talking about later, and it's not hard to see how this information can be used to lean upon the platforms.

"The response from a CISA official does say that their hope is the social media companies will (as the Intercept notes) “process reports and provide timely responses, to include the removal of reported misinformation from the platform where possible.”"

He also claims to not understand the point about the DHS getting pressured to help Bush's 2004 election chances - the point should be pretty obvious by now, in that it's not unheard of for the DHS to get involved with the nation's politics.

Oh, and the EIP point, where he says it wasn't done by the government? The original project literally consulted with CISA in the first place, it's not like the EIP was operating completely separately. Were they in active collaboration? Maybe not, but there's still a connection.

There's also Masnick being aliterate - choosing not to read. In response to the following excerpt from the Intercept article:

"The report called on the agency to closely monitor “social media platforms of all sizes, mainstream media, cable news, hyper partisan media, talk radio and other online resources.” They argued that the agency needed to take steps to halt the “spread of false and misleading information,” with a focus on information that undermines “key democratic institutions, such as the courts, or by other sectors such as the financial system, or public health measures.”

He says the following:

"Note the careful use of quotes. All of the problematic words and phrases like “closely monitor” and “take steps to halt” are not in the report at all. You can go read the damn thing. It does not say that it should “closely monitor” social media platforms of all sizes. It says that the misinformation/disinformation problem involves the “entire information ecosystem.” It’s saying that to understand the flow of this, you have to recognize that it flows all over the place. And that’s accurate. It says nothing about monitoring it, closely or otherwise."

This is completely false. The report in question explicitly calls for CISA to have a system for "rapid identification, analysis, and applying best practices to develop and disseminate communicative products." This is also the same document saying they should boost "authoritative sources", which by definition is suppressing (or even halting) the spread of other voices.

Let's not ignore his point about the Hunter Biden story, which includes a sneer at the idea that the story could swing the election because "Hunter Biden wasn't running". Y'know, like it might not have impacted his father, who was. Nor does he think there's anything wrong with FB suppressing the NY Post link until it was verified, but this only raises the question of how FB is verifying these things and why that's necessary in the first place.

To repeat myself, Masnick not only missed the point, he misrepresented arguments or facts repeatedly and generally failed to understand why people were upset in the first place.

Expand full comment

Thanks for the detailed response. I didn’t have the energy to do a point-by-point.

Expand full comment

I'm not sure that censorship and moderation are as neatly separable as this suggests.

What about "curation"?

What about inadvertent "desensitization" and unwitting addiction? How many people became addicted to cigarettes because of seemingly innocuous past advertisements - neither moderated nor censored, but curated in a particular way: content A next to content B in magazine C?

Censorship, which does no damage to solidarity, probably does not bother me as much as it should. Libertarianism is a dead end as far as I'm concerned.

Expand full comment

I think the actual difficulty is people using your social network to cause unpleasant things to happen IRL.

The line here seems hard to draw; it depends on how much of a following the actors have, how much "will no one rid me of this turbulent priest" we're getting, and how high-profile/capable of dealing with things the targets are...

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

Sounds like these definitions mean:

Moderation is for the benefit of the user.

Censorship is for the benefit of the platform (but claimed to be for the user or their safety).

Imagine the wretched results if a platform had various categories of ban - too right-wing, too left-wing, sexual content, religious content, whatever you like. Then users could shape their news into exactly what they want to read (and you know we already do that to ourselves by reading only those sites that tell us things we agree with!). We could all end up reading the news from what seemed like different planets, with total social disconnect. That's nightmare-fuel.
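
For concreteness, the kind of user-selectable category filtering being imagined here might look something like this toy sketch (categories and names hypothetical):

```python
# Toy model: each user keeps a set of categories they never want shown,
# and their feed is shaped accordingly.

user_bans = {
    "alice": {"too_right_wing", "sexual_content"},
    "bob":   {"too_left_wing"},
}

feed = [
    {"title": "Story A", "tags": {"too_right_wing"}},
    {"title": "Story B", "tags": {"too_left_wing"}},
    {"title": "Story C", "tags": set()},
]

def personalized_feed(user):
    # Drop any item whose tags intersect the user's banned categories.
    return [item["title"] for item in feed if not item["tags"] & user_bans[user]]

print(personalized_feed("alice"))  # ['Story B', 'Story C']
print(personalized_feed("bob"))    # ['Story A', 'Story C'] - different planets
```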

Expand full comment

Nah, that's already what happens: news outlets specialize to an audience, either the old-fashioned way (right-, centrist-, or left-leaning newspapers and journalists) or the newer, more dynamic way of recommendation algorithms and subscriptions on digital hubs like YouTube.

The only difference is that with tags or other user-selectable filtering, the final say is the user's, and it's kind of transparent... while currently it's an editorial decision that balances a lot of things (pressure from more powerful actors, commercial interests - which could be seen as a particular form of such pressure), but not directly the interest/decision of the final consumer. At best, that is accounted for in an averaged way when trying to maximize the audience, and it's in any case a very opaque process...

Expand full comment

Came here to say: this post feels different, in a way I didn’t enjoy. Like: “oh, there is an opinion stated as a fact” (when you said “that’s not true at all”. Note I’m not making a claim about whether or not I agree with you).

I don’t like imagining this is the kind of article that triggers either boos or rallying chants. I prefer ACX less political, more observational.

Expand full comment

Maybe you should moderate it away, like, hum, stop reading the article when it becomes clear it's not what you like?

Expand full comment

No..? Frowning at my screen & gesturing with my hands.

I’m giving my opinion hoping it’ll influence Scott in the direction that I want.

Expand full comment

Very wise and well stated. I hope Elon or his people are listening.

Expand full comment

Consider /r/AskHistorians, which has notoriously strict moderation criteria. In many cases readers wouldn’t mind reading some of the comments removed for not quite being up to standards. Still, nobody calls this editorial policy “censorship”.

Expand full comment

What types of comments get banned on /r/AskHistorians? I really don’t go on Reddit as much anymore because a lot of the moderation is extremely heavy handed, with a considerable lack of nuance. Even the NFL/NHL subs, which you’d expect to be neutral, are explicitly establishment left biased.

Expand full comment

They remove all comments that don't adhere to quasi-academic standards of sourcing, depth, and so on. The only topic specifically banned by name in the rules is Holocaust denialism, which strikes me as odd because, *if you're really sure* the topic is irredeemable, why not just enforce the good-sources policy and let the topic disappear on its own?

It also has a policy that no topics less than 20 years old are allowed, which is an excellent way to repel Current Thing Syndrome.

Expand full comment

Any comment that doesn’t look like a whole essay is likely to be removed. The answer should give some background, be detailed and well sourced.

Expand full comment

The most important thing to discuss in the NHL is racism 24/7!

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

Meh, I found that sub pretty useless and unfriendly due to the moderation policy.

Expand full comment

I would, because the respondents on r/askhistorians display all the worst qualities of academic historians.

Expand full comment

It seems like you follow the logic "if it's bad, it must be called censorship".

I also don't like /r/AskHistorians moderation policy. Once I asked a question there, received no answers and from a discussion afterwards found that some people could answer my question, but were afraid that their comments would be deleted because they weren't sufficiently detailed.

Still, I don't think the AH moderation policy is censorship; it's just that, an editorial decision - the way they think they can maximize the quality of their subreddit. They are not preventing people from going to, say, /r/AskHistory and asking their questions there.

Expand full comment

Thinking about this a bit more, isn't this just advocating for the creation of a parallel, unmoderated social network for every existing social network?

Expand full comment

Did you really just say that CP isn't a compelling thing to censor? The second-order effects are intolerable!

Expand full comment

What are those second-order effects?

Expand full comment

This is an interesting and, to me, new line of thought, and I think the idea of non-censorship or at least minimum-censorship moderation deserves exploration.

At first glance it seems like a Pareto improvement on the current equilibrium. But when I think more about it, I'm less sure.

In one sense, non-censorship moderation gives too much freedom of speech. Every platform that uses it will have to accept that it provides tools for bad agents to do bad things. This is solvable by using minimal censorship instead. But that just passes the buck of what has to be censored. Where is this minimal level? What exactly is the criterion we are using to figure out that child pornography and bomb-making are okay to censor?

On the other hand, this system gives less freedom of speech, or at least makes it less meaningful. The whole reason free speech is important is that winning in the marketplace of ideas is correlated with truth. We accept the possibility of false information spreading because we can't be sure that our own understanding of what is true and what is false is perfect. But with this system we can get multiple poorly connected marketplaces of ideas, wins in which would be less meaningful. If implemented poorly, the system will make engaging with opposing ideas even harder, creating an even more polarised society while also empowering communities based around false beliefs.

Expand full comment

The conflation trend also leads to the definition of moderation always creeping towards censorship: preference falsification happens, the dissenters can't vent, so they co-opt more ways of covertly talking about the topic, which leads to the emergence of 'dog whistles', which of course means ever more moderation covering an even broader spectrum of terms.

Moderation is about having online speech conform to a minimum, clear standard: 'don't call for violence', 'don't repeatedly spam harassment', 'don't reveal others' personal data'(this is one kind of ban that no opt-in moderation could ever solve) et cetera. Simple, broad rules that can't be interpreted in different ways to skew an ongoing public discussion. Censorship is easily enacted by making similar standards blurred enough to allow selective application or having them be defined by 'impact'.

Expand full comment

I had never seen the words "preference falsification" before, but that really is a great summation of what I think drives a lot of people nuts about this behavior. Things aren't really moderated on "were people harmed?". It's more "can extreme partisans make a case that someone might be harmed?".

Expand full comment

I once went on Twitter and saw that someone had posted a picture of feces in response to a reasonable argument by a female journalist (Megan McArdle). And then I went off Twitter...something about the combination of insulting and disgusting and sexist really repelled me.

On the other hand, I have no problem with sexually explicit speech, "bad" words, or fringe opinions. I'd love something like the self-guided moderation rules proposed here. Is this a nod to Elon's "choose the experience you want to have" idea?

Expand full comment

There also seem to be a lot of reasonable people who think it's okay to allow personal insults (like "fuck you" "you're an idiot"). I think that sort of thing is really toxic to discourse, but hey, choose your own experience.

Expand full comment

I disagree; much more serious insults get thrown around rather casually. If someone makes a relatively innocuous, slightly "right-adjacent" point, and the response is "of course you think that, Nazi", then "fuck you" or some other personal insult is a great response.

They just made a much more serious personal insult.

Expand full comment

Not directly addressing your main point here, but how is the feces picture sexist?

Expand full comment

I wouldn't love for pictures of feces to be normal responses to reasonable arguments, but "sexist" isn't when you insult a woman. "Sexist" is when you insult a woman (or a man) by something that simply translates to "haha you're just a woman" (or a man). There is nothing women-specific or even women-suggesting about a picture of feces. If the poster in question replied with a picture of a kitchen (to imply it's the only place fitting a woman), a picture of a tampon (implying the journalist is on her period and "not stable"), or a 1950s caricature of a woman being spanked then that would have been a sexist response.

Diluting words like that is why I have a subconscious module in my brain that sees "Racism" or "Sexism" or "Bigotry" and immediately blare red in alarm because they are markers of a Certain Ingroup, although they are very useful words that once denoted very real and vile prejudices.

Expand full comment

100% Agree and I feel the same way when I see those words. I feel like I may even over correct sometimes and end up with false negatives in my mental inventory of certain situations. I guess given the current social context and the fact that I’m not a rationalist, I’m ok with this state of affairs for now.

Expand full comment

A fully general counterargument to any criticism is the bulverism "you only think that because you hate <some group I belong to>". Using social justice (-ism -ist -phobe) words loosely is often a way to invoke that bulverism. I agree that people should be more careful about choosing which negative affect words to use to describe a thing. "rude and puerile" would be a good choice to describe the feces.

Expand full comment

I don't think this holds up. Just because the message itself isn't coded based on sex doesn't mean the motivation isn't.

If someone responds to literally every post by a woman* with a picture of feces, I'd feel comfortable calling that person sexist. It's just the most plausible explanation for their actions. Obviously there are differences in scale and behavior patterns etc. but the principle is sound. Determining how many cases it takes is a different question, of course

*I'm just using woman here because it was in the original example

Expand full comment

Using only the message itself is the only guaranteed way to make false positives = 0. It's also simple and stateless: no other data beyond the message is needed, no need for a "history". There is no question that somebody who uses a woman's biology or social context against her in an unrelated argument is a sexist.

But you can never really be sure that a person who responds to every single woman he argues with using a picture of feces is a sexist. How many women are there? Maybe he just follows the ones he hates.

Feminists, for example, are numerous on Twitter (but almost nonexistent as a percentage of all women globally), and they are known to say extremely dumb and unfair things about men (especially on Twitter). It wouldn't be fair to call a man who responds to all feminists with pictures of feces a sexist (notice that this is a completely different question from whether this man is right or justified), as he never used a feminist's femininity against her, just retaliated against an opinion (very likely anti-men and insulting) with a crass insult.

This way of judging is also practically begging to be selection-biased: you won't ever notice all the countless times he replied to women (who even knows who's a woman on the internet?) with "awesome" or "queeeen", you will just notice the feces.

Sexism is an almost comically strong label: a man (let's be honest, it's almost always used against men in favor of women) who hates an entire half of his species, and not only that, but the half he is *supposed* to be the most affectionate with. In order for this label to be anything but a cynical weapon, it has to have equally strong preconditions. Not even 1000 women met with generic insults are enough to apply the label; it has to be specifically a sex-based insult against women.

Expand full comment

Idk, some of the same problems still exist here. Think about the Hunter Biden story and some of the COVID stuff, which were really the most egregious cases of “moderation” becoming censorship. The whole focus of the outrage is that certain people were not exposed to these stories, which could have been important in shifting public opinion one way or another. People *should* be seeing this stuff. If 90% of people have the moderation filter on by default, then it doesn't change the fact of public opinion being shaped by the whims of whoever is making the moderation decisions. Even if they turn the filter off so they see the stories, they are still “tainted” by the hand of the mods.

I don't think there was any person who was actually unaware of the Hunter Biden laptop because of the ban. In fact, the ban probably gave it a huge boost in exposure. But the fact that it was taken down caused anyone whose only heuristic is “trust the establishment” to automatically dismiss it.

I feel like the only way this works is if you have the most honest, principled, and intelligent people making the decisions. However, I'm skeptical that ANYONE at all is honest, principled, and smart enough for that impossible job.

Expand full comment

"Disinformation", wrongthink you mean.

Fair enough about harassment, but most of the time moderation is a way to shut down the side of the debate that the moderators disagree with. Reddit is the embodiment of this.

Expand full comment

Do you really think there is no intentional propagation of false information that we can fairly call disinformation? It’s all just valid information that is unfairly shunned as wrongthink?

Otherwise I’m having a hard time understanding your apparent assertion that all claims of disinformation are really wrongthink.

Expand full comment

Wouldn't Bad Actors just start calling things that aren't spam spam? A lot of this happened in the first place because there was a reservoir population of Extremely Online obsessives who WOULD make a stink about everything they didn't like, and for some reason the PR departments thought these people represented "the Public". (Also, later, they happened to be using the same language as all the new hires).

For example, if there's a filter for Conspiracy Theories, doesn't everything not on CNN (Or whatever) just become a Conspiracy Theory? The Theory that there ISN'T a conspiracy among the police to kill black people for fun will become a Conspiracy Theory. Democratic Politicians That I Like Are Capable of Telling Lies? Conspiracy theory.

________________________________________________

The thing I like about the idea the most is that people can SEE the toggles on their end, and sticking yourself in a kiddie pool is a deliberate choice rather than a cattle chute.

Expand full comment

Yup -- like how the term "fake news" was originally used to indicate a very specific phenomenon -- websites and mass e-mails with fictional "news articles" with clickbait like "Oprah dies in car accident", without even a grain of truth behind them. But of course, within weeks of the term being coined, people started using "fake news" for anything their political opponents said which wasn't 100% mathematically provable fact, without holding their own side to quite the same standard.

Same with "fact checkers". The idea is nice: we're a neutral party that doesn't get involved in the debate, we're just going to check objectively factual statements in order to have some ground truths all sides can agree on. But of course truly neutral parties are hard to come by, and any claim more complex than 1 + 1 = 2 is going to involve some interpretation, technically true but misleading implications, etc. So you get sites like Snopes and Politifact putting the "true but misleading" or "incomplete" stamp on claims they don't like, while applying a much lower standard to claims they are more sympathetic to.

Expand full comment

The liberal solution would be to have different providers of filters that you could choose from, so you might have a conspiracy-theory filter from a right-wing provider that filters out the patriarchy and structural racism or a conspiracy-theory filter from a left-wing provider that filters out the lab-leak theory or whatever.
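A minimal sketch of what that could look like, assuming a platform exposed a pluggable filter interface. Every provider name here is hypothetical, and real filters would be far more sophisticated than keyword matching:

```python
from typing import Callable

Post = str
FilterFn = Callable[[Post], bool]  # returns True if the post should be hidden

# Hypothetical third-party filter providers a platform could let users choose from.
PROVIDERS: dict[str, FilterFn] = {
    "right_wing_provider": lambda p: "structural racism" in p.lower(),
    "left_wing_provider": lambda p: "lab leak" in p.lower(),
    "no_filter": lambda p: False,
}

def visible_feed(posts: list[Post], provider_name: str) -> list[Post]:
    """Show only the posts the user's chosen filter provider lets through."""
    hide = PROVIDERS[provider_name]
    return [p for p in posts if not hide(p)]

feed = ["New thread on the lab leak theory", "Essay on structural racism", "Cat photo"]
print(visible_feed(feed, "left_wing_provider"))
# ['Essay on structural racism', 'Cat photo']
```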

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

This may have been brought up in one of the 458 comments I skimmed but overlooked. There was an earlier comment policy along the lines of:

"be either true and necessary, true and kind, or kind and necessary"

Please put this or a version of it right above the comment box - or at least mine!

Expand full comment

Here's a link: https://slatestarcodex.com/2014/03/02/the-comment-policy-is-victorian-sufi-buddha-lite/

(TL;DR: this was SSC's old comment policy from 2014, which seems to have sort-of-revived.)

Expand full comment

yep, that's it!

Expand full comment

The left believe that if the right are allowed to talk to each other without censorship they will whip each other into a genocidal fascist frenzy. So they must use censorship and even violence to maintain a liberal democracy.

Expand full comment

Well, that's exactly what already happened to the left, with a few words swapped.

Expand full comment

I completely agree. I want the "algorithmic news feeds" to work like this as well. I want to tell FB not to show me ANY political content... I really just want to see the best of humanity, not the polarized divisions of humanity. Taking this kind of content labeling one step further, and slightly off topic... I also wish Netflix would give me these toggles. If I'm okay with violence but don't want to see anything more than kissing, I should be able to watch GoT (condensed version) without nudity. This would be excellent for kids. Turn a rated-R movie into PG by skipping content programmatically using ML. This is possible today, and I wish it were actualized.

Expand full comment

No escape. They'll say "it isn't political, it's just niceness/common decency" and call anything featuring a hetero romantic relationship Cis Political Propaganda.

Expand full comment

The discussion is to be had outside 'business' activity. Moderation is censorship in every way (graded); the algorithm will moderate (choose) and will, therefore, censor. This is true of most human discourse, except 'moderation' is usually exerted morally through a societal network of consent/exclusion, and often implies only self-censorship. The problem with 'business moderation' is the business part; business entities do not have an interest in truth; they are interested in business; the $ comes from conflict, not peace. That is the reason Twitter will throw opposite views into your TL, deliberately, i.e. to 'cause engagement'. The discussion to be had is: how can a business act in order to comply with the constitutional bearings of freedom of speech? This is not, as we recall, a problem with the internet (medium) but a problem that arose from social networks themselves. The static field of speech (internet) is different from the dynamic field of social networks. Regulation is, therefore, in order. Must be.

Expand full comment

"If the sender wants to send a message and the receiver wants to receive it, but some third party bans the exchange of information, that’s censorship." I think this is a useful way to define censorship. Under this definition, Twitter have never censored anything, because they have never banned anybody from sending information to anybody else. (They have blocked people from sending information *over Twitter*, but that's no different from me refusing to let random people upload their opinions on my personal website.)

Similarly, if the government makes it illegal to publish your book, that's censorship; if a particular publisher refuses to publish your book because they think the opinions you express in it are dumb and publishing it would reflect badly on them, that's fine.

To bring it back to the humorous image of Xi Jinping: if Xi Jinping makes it illegal for two people to have sex without his permission, that's bad. If two people ask Xi Jinping to pay for a hotel room for them to have sex in, and he says no, that's fine.

Ultimately, if you're using the word "censorship" to refer to Twitter refusing to pay to store your opinions on its servers, then what word do you have left to refer to real censorship, i.e. using the threat of violence or legal consequences to prevent somebody expressing an opinion?

Expand full comment
Comment deleted
Expand full comment

I'm imagining a version of the Xi Jinping meme where on the left is Bell saying "I consent to you using this telephone system I built as long as you don't talk politics", in the middle are American citizens saying "I consent", and on the right is you saying, "I don't!"

I don't really know what you mean by "twitter enclosed the town square" - before twitter, tweeting wasn't uncensored, it was nonexistent. No type of communication which existed before twitter has been enclosed by twitter.

Expand full comment

You point out the other very important distinction I see ignored more than the censorship vs moderation problem: censorship vs refusal to publish. I think that far too many ignore that a private company choosing not to host someone else's content is _not_ the same thing as an idea being rooted out and having all publishers forbidden to publish it and you forbidden to speak it.

It's not just that it doesn't strictly fit either, it's that we really shouldn't _want_ to allow for forced publication. Typically, people are all for it when the speech is something they like and the publisher is someone they hate (similar to with censorship), but hate it when the roles are reversed, and somehow seem to have too short a memory to realize both situations have indeed happened.

To be fair, you _can_ make arguments for forced speech (if you want to), but still those arguments aren't really the same as those in favor of moderation or censorship, and we should respect the distinctions and maybe highlight that these different possible categories exist so that people aren't so easily trapped by not realizing the difference is there, and thus fall for rhetoric and policy that treat them as if there is no distinction.

Expand full comment
Comment deleted
Expand full comment

I'm actually ok with AT&T having the right not to provide their service to me if they really don't want to (full disclosure, they don't provide any service to me right now anyway).

The primary caveats I see there are: 1) Were they granted some kind of monopoly as a phone carrier in my area by my local government? If so, then they are in a different position than a purely non-government organization and their agreement with the government should outline what they can and cannot do, and hopefully my local representatives worked out some sort of agreement on what could or could not be restricted based on usual processes (basically, now it's a political issue rather than a purely private publisher issue). I wouldn't necessarily call them cutting off my phone line censorship even in this event though, it's denial of service. You can certainly have a discussion about whether it is justified or not, and land on either side reasonably, but that's simply not an identical discussion to censorship.

2) Are they using public infrastructure to deliver the service along the way? This can result in a similar analysis as above, though with its own particularities, since using some state infrastructure to provide a service on agreed-upon terms is a bit of a looser arrangement than a granted monopoly (at least to me).

Twitter's website may or may not move over lines run by telecom companies in one of the above two situations, but by then the relationship is exceptionally strained, at best, and is really more akin to a newspaper using public roads to deliver the paper (and maybe it really uses various private roads to deliver the paper in some instances).

Though, I'm also not really comfortable with AT&T having any idea what I am saying on the phone to begin with. I know any phone carrier, and others, in reality probably do know, or can know easily enough, but preferably there are provisions in the service agreement restricting this (and, again, some of this may go back to the two circumstances above as to whether the state agreement gets involved). A telephone call typically isn't meant to be a public-facing communication that has any reflection on the provider, but rather a completely private exchange utilizing a paid-for service whose sole job is to make the connection between two endpoints, not store, track, display to others, or even hear what is communicated. That kinda comes down more to an "am I willing to work with a company that would do that" situation, though (and, again, potentially the two circumstances mentioned earlier).

Expand full comment

I kind of like the framework by the DHS.

https://theintercept.com/2022/10/31/social-media-disinformation-dhs/

Spreading misinformation: I mistakenly believe Joe Biden is a lizard person and tell others.

Spreading disinformation: I know full well that Joe Biden is not a lizard person and tell others to spread panic and fear.

Spreading malinformation: Joe Biden is in fact a lizard person, but me spreading that fact is, like... a totally irrelevant detail to the discussion at hand. And me pointing out that fact is in bad faith and against US strategic interests :)

The last category, malinformation (factual information shared, typically out of context, with harmful intent), is especially neat. I, as the censor, get to determine what the context of a conversation is and ascribe intent to others.

Expand full comment

Only power that is hidden is power that endures. The real proponents of censorship aren't trying to argue with you. They want to own and control your cognitive infrastructure.

Expand full comment

I ran a popular website in the early 2010s that had a reddit-like community of people with a lot of time on their hands (mostly retirees and stay-at-home moms). Many participants seemed inexorably drawn toward in-fighting and arguing about hot-button topics, so much so that our feed showing "most recent posts" was often deluged by delirious anger about something or other. To combat this, we created a forum called "Drama" that you had to opt in to view. Within a few months, most every community member had opted into it, because who _doesn't_ push the big red button that says "show me something naughty"? Thus, the vibe was "caustic hellscape" and the moderators had a much nastier set of messages to contend with than previously, because now everybody was operating in a zone *designated for* dramatic discussions.

It was not an experience that served anyone particularly well in my opinion, but it was my n=1 data point in trying to implement a moderation platform similar to the ideal described in this post.

Expand full comment

This reminds me of the perfect scissor-nature of the SomethingAwful.com forums. Back in the golden years, two different people with different priors could see two totally different things.

One would see a paradise, a well-tended garden of discourse and friendly good humor, and be cheered to see people being temporarily banned simply for saying idiotic things, or being jerks, or being lazy with spelling and capitalization.

The other would see a hellhole where all kinds of upsetting fringe ideas were being discussed seriously, and where people were allowed to be huge trolls and get away with it simply because they were funny. And then people trying to make a righteous, appropriately angry stand against atrocious ideas would be banned for their tone!

The Internet (including SomethingAwful) has gradually trended more and more in the direction of serving the second person and leaving the first person frustrated. In the olden days, most communities would boot you not for what you said, but for how you said it. Now, you're much more likely to have free rein in terms of tone and vitriol, but be moderated on the basis of the content of your communications. And now everybody agrees that everything is definitely worse, but somehow, some people think the solution is more content policing.

Expand full comment

The MVP really isn't viable, economically, because some things are still illegal and must be taken down. That means you have to go over all the bad stuff essentially twice: once to decide whether to moderate it, and once to decide whether to censor it. Conflating moderation and censorship means you only go over things once, you can stay well clear of the line of "actually illegal" instead of skirting it, and you can paper over a lot of the differences between the legal jurisdictions you operate in. You lose all these advantages with the "pure moderation MVP".
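A rough sketch of the cost structure being described, under the (hypothetical) assumption that policy checks are just predicates over the content. Separating the two decisions means every flagged item gets judged twice:

```python
# Separated model: every flagged item gets two independent judgments.
def review_separated(item, house_rules, jurisdiction_laws):
    hidden = house_rules(item)                             # pass 1: moderation
    removed = any(law(item) for law in jurisdiction_laws)  # pass 2: legality, per jurisdiction
    return "removed" if removed else ("hidden" if hidden else "shown")

# Conflated model: one pass with a single conservative policy that stays
# well clear of the legal line and papers over jurisdictional differences.
def review_conflated(item, combined_policy):
    return "removed" if combined_policy(item) else "shown"

# Toy usage with stand-in predicates:
house_rules = lambda text: "rude" in text
jurisdiction_laws = [lambda text: "illegal" in text]
print(review_separated("a rude but legal post", house_rules, jurisdiction_laws))  # 'hidden'
```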

Expand full comment

When I signed up for Twitter, I would see posts from people I followed in reverse chronological order. They wanted me to see those posts; I wanted to see them (which is why I followed them). Now, Twitter decides which tweets by people I follow appear in my timeline. Is that censorship? Seems so according to the "If the sender wants to send a message and the receiver wants to receive it, but some third party bans the exchange of information" definition of censorship. Are all algorithmic timelines therefore censorship?

Expand full comment

What I'd like to see is a way to filter based on how intelligent the argument is. I'd delightedly read e.g. a good argument written by a white supremacist, just to get an idea of where actual thought is going. However, they usually boil down to lots of ad hominem arguments, motive-attribution, and obscenity.

Expand full comment

I mean, with a bit more AI, an automatic incoherency filter might be possible... wouldn't that be something?
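Purely as a sketch of the idea: assuming you had some model that scores how coherent a piece of text is, the filter itself is trivial. The scoring function below is a crude stand-in (caps ratio and length), not a real model:

```python
def coherence_score(text: str) -> float:
    """Crude stand-in for an AI coherence model: penalize shouting, reward length."""
    words = text.split()
    if not words:
        return 0.0
    caps_ratio = sum(w.isupper() for w in words) / len(words)
    length_bonus = min(len(words) / 50, 1.0)
    return max(0.0, length_bonus - caps_ratio)

def incoherency_filter(posts: list[str], threshold: float = 0.15) -> list[str]:
    """Keep only posts that clear the chosen coherence threshold."""
    return [p for p in posts if coherence_score(p) >= threshold]

posts = ["WAKE UP SHEEPLE", "Here is a long, considered argument about moderation policy and tradeoffs."]
print(incoherency_filter(posts))  # only the second post survives
```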

Expand full comment

Jared Taylor is one of the more articulate and polite ones in the cluster of thingspace that the "white supremacist" label often refers to when used by the media, although he rejects the label because it centralizes a weak-man version of his position. He never advocates violence or harasses anyone, but he was banned by twitter anyway for advocating voluntary segregation. Last I checked on Metaculus he was the favorite to be unbanned from twitter among a list of controversial political figures who were previously banned: https://www.metaculus.com/questions/10860/who-will-twitter-unban-before-2023/

Adrian Davies is also one of the most articulate ones: https://www.bitchute.com/video/tNKG7Hq8VHQT/

Expand full comment

Love it when an article not only illuminates a subject which I have been trying to get a handle on, but also gives me the tools to clearly express what were previously only a scramble of thoughts. Thank you.

Expand full comment

“I like when other people do my thinking for me.”

Expand full comment

Absolutely. I've tried to stay away from books, film, news, conversations with other people, and all western knowledge, but I'm just too weak minded.

Expand full comment

“gives me tools to clearly express”

Expand full comment

This is somewhat off topic but at least tangentially related.

So do you think demonstrating that you have no qualms about inciting a riot, based on a lie that you knew was a lie, one dismissed by scores of court cases, your personal lawyers, and your own hand-picked Attorney General (Barr: “I told him it was bullshit.”), should result in a ban?

Being a Free Speech absolutist has a sheen of nobility, but really, you need to leave the abstract and have a look at recent history and make adjustments to a great idea that could not have anticipated Twitter.

If millions of people are willing to accept as gospel whatever self-serving fantasy pops into one weird dude's head, you need to think it through.

Even Yuval Harari has suggested that there is at least a small chance that 2024 could be the last free election in the United States (last Friday on Bill Maher).

I had a link to the Financial Times but it was paywalled.

> Harari winced, and solemnly suggested that American democracy is now so troubled that “the next presidential election could be the last democratic election in US history”. He added: “It is not a high chance, but it could be the case.”

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I would add a fourth possible - and probably most reasonable - point for censorship (under the definitions in this post). Let's say A wants to incite violence against N, and B, C and D want to read it. Now N has a really strong interest in this not being communicated, and it has little to do with the discomfort of *reading* it. Even *laws* tend to crack down on incitement to crime, threats of violence, fraud, conspiracy and other acts that *are* speech. You don't automatically have to tell the platform to crack down on such illegal speech (you could just let the criminal system handle it), but it's also not automatically unreasonable to tell the platform to have it disallowed (after all, prevention of crime is preferable, and it seems unlikely that the justice system could handle any but the worst cases).

It's probably reasonable to have censorship against criminal activities, and moderation for everything else. Of course, being a European I know all too well that the laws can go a bit overboard - we do have weaker freedom of speech than the U.S., especially in the area of "hate speech".

Expand full comment

I think it's easy for people in stable democracies to remember social media's role in the farce that was Jan 6 but forget its importance in whipping up anti-Rohingya rhetoric.

Expand full comment

What if Pence had wilted under pressure? Would it still have been a farce?

Expand full comment
Comment deleted
Expand full comment

I’m a lot less sanguine about how that would have played out.

Expand full comment

I don't really buy that this addresses the avoid-harassment side's worries. Is it really better to know that thousands of people are talking nasty shit about you, you just don't have to see it? Like if you're not on Twitter but you know you're going viral in a negative way, I think a lot of people are not going to feel very comforted, and might not even be able to resist "reading the comments section".

I don't think our brains are good at coping with large numbers of people telling us we suck, and we should strive for an information ecosystem that just reduces the volume of the yellers (I have no good ideas on how to do this without sacrificing other good things about virality/free speech in a way resistant to political manipulation).

Expand full comment

What is it called when the government directs a company's "moderators" to remove information which later turns out to be true?

Expand full comment

The categories were made for man, not man for the categories. That guy who sits there during a debate and prevents you from talking when it isn't your turn, and stops you from drifting too far off topic, he's a censor, right?

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

A big distinction missing here is that whether something counts as censorship depends on scale. You (i.e. ACX) can remove whatever comments/ban whoever you like and it's not censorship, same thing with any small publisher or non-top 3 social media. It's basically only when Facebook or Twitter or TikTok bans someone, or more so when there's a coordinated banning across platforms, that anything is plausibly getting censored (even then, it's not like people are unable to figure out what Trump has to say). So you could kind of see censorship as a feature of industry concentration and lack of competition as much as anything else (government censorship being where there is effectively only 1 information source in this framing).

Expand full comment

This is precisely the point. If we now also take into account that the reduction of social media reach without deletion is somewhere between moderation and censorship, we see that the distinction presented by Scott is lacking.

It's probably a multi-axis thing. I'm not sure where properly to stick the labels "censorship" and "moderation": (1) Size of the platform. (2) Transparency of the action: can users see that actions are being taken? (3) Optionality: can users (technically) choose which content to see?

There is the additional axis of (4) automatic personalization: how different the content is for everybody based on mere browsing patterns. This complicates things, I'll leave it out.

Let's combine:

1 high, 2 high, 3 high: What Elon Musk wants for Twitter and Scott calls "moderation". Everybody can tailor his experience, everything can be discussed. Very confusing for users; users might not like being on a platform with Nazis.

1 high, 2 high, 3 low: No options, but transparent interventions. Scott calls that "censorship". A bit like the laws in a democratic European state: interventions are mandatory, but are made openly, there is a list of restricted works etc. To me, this is censorship only in a broad sense.

1 high, 2 low, 3 high: Strictly speaking, this is nonsensical (so the axis model is wrong). If users have options, it is necessary that they can see which categories are made.

1 high, 2 low, 3 low: A large platform where some content is secretly deleted. This is what censorship is to me. Also censorship to Scott.

1 low, 2 high, 3 high: See above, only with a small platform.

1 low, 2 high, 3 low: See above, only with a small platform. Scott calls that "censorship", I'd disagree.

1 low, 2 low, 3 high: (impossible)

1 low, 2 low, 3 low: Scott would call it censorship, I'd call it illiberal; it lacks the size for it to be "censorship".

Expand full comment

On the Internet we're often dealing with a relationship between *three* parties: the author, the audience, *and the host.* The Xi meme encourages us to overlook that, since its third party (Xi) has no need to be involved.

Surely though, where a private host is involved, their consent is morally relevant to some degree as well. I am free to converse with my spouse about anything we like in our own home, but our neighbor doesn't have to host those conversations if he doesn't feel like it. Vast networks like Facebook, Twitter, etc are still private for now, so I think their consent about what sort of business they want to be, and therefore what sort of content they want to host, does indeed matter to some degree.

I'm open to arguments that they're so big and all-encompassing that the state should step in, declare them a public service, and give them a variety of guarantees and protections in exchange for guaranteeing free speech and other civil liberties. I'm also open to arguments that they're so big and all-encompassing that they should just virtuously guarantee free speech of their own will.

Without either of those though, I don't see how we can get to a situation where the only consent that matters is the speaker and listener.

Expand full comment

The neighbor analogy is fallacious because corporations aren't people, they are morally irrelevant, their consent doesn't matter. I'm entirely ok with laws that oppress them.

And of course, almost every single censorship campaign is organized by a bunch of activists, aiming to embarrass the corporation as hard as possible to force its hand. *If* corporations were people, they would be *extremely* lazy people; they would love to host absolutely anything and everything so as not to spend money on censors (ahem, I mean, "moderators"). It's only a trifling minority of activists who don't want them to, and they frequently succeed. To frame this obvious coercion as a corporation's desire is disingenuous.

Expand full comment

Hm, I triple-checked and I disagree with every sentence & every independent clause of the compound sentences. So we're coming at this issue from very different perspectives to say the least.

To start at the start, the people who make the decisions in a corporation are pretty clearly people, and so I think their consent is morally relevant.

Expand full comment

We can trivially refute this by an example of a platform hosted by a (rich) human person whose consent would clearly matter.

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

I think Musk was responding to a similar suggestion recently, unless he got the idea straight from ACX? I guess it's worth a shot, though I'm not certain it'll solve the deeper problem of receding into echo chambers.

Expand full comment

While I overall think that having many user-configurable filters is better than censorship, I also think this might make filter bubbles even worse.

If there is one filter setting which filters out holocaust denial, and another which filters out any claims that the holocaust did in fact happen (and similar filters for Obama being a Muslim, QAnon, Trump's voting fraud claims, Anti-Vaxxers, Creationists, Flat-Earthers), the Consensus Reality will shrink to basically nothing. This might lead to some bad outcomes down the road when tribes with disjoint realities clash with each other.
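The shrinking-consensus worry is really just set intersection; a toy illustration, with the claims and filter lists invented for the example:

```python
claims = {"the holocaust happened", "holocaust denial", "vaccines work",
          "vaccine conspiracies", "the sky is blue"}

tribe_a_hides = {"holocaust denial", "vaccine conspiracies"}  # one filter setting
tribe_b_hides = {"the holocaust happened", "vaccines work"}   # the opposite setting

# Consensus Reality = claims that survive *every* tribe's filter.
consensus = claims - tribe_a_hides - tribe_b_hides
print(consensus)  # {'the sky is blue'} -- almost nothing is left
```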

Expand full comment
Nov 3, 2022·edited Nov 3, 2022

Worse than pushing people into entirely different ecosystems, not even sharing financial services, cloud infrastructure, etc?

From my own experience, the main reason I almost never interact with certain outgroups is that it's hard to pick out the reasonable exponents without going through the worst of them. If I had good tools to do so, I would use them.

Expand full comment

There is always a tradeoff between pitting people against each other in fighting matches and letting everyone construct their own reality. Is it really a bad thing if Consensus Reality shrinks to nothing, if the alternative is several realities fighting, the strongest winning and pushing the losers underground?

Expand full comment

I understand that archiveofourown.org is a decent implementation of this principle. Works are extensively tagged and readers are trusted to decide what tags they want to opt out of/into seeing.
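In code, that reader-side opt-out model is about as simple as moderation gets; a sketch with invented data:

```python
works = [
    {"title": "Work A", "tags": {"graphic violence", "angst"}},
    {"title": "Work B", "tags": {"fluff"}},
    {"title": "Work C", "tags": {"graphic violence", "fluff"}},
]

def browse(works: list[dict], excluded_tags: set[str]) -> list[str]:
    """Show only works that share no tags with the reader's exclusion list."""
    return [w["title"] for w in works if not (w["tags"] & excluded_tags)]

print(browse(works, excluded_tags={"graphic violence"}))  # ['Work B']
```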

It's text-only, which is cheap enough to host that it can run on donations as a nonprofit, by and for people who were fed up with their writing getting censored.

The moral, if this post must have one, is that the distinction between moderation and censorship is easy; not getting eaten by Moloch is hard.

Expand full comment

I held an almost absolutist anti-censorship position until I read your essay "The Toxoplasma of Rage" and short story "Sort By Controversial". The kind of multipolar traps you describe there are real and very scary, while all ad-hoc measures I can think of that prevent them from being maximally bad involve some form of censorship.

Expand full comment

Here’s a fascinating twitter thread about Moderation from the point of view of a previous Reddit CEO: https://twitter.com/yishan/status/1586955288061452289?s=20&t=sauXK7_fWVo1Wd2OMTuIGQ

Expand full comment

I'm assuming I'm not the only one to notice this, but this only works if you're comfortable hosting child pornography, revenge porn, etc..

I suspect there are at least some storage costs to holding large quantities of spam, and possibly even some room to DDoS a site that genuinely refused to delete anything.

You'd also be de-facto censoring what people can read without signing up - you don't want search engines indexing your spam and porn. Which means you're a half-inch from having some nice detailed logs on who turns off their filters, and that's pretty useful to a draconian dictatorship too. Not as nice as full censorship, but still enough to inspire some fear.

All that said, this is basically what Slashdot uses: normal moderation can merely downvote something to "hidden by default", but any user can override and browse those. Only spam and illegal content would actually get deleted.
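A simplified sketch of that Slashdot-style scheme: moderation only moves a score, each reader picks a display threshold, and only deleted (illegal) content is gone for everyone. Data and field names are invented for the example:

```python
comments = [
    {"text": "Thoughtful reply", "score": 4, "deleted": False},
    {"text": "You're all sheep!!", "score": -1, "deleted": False},
    {"text": "(illegal content)", "score": -1, "deleted": True},
]

def visible(comments: list[dict], threshold: int = 0) -> list[str]:
    """Readers set their own threshold; deletion alone removes content for everyone."""
    return [c["text"] for c in comments
            if not c["deleted"] and c["score"] >= threshold]

print(visible(comments, threshold=0))   # hides the downvoted rant by default
print(visible(comments, threshold=-5))  # any user can opt back in to see it
```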

Expand full comment

This still wouldn't work. Anyone who wants to could turn their filters off with the click of a button, and then be exposed to a deluge of (Nazism/Communism/pornography/conspiracy theories/harassment/etc.). Anyone who wants to trash the site would make a presentation about how AWFUL it looks like when the filters are turned off, and the ensuing lawsuits/advertiser boycott/normie exodus would destroy the moderation-without-censorship social network. You actually already wrote about this: the relevant dynamics are described in https://slatestarcodex.com/2019/02/22/rip-culture-war-thread/.

These dynamics are why a bunch of high-profile mainstream websites turned off their comment sections, why Reddit banned a bunch of communities that were doing their own thing separate from everyone else (to be fair, all of those subreddits were terrible), and why "moderation", as understood on the internet, always means (or at least involves) censorship.

Expand full comment

I've often thought that there should be a cost to, and a limit on, downvoting on places like Reddit. You get 5 downvotes a day. Use them wisely. If you have used them all but really want to cast another, you have to take one back from a previous downvote, chosen from a list.
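A sketch of how such a budget might work, with the numbers and retraction rule made up to match the proposal:

```python
class DownvoteBudget:
    """Each user gets a fixed number of downvotes per day; one more than that
    requires retracting a downvote cast earlier today."""

    def __init__(self, daily_limit: int = 5):
        self.daily_limit = daily_limit
        self.spent_on: list[str] = []  # post ids downvoted today

    def downvote(self, post_id: str, retract: str | None = None) -> None:
        if len(self.spent_on) >= self.daily_limit:
            if retract not in self.spent_on:
                raise ValueError("Out of downvotes: retract an earlier one first")
            self.spent_on.remove(retract)  # undo a previous downvote
        self.spent_on.append(post_id)

budget = DownvoteBudget()
for post in ["a", "b", "c", "d", "e"]:
    budget.downvote(post)
budget.downvote("f", retract="a")  # the sixth costs you an earlier one
```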

Expand full comment

typo:

and then overthrow your your society

"your" is duplicated

Expand full comment

I think this take does not properly take into account the legitimate business interests of platforms. I find it reasonable for a platform to refuse to host certain content if it thinks that content attracts people who are harmful to its userbase. I think there are two mechanisms:

The mere possibility of switching the "harmful" comments on reduces the experience of a normie user. If a young woman posts to Instagram and some guy posts a long comment about pride being sinful etc., she will not like this, even if the comment is hidden, since she'll fear other people will peek behind the curtain and see her "slandered". Even if she's the only one who can see it, she'll dread the possibility of such comments existing on her photos.

Secondly, people do not want to share spaces with very different people. If a platform hosts literal Nazis (well behind moderation curtains), I have to suspect that everybody whom I engage with, even in a normal context, could be a literal Nazi. If I know that due to tough moderation, aka censorship, these people are not likely to stay on the platform, I can assume that strangers are not Nazis.

Expand full comment

There's already a Scuttlebutt protocol https://scuttlebutt.nz/ and it even has clients implemented for various platforms. It provides a decentralized, end-to-end encrypted social network, and the servers only play the role of helping those who don't have a public IP meet, or of caching the (encrypted) messages.

It's a really well-designed protocol and I wish it were more popular.

I think this is the largest obstacle for new social networks: social networks exhibit network effects, so it is difficult to start one.

Even more so if it is decentralized and not gaining revenue, so there's nobody to promote it.

I wonder if the rationality community would be a good seed for it.

Expand full comment

Giving people more fine-grained control over content was what we tried with Google+, and it was a colossal failure. The population of people who want to do this is 1) very small and 2) does not post regularly on social media. Huge success with Linux enthusiasts, though.

Expand full comment
Nov 4, 2022·edited Nov 4, 2022

I've noticed that my opinions about content moderation now are way different from what they were before I saw how the sausage is made at Quora. I would expect that most people's opinions would also change were they to do so. There's a lot more I could say, but I don't think it would mean the same thing to someone who hasn't had that experience, so I won't (don't want to argue with a blind person about the color of my shirt), other than to say that moderation too cheap to meter would be fantastic, because at the moment it's shockingly expensive at scale.

Expand full comment

The fundamental problem with the kind of "personalized moderation" that you propose here is that, when taken to its logical conclusion, it further breaks the conversation into separate information silos in which different sides of an argument can each select a moderator such that they don't need to see or hear any opposing points of view. E.g. the red team can tune out anybody who claims that the 2020 election wasn't stolen, and the blue team can tune out anybody who promotes the "big lie". If one of the goals of free speech is to promote truth and honest discussion, then information silos are counterproductive.

Over the past few years, I personally have become much less enamored of the benefits of unrestricted "free speech", at least with respect to speech that is distributed via large corporate platforms. Back before social media and the internet, most information (especially news) was channelled through "gatekeepers" -- local newspapers, broadcast news, and the like, and those gatekeepers essentially set the rules of what information was true, or at least adhered to journalistic norms of truthiness, and was considered to be socially acceptable for public consumption.

Now that the gatekeepers have been eliminated, it is all too easy for society to splinter into competing factions, which don't even agree on a common reality. The situation has become so bad that Serious People are contemplating the end of the American republic, if the 2024 election turns into a constitutional crisis. Without gatekeepers, there is nothing to stop demagogues from using platforms to spread lies and disinformation, preying on ordinary people's irrational fears and biases in order to boost their own personal power and wealth. Sadly, the public at large has not proven to be very adept at separating fact from fiction.

I would never advocate for the kind of draconian restrictions on speech used in mainland China or Russia. However, the Chinese figured out something that I think western democracies don't properly appreciate: Controlling the flow of information is important for maintaining social stability. In the case of China, censorship is being used to crush dissent and prop up an autocratic regime, which is clearly very, very bad. However, it also works; social cohesion and patriotism in China are at levels that are unheard of in Western democracies.

I have come to the conclusion that the best way to save liberal democracy may be to re-establish gatekeepers that are committed to truth. The scientific community has standards of peer review, which are far from perfect, but are a potential model. The old-school journalism community also has (or had) such standards. In Britain, the BBC acts as a counterweight to the tabloids; it is funded by the government, but is a (mostly) independent voice that cannot be easily used for partisan political purposes. Depending on the whims of for-profit tech companies would not be my first choice, but IMO is still better than no moderation at all.

Expand full comment

I agree that these sorts of measures are a good idea and refusal to provide them is a bad sign. I'm actually a big advocate for them. But I'm skeptical they're as powerful a tool to distinguish moderation from censorship as you're proposing.

Using Twitter is already voluntary, and in a sense the Internet already implements what you want, in the form of different websites. If Twitter had an unmoderated "4chan mode" you could switch into, most people would not do so, for the same reasons more people are on Twitter rather than 4chan in the first place, and so being banished to 4chan mode would remain an effective means of punishment/censorship.

*Most* debates about censorship online concern the abuse of voluntary systems of "moderation" to reduce the spread of ideas the moderators disagree with. There's almost no online government censorship in the West (and arguably, even what government censorship there is can be opted out of easily enough using tools like TOR, albeit at some personal risk). But people are still, quite understandably, frustrated when completely voluntary systems of moderation are abused and turned against them.

The most extreme example of this is Shinigami Eyes, a browser extension that adds its own layer of moderation to the Internet, in the form of warning users (mostly trans people) if a given user is "transphobic" and should be avoided, by turning their name red. Not wanting to get called slurs is obviously reasonable, and this is the mildest, most voluntary thing imaginable. But in fact the people running it were *instantly* corrupted by this tiny shred of power and listed everyone they disagree with, including a ton of trans people in the Social Justice community they had subtle disagreements of doctrine with.

Exit rights are better than no rights at all, and a sufficiently free "market" in moderators might ultimately triumph. But it will always have to contend with good, popular teams of moderators either being taken over from the inside or corrupted by the temptation to censorship, and being hidden from any subset of users is always going to hurt.

Expand full comment

Orthodox?! On Ludicrous I had better not see any posts by anyone who isn't a sufficiently conservative rabbi to reject electricity as un-Talmudic!

Expand full comment
User was indefinitely suspended for this comment.
Expand full comment

I have always been sympathetic to the idea of having personalized filters for what you want to see, but I wouldn't use such loaded terms as "harassment" or "hate" for those filters. That's because everyone's threshold and idea of what constitutes harassment and hate is different. These are not objectively measurable entities. I am pretty sure lots of conservatives, as well as independents, feel harassed by the constant extreme name-calling, as well as outright bigotry, from left progressives, but I really doubt that if a filter for harassing posts were created, it would actually filter out the progressive speech and only allow civil parliamentary discourse to be shown.

Expand full comment

So although I have not been a faithful and reliable reader, quickly pouncing on any new post, Scott Alexander has greatly influenced my life and way of thinking with essays like "I Can Tolerate Anything Except The Outgroup." I feel that I have to say this because I don't really have a comment history here, and I have to be very critical of this essay. Hopefully I can provide a couple of 'tools of thought' to help us all understand the world better, however.

The vast majority of censorship in both China and the US today is based on very extended, but widely acknowledged reasoning that aims to prevent violations of the non-aggression principle, or aims to prevent unjust and abusive treatment, very often all rolled into one.

If you want to go on Facebook and talk about how statistically, it is likely that Trump was defrauded of his rightful election win, you are doing so believing that Trump’s political rivals will treat the people he represents unjustly, and in fact, even if you are willing to lie to bolster Trump, you will usually honestly believe that his rivals are treating the people he represents unjustly.

Please do notice that at the very same time, those people banning claims of election fraud against Trump, are doing so to prevent what they view as election theft, which would be unjust, and to prevent the violent overthrow of the US government, which would violate the non-aggression principle.

Now you are probably not going to be okay with your critiques of fill-in-the-blank-politician being formally classified as being violent threats or insurrection attempts, and you are probably not going to be okay with musings on the limits of simple “inclusion” quotas in say, Cal Tech physics, to resolve inner city social problems, or transgender participation in women’s sports being formally classified as an abusive attempt to oppress people.

If any reader is considering disagreeing with me, then imagine what happens to your career (or the career of someone you know and sympathize with who must deal with the public) when someone can describe you as “having accumulated 48,000 posts advocating sexual oppression of women, and 19,000 posts advocating racism,” and will be able to quote-mine years of posts bearing this formal certification of villainy.

The problem is, if you are not okay with having your posts classified as abuse and incitement, with all the consequences that flow from that, then there is not, in fact, any especial difference between "moderation" and "censorship." The system proposed, with lots of additional levels and degrees of condemnation, can simply be expected to alienate a lot more people, and it would create a great many more cases where the person posting would be outraged by the moderator.

Expand full comment

If you are going to understand the world as other people are experiencing it, you must take the changed meaning of accusations of violence, incitement, and injustice quite seriously as well.

The classical justification for free speech was built on an ideological foundation of a “sticks and stones” definition of harm, with the free man who is able to speak freely presumed to be a man who is economically independent, typically interacting with the economy through values-independent markets, which was common in an agrarian era. People would care if your wheat was moldy or full of dirt, but they did not care if the man who grew it was a freethinker or devout.

If the person whose “freedom” is being considered is dependent on wages or public sentiments on the other hand, as is the case for most people working in corporations, and if boycotts and advocacy can be organized on Twitter and Facebook, then the real working definition of harm is going to be very different because so much more depends on the opinions of others.

A corporate executive could command literally millions of times more resources than a yeoman farmer of the 19th century while he is in his position, but that corporate position could disappear instantly based on a vulgar drunken remark, a marital dispute, or many other social incidents that mattered far less for the yeoman.

As a matter of practical philosophy, the modern concept of psychological harm is fundamentally in conflict with “sticks and stones” freedom of expression, and while everyone pays tribute to the slogan of “free speech,” the vast majority of the population actually worries about something else, and this is a process that has been taking place for generations.

It isn't difficult to devise non-psychological reasons against the system of segregation in the US South in the first half of the 20th century, for example; even the concepts of the "War on Terror" would do very nicely, but when the Supreme Court decided to act, they were triggered by the notion that segregation made Black children feel badly about themselves.

It wasn’t as if African Americans demanded that framing of the issue, or even that they preferred it in any way, but rather, the larger trends in Western culture, with concepts coming out of Vienna and Switzerland, were preoccupied with psychological harm, and the elite white men on the court cared about that.

If you are right-leaning politically, this example might not do it for you, so consider something inflammatory in your sphere like “grooming gangs.” The concept of “grooming” is saying that otherwise harmless and even seemingly beneficial and friendly behavior and speech needs to be regarded as a threat “because it is leading to,” or because it might lead to seduction.

Notice how this concept of “grooming” is increasingly associated with the bare existence of sexual content in Young Adult books, or how Drag Queen Story Hour is regarded by a big chunk of the right-leaning population as a threat. This feeling of threat is not based on the overt hostility of the author or Story Hour reader, but is rather, built on the idea that familiarity, and the lack of threat feeling in the young audience, is itself a problem.

If you are so embedded in a right-wing milieu that you were not bothered by the notion of being designated racist or sexist, how would you like being classified as a “groomer?”

The concept of protecting people from any hint or threat of seduction is not novel in the West, but it is quite clear that the regime used to keep a young lady’s sensibilities safe, “when the men are talking” is not a strong part of the Civil Liberties tradition of the West.

All these psychological concepts are pervasive in society, and they extend to classifications that are still regarded as harmful, but which are not very politicized like being an “enabler.” A very large percentage of the population has no idea who “John Stuart Mill” is, but a great many of those people will still know who an “enabler” is, and this means that they are used to classifying a lot of seemingly harmless and overtly peaceable behavior as being ultimately harmful.

Expand full comment

But let’s not forget about the Chinese example of censorship. One crucial idea that justifies censorship in China is the notion that nothing whatever can be allowed to interfere with national unity for reasons which will affect the well being and survival of any Chinese individual.

The rapid economic growth of recent decades for example, was possible only because of mandatory joint ventures with technology transfer that Western companies would enter into because this was the only way that they would be able to access the monumentally huge and unified Chinese market. A dozen smaller Asian nations instead of one unified China might well have been able to offer a lot more Western, or national liberties, if you were powerful and wealthy enough to benefit from this, but as an average person, you would be much more likely to be dead from disease, starvation, or conflict, and your chances of having a job on the cutting edge of science or technology would be more like a person in South America or Africa than like a Chinese person today.

When they look at their Great-Grandma’s experience, it will be clear to most any Chinese person, that the Japanese, or the Western powers before, were happy to brutally kill Chinese people to oppress and exploit them, and supposedly humanitarian powers that complain about rights today were willing to force the Chinese people to buy drugs, like a criminal in the most bigoted 1980’s Republican election ad, when it was convenient for them.

Now I mention this Chinese example because as laid out, you could certainly quibble with the case, but it isn’t insane, or something that only an idiot could believe.

That same underlying logic demanding national unity and forever fearing treason is extremely common elsewhere however, even if you don’t think highly of the rationality of those people who wanted to hang the US Vice President for disloyalty a couple of years back. This line of reasoning does indeed seem to appeal especially strongly to people who are dumb and jingoistic, but it isn’t likely to disappear, here in the US, or anywhere else, even when the elderly Trump inevitably passes from the scene.

Expand full comment

One thing this taxonomy neglects is that harassment is often communication between people doing the harassing, and isn't something the subject of it needs to see at all for it to be harmful. The people libsofTikTok highlights probably don't see the Twitter comments about them, or at the very least, those are immaterial relative to the death threats in phone calls and emails those comments inspire.

Expand full comment

In federated social media (like Mastodon) there is an interesting way to choose your own moderation by choosing your server.

Expand full comment

Something that's important to note in this conversation is that Western governments whose constitutional commitments to free speech prevent them from engaging in overt, direct censorship are engaging in censorship-by-proxy: communicating to social media companies that regulatory legislation *they will really not like* may be incoming if they don't play ball and self-regulate. I believe there are already documented interactions between USGOV and social media companies about which ideas should be suppressed or removed (this is not a commentary on the actual worth of those ideas).

Expand full comment

"Or you could let users choose which fact-checking organization they trusted to flag content as 'disinformation'."

This is close to the correct long-term solution. But rather than just fact-checking, you need to unbundle the concepts of content propagation (i.e., something that the platform--Twitter, FB, TikTok, TruthSocial, whatever--provides as a basis for its business model, and the stream of content that its AI decides to push at a particular user) from moderation (i.e., something that the users, in cooperation with a service with which they contract, apply to the content stream). That doesn't necessarily prevent the content propagator from applying its own standards for its own business reasons, but it does provide a nice, simple way for the user to keep the slime at bay.
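A sketch of that unbundling, with every interface invented for illustration: the propagator produces a raw ranked stream, and a user-chosen moderation service filters it before display:

```python
from typing import Callable

def propagator_stream(user_id: str) -> list[str]:
    # Stand-in for the platform's engagement-maximizing feed.
    return ["ragebait about politics", "friend's vacation photos", "spam link"]

def with_moderation(moderator: Callable[[str], bool]) -> Callable[[str], list[str]]:
    """Compose the raw stream with a moderation service the user contracts with."""
    def feed(user_id: str) -> list[str]:
        return [post for post in propagator_stream(user_id) if moderator(post)]
    return feed

# The user subscribes to a hypothetical third-party moderation service.
calm_moderator = lambda post: "ragebait" not in post and "spam" not in post
my_feed = with_moderation(calm_moderator)
print(my_feed("me"))  # ["friend's vacation photos"]
```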

The biggest problems here are twofold:

1) The content propagator would prefer that the stream came through unvarnished, because its AI has figured out how to maximize attention, and any editing of that stream is therefore suboptimal. In addition, some important qualitative information about the user migrates downstream to the moderator, which impacts the quality of the ads targeted at the user.

2) Moderation is kind of an iffy business. It's extremely labor-intensive and the subscription fees that a user might be willing to pay are pretty limited. However, the moderator potentially has an ad revenue stream just like the content propagator.

Expand full comment

I like to think about it as in the case of product reviews, which are the most heavily censored politically neutral content you constantly encounter, showing it's possible to have bad moderation and total censorship simultaneously. You can have spam and unhelpful, irrelevant content, such as reviews from people who haven't even used the product (bad moderation), while all the reviews pointing out the flaws of the product have been deleted (total censorship). In fact you see this all the time!

Moderation and censorship are actually slightly orthogonal!

Expand full comment

Hmm, does this mean that if two of my friends want to have a fight in my personal [Discord server/Discord channel/Facebook comments/house], and I don't want them to do that because I find it annoying and I tell them to knock it off, this is censorship rather than moderation under your system, since the two of them both want to have the interaction and I'm telling them they can't do it in my space because I don't want to host it?

If not, at what point between [my house] and [Facebook] does this become censorship and bad?

Expand full comment
Nov 9, 2022·edited Nov 9, 2022

(I'm not saying I think these are equivalent, I do think they are importantly different cases, just want to poke at the framework you've laid out here)

I guess one possible relevant factor is that if my friends are having a fight in my personal space, they are effectively including me in the audience, and I don't want to hear the fight, so me telling them to knock it off is moderation on my personal behalf. (Whereas presumably nobody on Facebook intends to read everything that's on Facebook.)

But in at least some of these situations I could avoid hearing the fight by muting the channel or comment thread in question, and it still seems to me uncontroversially reasonable to tell them to stop - I guess because in some sense I have some "right" to the full use of my space without being driven out of it by behavior I find unpleasant?

What if the reason I want them to knock it off is not that I personally don't want to hear it, but that the space contains other people, and I think my relationships with those people will be damaged if the fight happens in my space? That seems kinda censorshippy, I guess? Though it still seems personal enough to be at least not a central case...

What if the space in question is the comments section of a large blog? I did see in your update post that you basically consider the relevant distinction to apply here; OTOH it also seems pretty reasonable to me to delete some comments simply because you don't want to deal with having them around (though there are better and worse ways to implement this). In any case, it does seem like an intermediate point on the continuum between [person's private channel] and [entire social media platform].

Expand full comment

Most arguments about moderation forget that you need real, actual people to moderate stuff, and that not only costs money but also imposes a lot of stress on the people who have to deal with it.

As such, flat-out banning bad actors actually has a ton of very positive effects on moderation. It turns out banning the worst offenders *greatly* decreases both the psychic load AND the workload of your moderation staff, which makes moderating a platform actually possible and frees your staff to pay closer attention to particular cases.

Moreover, by flat-out banning toxic people, you cut off the networking they would otherwise do, which would bring in even MORE toxic, awful people and make everything worse.

So it's often very valuable to just flat-out ban the worst actors, because of the major downstream effects that the load they put on your staff has on everything else you do.

Realistically speaking, your goal in running a social media platform is at minimum to keep it economically sustainable, so having it actually be possible to moderate (and having your moderation staff not burn out after six months) is very important. In fact, it's probably one of the most important considerations.

Additionally, sculpting the tone of discussion and content you want on your platform is valuable as well. If you have a platform for scientific discussion, you don't want people who are being overly political or religious coming in and shrieking at everyone and derailing everything. Conversely, if your goal is to make a conservative Christian platform, you probably don't want a bunch of barely legal pornographers on it (or maybe you do, because they need to connect with their audience :V).

The other part of this is advertising. Free platforms pretty much have to be supported by advertising, and it turns out people shrieking "Kill the Jews" on your platform makes people not want to advertise on your platform unless they're skeezy cons, whose virus-laden ads then drive off your users. Keeping your advertisers happy is a big deal, and is a big argument in favor of banning toxic people rather than merely "moderating" them.

The only time this kind of "moderating" is useful is when two groups aren't toxic on their own but are toxic when mixed, like red and black ants.

Expand full comment

Imagine, for a moment, that social media had existed in prior centuries. The heliocentric model of the universe could have been suppressed and hidden; heck, the guy walking around the desert saying we should take care of poor people and love our enemies could have had his reach suppressed because, hey, he's just a fringe lunatic, right?

Don't try to decide for others, or influence them, when it comes to the information they consume. Complete epistemic openness is the only way to learn, grow, and better both yourself and humanity.

Expand full comment

Good post decoupling two overly-coupled ideas!

Expand full comment