436 Comments
Anon S's avatar

"I think law of identity holds in all cases."

"I think it doesn't."

Good debate.

Erica Rall's avatar

Look, in order to argue with you, I need to take up a contrary position.

moonshadow's avatar

...wait, is this a five minute argument or the full half hour?

Mary Catelli's avatar

Shall we flip a coin to decide?

Zhijing Eu's avatar

OH, oh I'm sorry, but this is pontifications. You want room 12A, just along the corridor.

drosophilist's avatar

"This isn't an argument, it's a contradiction!"

Jeffrey Soreff's avatar

For anyone who hasn't seen the Monty Python skit, it is at https://www.youtube.com/watch?v=TpQlyUjp3vM :-)

Leah Libresco Sargeant's avatar

I'm pretty skeptical of the need for apps, but I think adding working memory _is_ profoundly helpful to debates. Having a big shared space to map an argument / take collaborative notes has a couple of benefits:

-lets someone move on from a point for now, confident it isn't being lost

-forces both people into a mini ITT, where you are trying to summarize together and can say, "Oh, no, that doesn't capture what I meant/what matters to me here"

-creates a sense that the disagreement is *large* and makes it feel safer to concede a point—you can both see that doesn't mean you have to instantly give in on the major question.

MoltenOak's avatar

I second all of that

Bob Nease's avatar

Great points. It also might be interesting to have some sort of structure so that people could see where and why they disagree with someone else's point of view (e.g., "Oh, I feel that outcome A is far worse than the OP does," or "I think the chance of B is much greater than they do"). In fact, it might be fun and informative to see how sources of disagreement are distributed across various factors.

Shaked Koplewitz's avatar

Also, it changes the context into a cooperative engineering project, and being open about your reasoning implies trust (both in that you're being open and therefore trustworthy, and in that you're providing an alternative explanation for why you disagree besides "this guy just hates me", especially if your arguments actually check out on the factual level). The biggest reason people fail to just Aumann away their disagreements is the assumption of hostility, so lowering the probability of that increases the ability to resolve things.

Leah Libresco Sargeant's avatar

Yes, the more you can get a collaborative mapping vibe, the better.

(only your opponent can teach you how to defeat them!)

Brandon Hendrickson's avatar

As the good book says, "In the moment when I truly understand my enemy, understand him well enough to defeat him, then in that very moment I also love him."

ragnarrahl's avatar

Ender's game is a good book, but that's the first time I've heard it referred to as THE good book.

Slowday's avatar

It is said that one day, Scott Alexander may love Alex Jones.

Dweomite's avatar

If you mean that you should signal your own willingness to cooperate to reduce the probability that the other guy falsely assumes you're hostile, then I basically agree.

But your comment could also be interpreted as saying that you should try to make yourself believe in cooperation rather than hostility, which I disagree with. You should try to make your own beliefs accurate. Bad faith actors unfortunately do exist, so you do need some defenses against being exploited.

Brandon Hendrickson's avatar

> Bad faith actors unfortunately do exist, so you do need some defenses against being exploited.

I can't disagree with the logic of this — but I'll say that, in practice, I've found this to be only 1% as much a problem as being too friendly a cooperator.

I fight with creationists on the internet (wow, I'm showing my age!), and consistently find that creationists who seem friendly really are. (The community's thought leaders, meanwhile, really are a pit of vipers. And you know why I believe that? Creationists told me!)

I'm sometimes criticized on Reddit for being too friendly to the community — I think they're worried I'll be taken advantage of — but I still haven't found this to be anything to worry about.

Dweomite's avatar

I've had a lot of arguments where someone told me I was being bad in some way, but they were also clearly emotionally invested in the opposite side of the argument, and I had to decide how much to try to change my own behavior in response to their criticism without knowing whether I was actually being bad or they were just lashing out. In a rare few cases I later got very strong evidence that they WERE just lashing out. I don't think taking all of these criticisms at face value would be a good strategy for me.

I've had a fair number of arguments where I considered stopping, actually continued because of wanting to give the benefit of the doubt, and ended up regretting it. I can't reliably say how many of these were because of bad faith and how many were because of other problems, but I think my "one last try" is calibrated to try too many times, and I waste both time and emotional energy because I don't want to accept that someone is beyond my current ability to reach.

I've also had a bunch of discussions where a person made an ambiguous statement, and I thought they probably meant something that seemed stupid or unreasonable to me, but they COULD have meant something more reasonable, and some sort of politeness instinct told me that I shouldn't imply that I thought they might believe the stupid thing. I think I made the conversation significantly more confusing and painful than it would have been if I had just acknowledged that they probably meant the unreasonable thing. Which I admit is NOT the same as bad faith, but it still contributes to my general concern about modeling the other person accurately rather than optimistically.

JamesLeng's avatar

Could try dealing with those ambiguous-statement problems by laying out the possible interpretations (including the unreasonable one) explicitly, and directly asking them to clarify what they meant - possibly with an "or some third thing that didn't even occur to me" option.

If they hold a reasonable position, but are conveying it badly, that gives them a chance to salvage the situation - and they can hardly blame you for their own mistake. If they're unreasonable on this one point but intellectually honest, it "rips the bandaid off," lets you avoid wasting more time. And if they take offense at the mere implication, well... given that their emotions are apparently such a minefield, it probably wasn't going to work out well no matter what you tried.

Paul Brinkley's avatar

Whenever I've done that in the past, I typically got a complaint that I was "'splaining" and dismissed.

Some ideologies get you coming and going.

Dweomite's avatar

I have had surprisingly bad results asking people "do you mean A or B or something else?" (across a variety of contexts, not usually in the specific circumstances I was describing earlier)

My most common result is that they completely ignore the question. They often respond to other people in the thread, or even to other things I said in the same post (if any), but they don't acknowledge this particular question at all. I don't know why.

I've also had several cases where they say some more words on the topic, and the words appear to be intended as a reply to my question, but none of the words are "A" or "not A" or "B" or "not B", and I am still left uncertain what they meant.

I do sometimes get a clear answer, but much less often than I previously expected.

I also have this feeling like most online forum conversations have a sharply limited number of plies before my interlocutor and the audience stop engaging, and that spending a whole reply just to ask someone what they meant is therefore an expensive action. Additionally, if several people are talking to them, it encourages them to focus on the people who didn't see a need to request clarification, and I risk effectively removing myself from the conversation if I don't try to continue talking without an answer. I'm unsure whether other people feel like this.

Brandon Hendrickson's avatar

This is a really helpful update (or three!) — thanks for it.

Bob Bobberson's avatar

Agreed. My working memory isn't great, but even if it were, eventually it gets unwieldy to hold an entire argument in your head while also trusting that everyone else has the same map of the argument in their own head. Also, this might be too optimistic, but I think keeping some type of records or diagram could put a limit on what kind of dirty tricks people can get away with.

It also makes it easier for a third party to understand what people are arguing about, and it helps participants remember what they were talking about years later if the need arises.

Mary Catelli's avatar

Some people object to working memory because it makes their lies less tenable. I remember once, in college, when people asked me to outline an argument: I started, and they agreed with the first point, but when I got to the second, they denied it on the exact opposite grounds.

Arbituram's avatar

Came here to evangelise whiteboards. Sure, we may have genetically engineered cyborg geniuses one day, but until then whiteboards are the next best thing for having vastly more intelligent discussions.

Also my working memory is terrible.

WeDoTheodicyInThisHouse's avatar

“I’ll get the whiteboard,” said Dawes. “Is it a magical whiteboard?” asked Turner sourly. Dawes cast him a baleful look. “All whiteboards are magical.”

--Leigh Bardugo, "Ninth House"

(It's more of a fun quote when you know that these people have access to various kinds of improbable magical objects, and Detective Turner is wrapped up in dealing with them but is still incredulous about it all.)

Brandon Hendrickson's avatar

> I think adding working memory _is_ profoundly helpful to debates

I'll add that there's another benefit to getting some system to write things down: it _limits_ what you're arguing about, and encourages you to focus on each other's claims.

User was indefinitely suspended for this comment.
Anlam Kuyusu's avatar

Thanks for pointing out that the comment in question was AI generated.

I think I'd be fine with AI generated content/slop as long as it was clearly labeled as such and the commenter didn't try to pass it off as a human comment.

Mo Nastri's avatar

I agree with the spirit of your suggestion, although I wish substack had a collapsible feature like LW does to hide away longform LLM content by default. Short ones would be fine.

Scott Alexander's avatar

User banned for this comment.

Philip Dhingra's avatar

Some prior art on working memories: Wikipedia Talk pages.

JamesLeng's avatar

Maybe wikipedia could be a useful model for the "solve debate" project more broadly - just, rather than an encyclopedia article, the shared goal is a comprehensive map of the debate. Occasionally that'll result in one side 'winning,' but even when it does happen, the best 'failed' arguments would stay up, fully visible, so anyone who comes along later, and finds they're partway into one of those, will be able to skip ahead to where it falls apart.

Something similar was depicted in Erfworld, via the Thinkamancers. https://archives.erfworld.com/Book+3/67

https://archives.erfworld.com/Book+3/139

https://archives.erfworld.com/Book%204/82

That was criticized on a metatextual level for ignoring emotional angles, https://archives.erfworld.com/Book+4/149 but I don't see any hard reason emotional aspects couldn't be included in a comprehensive debate map, where relevant.

Leah Libresco Sargeant's avatar

I think this is a great point.

WeDoTheodicyInThisHouse's avatar

This (having better arguments on the internet - and, iirc, coordinating groups of people to have better arguments too) is literally one of the things you hyper-specialized in.

A. What are some of the setups/sites/systems that you've worked with that you think have done this well? (if perhaps not necessarily at the scale being assumed (?) in the OP)

B. Might you find someone who wants to give grants for doing these kinds of projects? (like for ACX+ ... and speaking of the ACX+ grants thing, I wonder whether other people have awarded grants to these types of projects after Scott does his first pass?)

C. I am 100% guilty: I definitely submitted an ACX grant proposal once for organizing something I titled "Misfits Debate Club." Writing it up in a Google Doc or three helped clarify my thinking, so I was grateful for the excuse to do so. (Back then, though, I think that me attempting to create the thing that I imagined would have had disappointing results.)

Leah Libresco Sargeant's avatar

I have an email that boomerangs back to my inbox occasionally, reminding me to consider whether now is a good time to pitch Mercatus's Pluralism group or Tyler Cowen on funding me to do better-fights work. But, so far, I've been more invested in family and day jobs.

I have had very good experiences moderating and improving debates for ISI and others; I approach my job as moderator as being an advocate for the audience. It's my job to steer the participants toward the issues at the heart of their disagreement / get them to think live, not just recite canned points. I'm a little like an ophthalmologist, tweaking an example or intuition pump and asking "better or worse?" to help fill in the map.

I used to work for a better arguments group (Braver Angels) which was interesting but hamstrung by two main issues:

-the institution didn't agree on whether it was primarily interested in humanizing opponents or in actual truth-seeking debate, where both sides hope the other might change their mind

-too much scaling up to go broad, vs creating sustained communities of practice. Better to run ten debates on one campus than debates across ten campuses if you want to do real skill transfer.

Sokow's avatar

But how does it help me do drive-by shootings when I see a wrong and stupid take from a wrong and stupid person from the outgroup?

NoRandomWalk's avatar

That was great, thank you for this

Seth Finkelstein's avatar

I've had similar thoughts, but in practice, it doesn't seem to work. I think in many arguments, the loudest people on both sides either:

1) have a pretty good idea of the opposing view, and just hate it with a passion

2) don't care about the basis for the opposing view, and just hate it with a passion

There's nothing which can be done here "to raise the waterline".

I don't want to give any specific examples, but some should be obvious.

GrimMoar's avatar

I think in many Twitter arguments, the loudest people are just looking for views and social approval. Turning off twitter and social media is probably the first step to raising the quality of who you are talking to.

Should you care that you're tuning out the bloviators? Yes, and no. If you can reach them through a trusted friend, you're probably doing better than you would be trying to interact with Mr. Performatively Raving Lunatic.

Bartek's avatar

That can be a nice extra for two people who already want to figure out the truth, which is almost never the case.

In politics-related topics, you can come in trying to improve the debate, and your opponents will say that you are only a grifter who wants evil. But wait, you weren't an opponent, you just wanted to facilitate between the two sides? Doesn't matter: you get called an opponent and a grifter who is only pretending good nature.

Not in politics? Nothing changes. You get exactly the same thing when choosing a tech stack in companies that claim to be smart engineering teams making data-driven decisions. I say, "Wait, aren't we engineers, including data scientists, here? Let's do at least some rudimentary A/B tests before introducing X." But I was being obtuse; the decision had already been taken, and now they just wanted to show that X is good and how much money it's saving.

___

In places where people are actually doing collaborative disagreement, they are often already using the right tools. That can be as simple as writing down pros and cons, doing calculations and thought experiments, inquiring. An app or a specific framework for how to write things down is not the missing ingredient.

Timothy M.'s avatar

I agree with the general gist of this (and think it's useful information for future grant applicants) but I think it's still valid to try to build systems that at least suck less. E.g. StackOverflow didn't solve everybody's problems (and these days isn't too healthy) but it did do a lot better than Yahoo Answers. I think somebody could still potentially figure out the analogue to StackOverflow for arguing about stuff (and I still think every news aggregator who doesn't bother trying to extract a signal other than "popularity" or "engagement" for op-eds is crazy).

> But arguments rarely hinge on false facts.

Counterpoint, https://www.propublica.org/article/michigan-solar-farms-health-concerns-st-clair-county

(This is also my response to people who think AI intelligence is a couple of years away from eating the whole economy. We already have developed life-changing better technology and its adoption is being held up by the whims of the current administration, bizarre claims that it'll kill us, and lobbying from the firms of yesteryear. Eventually renewables will win, but the barrier to them blanketing the earth in the next few years is first and foremost that we can't get everybody on the same page that it would be better.)

GrimMoar's avatar

"renewables" are a false idol, they tend to cause the natural gas plants to burn more fossil fuels than the plants would if they were simply allowed to run without renewables.

(This does not hold true for hydropower. Everyone loves hydropower. Everyone loves Nuclear except the Russians, who are a motivated supplier of natural gas).

Timothy M.'s avatar

You really need to cite sources for claims like this.

Also my point was that the opposition to them in the linked article was based on nonsensical health consequences.

GrimMoar's avatar

Well, the starting point is that the grid needs to have roughly the same amount of power at all times, because we really don't have anywhere near the amount of batteries to buffer the grid.

Now, the problem with solar and wind is that they are incredibly variable. Take solar: you have zilch solar when the big light in the sky is hitting the other side of the planet, so you still need enough natural gas capacity to run 10 units even if solar covers 5 units during the day. And solar does not deliver nearly the advertised number of units, because every time you get a cloud, its output drops. Worse, this means that solar's output is unpredictable, and that's where it REALLY gets messy: natural gas runs pretty efficiently at capacity, but if you scale it to follow solar, you're ramping up and down a lot, which is inherently inefficient. (You can think of steady natural gas as getting a second burn, just like my furnace, which doesn't work if you're constantly adjusting it up and down.) And wind? A turbine's electricity output increases cubically as wind speed rises within its operational range, so unless you're in a really stable place for wind, that's supervariable too.
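A minimal sketch of that cubic relationship; the cut-in, rated, and cut-out speeds here (3, 12, and 25 m/s) are illustrative assumptions, not data for any real turbine:

```python
# Why wind output is "supervariable": within the operational range,
# power scales with the cube of wind speed, so modest speed changes
# swing output a lot. All three threshold speeds are made-up but
# typical-looking values.

def wind_power_fraction(v, cut_in=3.0, rated=12.0, cut_out=25.0):
    """Fraction of rated output at wind speed v (m/s)."""
    if v < cut_in or v > cut_out:
        return 0.0          # too slow to turn, or shut down for safety
    if v >= rated:
        return 1.0          # pitch control caps output at rated power
    # cubic ramp between cut-in and rated speed
    return (v ** 3 - cut_in ** 3) / (rated ** 3 - cut_in ** 3)

# A drop from 10 m/s to 8 m/s (20% less wind) costs far more than 20% of output:
print(wind_power_fraction(10.0))  # ~0.57
print(wind_power_fraction(8.0))   # ~0.29
```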

Timothy M.'s avatar

This (frankly somewhat condescending) explanation of your opinion is not a source.

Melvin's avatar

This feels like another place where an argument map would be helpful; that way instead of repeating a flawed version of an argument that you heard somewhere in the past, you could simply say "oh just look at cell K66 on the big Renewable Energy Is Worthwhile debate board" and I could check out the strongest version of what you just said.

I suspect the true strongest version comes out as more like "There are certain theoretical circumstances in which adding solar to the grid would actually cause you to burn more fossil fuels, just look at this maths" and the counterargument comes out as "Sure but those are very artificially chosen circumstances, here's what the maths looks like in realistic scenarios".

Anon's avatar

Isn't it also super easy to get and analyze the data? Then you can find out how much gas and coal got burned to produce how many MWh, and how that depended on how much the sun and the wind fluctuated on any particular day. This should confirm or disprove the parent comment fairly conclusively.

Melvin's avatar

I don't think it's "super easy" at all, I'm sure it would be possible but probably hours or days of work; an unreasonable amount of effort to put in to disprove a casual wimbli comment.

But if the calculations would be immortalised in a popular argument map then maybe it might be worthwhile for someone.

The Ancient Geek's avatar

Here's a quick summary from AI:

"Renewable energy generally decreases the total volume of natural gas used for electricity generation by displacing it. However, it can increase the need for gas infrastructure to provide quick-start backup capacity to balance the intermittency of wind and solar."

So "they tend to cause the natural gas plants to burn more fossil fuels than the plants would if they were simply allowed to run without renewables" is not supported.

GrimMoar's avatar

https://www.eia.gov/todayinenergy/detail.php?id=52158

Simple cycles are used for peak load (and, as shown, they are much less efficient). The more renewables you have, the more you're "peaking" (all the more so when you have clouds or unsteady wind -- this is like saying "the good spots get taken first").

Harjas Sandhu's avatar

Aren’t you completely ignoring battery storage tech? That stuff improves every day.

GrimMoar's avatar

Yes, I am. Completely Ignoring It.

https://www.latimes.com/business/story/2026-02-04/texas-grid-batteries

17 Gigawatts of battery storage on the grid.

In short, it's negligible and can't buffer anything, least of all 20% wind power.

If you want to crusade for more batteries? Go for it. I'll revise once we have enough capacity to matter.

Harjas Sandhu's avatar

let's return to this in 10-20 years then

John N-G's avatar

17 Gigawatts installed in Texas is 20% of the all-time record Texas peak load and a greater percentage of the total nameplate capacity of installed Texas wind and solar.

It already matters a great deal, at least here in Texas.

Dynme's avatar

I would take this data more seriously if I didn't know that Texas has just about the worst grid of any of the states.

JamesLeng's avatar

> we really don't have anywhere near the amount of batteries to buffer the grid.

A few years ago, that was true. Then somebody noticed that providing grid-stabilization services via batteries had vastly better profit margins than gas peaker plants, so investments poured in accordingly, and resultant mass production made batteries even cheaper, widening margins even further...

https://xcancel.com/ramez/status/2046285621115953214

https://xcancel.com/ramez/status/2049241179699741069

It's a solvable problem. By 2030 it will probably be a fully solved problem. https://caseyhandmer.wordpress.com/2023/10/19/future-of-energy-reading-list/

GrimMoar's avatar

You're citing tweets. Cited above is an article from the LA times. I don't have "future battery count" to look at, if you've got a cite, I'd love to hear it.

https://www.yesenergy.com/blog/california-battery-storage-still-rising

10 Gigawatts.

Dumbles the AI is citing 2,802 GWh as peak load for California.

That sure is a nice, pretty graph though.

I remain skeptical of Chinese claims in general, particularly as they relate to power, and will believe it maybe two or three years after it's installed and we have non-Chinese sources verifying it. (People started squealing in honest anger when I suggested that the American military wanted to verify Ukrainian claims of "good drone performance" in Ukraine by testing the drones in the Middle East. Verification is good, and when your counterpart has Very Good Reasons to Lie (as China does) and Does Lie About Easily Verifiable Things (like overfishing), I'm going to ask for more verification, not less.)

Hastings's avatar

HR wants you to find the difference between these two pictures:

A literal golden goat, covered in dollar bills with God’s name replaced with Trump

a solar panel

GrimMoar's avatar

I know which one is a national security issue. Do you? When china makes the parts for our grid, we're dependent on China for electricity. It's all well and good for the Montanans to say "you can't take away American sun and wind" (it's a good applause line) -- but the realities on the ground sing a different tale.

Hastings's avatar

You didn't say it was a national security issue, you said it was a false idol. I admit, false idol / national security issue is the most entertainingly deranged motte-and-bailey I've seen this week. So it's not a total loss.

Linch's avatar

This claim was false 10 years ago, and it's still wrong today.

https://waitbutwhy.com/2015/06/how-tesla-will-change-your-life.html

Doug S.'s avatar

More natural gas and correspondingly less of other fossil fuels, especially coal, is still a net win.

GrimMoar's avatar

In case it's not clear (given the fact that Poland's burning lignite, it may not be), I'm pulling for nuclear power (fusion if possible, but that's a futuretech), which is incompatible with solar/wind at this time.

Michael Watts's avatar

> StackOverflow didn't solve everybody's problems (and these days isn't too healthy) but it did do a lot better than Yahoo Answers.

Did it do better than IRC? My impression has been that the way you got answers to technical questions before StackOverflow was that you logged in to the IRC channel for the thing you needed help with, asked your question synchronously, and got a synchronous answer.

(I have never done this with IRC, but I do it right now with the Lean4 zulip, and it works well.)

I can see why StackOverflow was more popular - it demands much less of the person who wants to ask a question. But it's not obvious to me that it was more effective.

Pas's avatar

sometimes IRC, but ... mailing lists!

but SO is very useful, voting, syntax highlight, community wiki, comments, tags, search. and moderation. all in all a very helpful structure.

and in the last decade, thanks to gamification, we easily got more nice answers than we otherwise would have.

Paul Brinkley's avatar

It probably excelled over IRC (and mailing lists), because it was much more visible. If you were learning some new tech for the first time, and you didn't know the magic email address or server+channel, you were lost at sea. If that tech's website provided a link to it, that helped, but if you're learning some new tech, chances are you're learning twenty of them, and now you're subscribed to a dozen mailing lists and checking a dozen servers. By contrast, if you remembered "stackoverflow.com", you were done, forever.

There was also the reputation structure that crossed tech boundaries. You might be the Jon Skeet of C, but then enter a chatroom for Drupal and you were just another noob with questions no one had time for. But if your handle is right there with "92.1k" showing below it, suddenly you were a muckety-muck and someone might see merit in helping you. Or at least, not speaking to you like you were a 12-year-old just learning programming.

What's afflicting SO now is hard to put my finger on. AI is probably hitting it hard now, but even before AI, it was encouraging people to "look up error message on SO, apply top solution, repeat" which was leading to a spate of badly-designed projects, or a lot of people who _still_ weren't getting the help they needed because their problem was actually "I'm getting this error message even after applying the top 5 SO-provided solutions; what's actually going wrong?".

Also, while SO was designed to encourage more in-depth answers than "just add the following line to whatever/config", it arguably wasn't rewarding them enough. It probably couldn't in the long run; tech was expanding more and more, leaving any given tech with too few experts to make in-depth answers worth the relatively few eyeballs that would ever see them.

TGGP's avatar

> I don’t know, maybe some people with poor working memory who really hate holding an entire argument in their head might benefit from this kind of thing

I don't think limitations of working memory should be dismissed too quickly. Real people really do have problems remembering everything, which is part of why writing things down has enabled us to do so much more. Mathematicians have found mere formalism to be quite helpful, and trying to do it all in their heads wouldn't have enabled nearly as much. I think it's a problem now that broadband and the rise of video over the internet is shifting us from a written culture to a more oral culture.

I think the bigger problem is relatively little disagreement is honest. https://mason.gmu.edu/~rhanson/deceive.pdf People aren't interested in enhancing honest debate.

> They’ve been known to occasionally work

My impression is that people are dating less now than they were prior to the existence of such apps.

moonshadow's avatar

IME "real people" have an implicit prior that when an incredibly long chain of logic presented by a random on the internet leads somewhere very, very weird, it's more likely that the chain of logic is incorrect somewhere - even when the reader themselves can't immediately spot an issue (they're pretty used to not being the first to spot issues!) - than that the conclusion is actually correct and they need to rethink their entire world now. Maybe they acquire the instinct sometime in high school - perhaps in mathematics lessons, when the teacher first tells them off for just writing down the number the calculator spits out without taking a step back and asking themselves if it makes sense; perhaps it gets strengthened when reading "it all adds up to normality" on lesswrong, or, given it's 2026, every time they catch an LLM in a confident lie at the end of a chain of reasoning; and so it remains now.

This does not feel dishonest to them, even though they may have trouble articulating why they still disagree despite the clear mathematical calculations; but perhaps you can see how chains of logic in an internet debate being so long that they need technology to help them track the whole thing because they can't hold it all in their heads at once just makes the situation so, so much worse.

I think the only way for most people who start very very far apart to reach consensus is by taking things very very slowly. Trying to leap a whole ocean at once can never work. A path of stepping stones must be constructed, one at a time, never advancing to the next until the last one is thoroughly incorporated into everyone's world and all are very sure of their current footing, to avoid triggering the lizard-brain instincts that shut the door on all possibility of further communication.

In typical online discourse, neither the will to put in the effort, nor the appetite to follow along, nor the time required to make significant progress, generally exists; and this leads to all the problems we see. Key root causes here are social issues and attention being a finite resource; and technology is of little help with either of those (indeed, in today's world of fierce competition for human-eyeball-time it is usually a hindrance to both).

See also:

* https://www.lesswrong.com/w/inferential-distance

* https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/

* https://wiki.c2.com/?OnlySayThingsThatCanBeHeard

TGGP's avatar

Indeed, the longer chains of reasoning are, the more apt they are to contain errors, which is part of why having MULTIPLE such chains can help https://www.overcomingbias.com/p/ignorance-about-intuitionshtml

moonshadow's avatar

To put it more concisely than my first post: when the problem is “too many words, none of which connect to any part of the reality I inhabit”, one thing that is unlikely to help is ADDING MORE WORDS.

TGGP's avatar

If the probability of words connecting to any part of reality is small but greater than zero, then adding more independent statements increases the odds that at least one will.
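TGGP's point can be made concrete with a quick back-of-the-envelope sketch (mine, not TGGP's): assuming the statements are independent, each with the same small probability p of "connecting", the chance that at least one of n statements connects is 1 - (1 - p)^n, which grows with n.

```python
# Hypothetical illustration of the "more independent statements" point:
# if each statement independently connects to reality with probability p,
# the chance that at least one of n statements connects is 1 - (1 - p)**n.

def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(round(p_at_least_one(0.05, 1), 3))   # a single statement: 0.05
print(round(p_at_least_one(0.05, 20), 3))  # twenty statements: ~0.642
```

The independence assumption is doing all the work here; correlated errors across the statements would blunt the effect considerably.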

moonshadow's avatar

You're talking about logical proof, but actually here we are specifically discussing internet "debates" between "real people". The common communication blockage is not due to insufficient proof. The thing preventing communication is lizard-brain heuristics, trained by thousands of years of evolution to detect glib confidence men attempting gish gallops. If you have triggered such a heuristic and find communication is blocked, doing even more things that trigger the gish gallop detector harder and faster will not help unblock it. Instead of supporting your case, you are locking in the denial-of-service-attack defence mechanism.

When talking to people used to mathematical proofs, there is less need for tools to help navigate the mathematical proofs; they already know what to expect. When talking to people not so trained, though, you need to go slow and take them with you, not run ahead on your own. A tool that lets you marshal more arguments faster, so you can thump your interlocutor over the head with the entire preprepared block at once, will not help their receptiveness to your case. That is my entire thesis here.

Out of curiosity, did you follow any of the links I posted?

TGGP's avatar

I had already read the first two when they were originally published.

Hafizh Afkar Makmur's avatar

I interpret "multiple chains" as grounding it to reality after every small hop. The argument becomes more difficult, but it's necessary. It could also show participants that they actually agree 95% of the way, and that the last hop is where they differ wildly.

meeeewith4es's avatar

> That means that these apps’ target demographic - people who want to argue on the Internet, but are looking for a better way to do it - doesn’t really exist.

What about ACX Open Threads?

Alex's avatar
Apr 28 · Edited

[Drive-by potshot]

K Greenberg's avatar

Open Threads don't have any new mechanic to make debating better, they merely bring together people who are more likely to debate.

Viliam's avatar

Also a few people who just want to keep making the same arguments over and over again, creating new accounts when the old ones get banned.

Paul Brinkley's avatar

This isn't necessarily a dead end, though. Bringing together people interested in constructive debate increases the chance that they share mechanics for debate that they wouldn't be otherwise aware of.

If such people are hard to find, then a gathering place that brings them together raises this chance even higher.

Liron Shapira's avatar

Well, my strategy is to interview hundreds of thought leaders in an unusually probing way to get them to clarify their nuanced position somewhere, pit them all against me and one another, then let the audience's probabilities shift as they see fit.

darwin's avatar

Isn't that just Twitter again?

Yug Gnirob's avatar

Remember to check your car brakes.

Brandon Hendrickson's avatar

Thank you for your service!

Alex Gourley's avatar

I broadly agree and am glad you're saying as much, but not because I want all efforts to cease.

The opposite! It's incredibly important that we figure out better ways to converge in opinion. To everyone working on this: don't give up; instead, realize how difficult the task is and how creative you will have to be to find inroads.

TGGP's avatar

We have prediction markets.

Alex Gourley's avatar

The most important questions to find convergence on don't at all work with these. Questions about how to lead a good life (at the personal or cultural level) don't fit at all. Questions about existential risk are not a good fit either. You could prove me wrong if you can come up with a prediction market scenario that moves me on this very debate we're having now.

TGGP's avatar

It's possible to have prediction markets on your own life, though fewer people might be interested in betting and thus you'll likely need to subsidize it more to gather such information.

Matthias Görgens's avatar

Having prediction markets around as a mainstream concept is still useful, because they teach people the importance of operationalising their statements. So that we can check them at least in principle.

Neo's avatar
Apr 28 · Edited

Prediction markets rely on highly-specific questions and don't come close to solving all of our coordination problems. How do we convince people to trust them more? How can we implement something like a futarchy on a global scale?

Seems like there's a lot of intermediate coordination friction before they become super useful.

TGGP's avatar

Prediction markets aren't intended to solve "all" of our coordination problems. Dominant assurance contracts are intended to solve an entirely different set of them.

Neo's avatar

Agree - I (probably mistakenly) interpreted your comment as a disagreement with the need for better solutions for the more strangely shaped problems.

Matthias Görgens's avatar

Has anyone done much work in or on dominant assurance contracts in the last decade or so? They were always quite interesting, but I never saw them being implemented for anything.

(In practice, it seems just privatising an issue / giving it an owner does most of the job these contracts could hypothetically do. Eg even poor, badly run countries whose governments can't fix a pothole have shiny malls.)

TGGP's avatar

Alex Tabarrok has blogged about a few people trying to put it in practice over the past few years:

https://marginalrevolution.com/?s=%22dominant%20assurance%22

I think part of the issue is few people actually caring about public goods (related to few caring about the reforms Robin Hanson was enthusiastic about prior to writing "The Elephant in the Brain"). Alex's GMU colleague, Bryan Caplan, says he doesn't like the term "public good" because it just causes confusion based on which goods are currently publicly provided https://www.econlib.org/archives/2012/09/the_public_good.html so he prefers talking about externalities https://www.econlib.org/archives/2008/07/public_goods_an.html

Matthias Görgens's avatar

Thanks a lot! I'll check out the links.

skaladom's avatar

Oh yeah, the gamblification of life. I know they're a bit of a darling on ACX, but colour me unimpressed.

TGGP's avatar

A disagreement: that's an opportunity to engage in reasoning about it.

Gabriel's avatar

For convergence, at LessWrong events I've used a simple method: We get out a piece of paper, and when the discussion participants all agree on a statement, we write it down. The points of convergence are then a physical artifact, so we have a little evidence it works! It becomes a bit of a game, as you need to be creative and ambitious to think of statements you care about that advance the discussion and win agreement. When using this method, I find myself thinking much more about the other people's perspectives, and less about my own.

The method could use some improvement. It works best with two participants; with four or more, it's too difficult to find statements that everyone agrees on. Binary agreement is likely a key problem; maybe a better approach would be for participants to give a degree of agreement, and then maximize the minimum degree among the participants.
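Gabriel's maximin suggestion is easy to prototype. Here's a minimal sketch (the function name and example data are my own hypothetical illustration, not part of the method as described):

```python
# Maximin agreement: each participant rates each candidate statement in
# [0, 1], and the group adopts the statement whose minimum rating is highest.

def maximin_statement(ratings):
    """ratings: dict mapping statement -> list of per-participant agreement degrees."""
    return max(ratings, key=lambda s: min(ratings[s]))

example = {
    "We share goal X": [0.9, 0.8, 0.7],
    "Policy Y is best": [1.0, 0.9, 0.2],  # one strong dissenter sinks this one
}
print(maximin_statement(example))  # -> We share goal X
```

This generalizes the binary version: if ratings are restricted to 0 or 1, the maximin winner is exactly a unanimously agreed statement.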

Brandon Hendrickson's avatar

This is really cool; I'll look forward to trying this at the next ACX meetup I attend!

darwin's avatar

Getting opinions to converge isn't difficult at all, just become the dominant religion in the area and harshly punish/exile/kill any heretics.

Getting opinions to converge on *the truth* is a lot more difficult because 'the truth' is itself an underspecified concept.

Matthias Görgens's avatar

> Getting opinions to converge isn't difficult at all, just become the dominant religion in the area and harshly punish/exile/kill any heretics.

They tried that in many places. It's actually harder than you make it sound.

Neeraj Krishnan's avatar

> Doesn’t it train you to point to one specific link and shout “argumentum ad verecundium"

I merely say it under my breath, though shouting is okay too.

Kenny Easwaran's avatar

The field of philosophy has existed for a couple millennia, and various other intellectual disciplines have sprung out of it. All of these disciplines have found some ways to discipline argumentation such that it is more productive within their little community. These have generally made things better, but they certainly haven't "solved" debate.

Leibniz thought that we could find the logically ideal language and then calculate to solve all our disagreements. This led to lots of productive things over the centuries, including Fregean formalized logic and Turing machines and computers. But it still didn't solve disagreement or debate.

Every community that does have disagreements can do things better. But if you're trying to improve things in a "general public" forum, you should know that your most optimistic take is to reduce fruitless disagreement to the level you find in philosophy or theoretical physics or Marxist literary theory or epidemiology or economics or whatever.

The Ancient Geek's avatar

>All of these disciplines have found some ways to discipline argumentation such that it is more productive within their little community

Which me of them is logic. It's the more linguistic alternative to the whiteboarding discussed in the article.

Michael Watts's avatar

> the skateboarding discussed in the article

Did you mean "whiteboarding"?

Linch's avatar

"which me of them is logic" what does this mean?

Xpym's avatar
Apr 29 · Edited

>Leibniz thought that we could find the logically ideal language and then calculate to solve all our disagreements. This led to lots of productive things over the centuries, including Fregean formalized logic and Turing machines and computers. But it still didn't solve disagreement or debate.

It was proven though that no such language is possible in principle, and therefore the hope for definitively solving all disagreements is futile. Certain varieties of rationalists continue to behave as if these insights are still unavailable, which reflects poorly on them.

Bugmaster's avatar

> It was proven though that no such language is possible in principle, and therefore the hope for definitively solving all disagreements is futile.

I disagree ! :-)

QuirkyLlama's avatar

What do you think LessOnline is? It's an IRL place to have good arguments!

kyb's avatar

Yeah, it seems pretty clear to me that some systems encourage good debate and some systems encourage bad interactions that don't help anyone, and there's a range of technologies and communities that have quite dramatic effects on the kinds of interactions and how useful they are. Given that different configurations of technology can have significant benefits for good debate, the idea of trying to push forward the state of the art to improve communication and debate seems very attractive to me.

Now argument mapping might not be the right approach, but research into what has worked and what hasn't would be extremely interesting. I expect that rather than being technical it would be more around building the right sort of community and giving that community the right tools.

Tristan's avatar

Research like this exists, but I don’t think there’s much overlap between the proposals Scott is describing and that agenda.

Your point is really well made though. We know better is possible because we have all seen some platforms create worse outcomes than others.

dubious's avatar

We can solve debate with an app that tells people they are right. Provide a list of reasons that arguments to the contrary are actually a different argument: even if those arguments are right, your position is right too. People can link to these and be smugly happy in their correctness. Confident that the arguments people see aren't actually opposing them, they will no longer care or debate them. Problem solved!!

Seneca Plutarchus's avatar

Just have an AI be the arbiter.

moonshadow's avatar

Just write "you've already won the debate, now go play outside" on rocks for everyone to keep next to their computers.

Neo's avatar
Apr 28 · Edited

It sounds like the proposals you reviewed were overly ambitious, and that is the backbone of your critique. *Solving* debate implies facilitation on a massive scale, which obviously seems difficult if not borderline impossible.

But I think this article hastily discourages seeking *improvements* to debate, for which I think there are tons of low hanging fruit. And these changes could be applied within small communities (say LessWrong) to demonstrate their feasibility/efficacy, without needing to scale to a massive user base of ordinary people with vastly different priors.

I'd be curious to see some of the rejected proposals if they are published anywhere - seems like they could be quite valuable if reconsidered on a smaller scale.

Brandon Hendrickson's avatar

I, too, would be interested to see some of the rejected proposals!

Tristan's avatar

The example of r/changemyview is interesting here. It really has demonstrated an improved approach to argumentation, but it’s entirely at the level of explicit group norms, not a technological solution.

Neo's avatar

Same with LessWrong, I’d argue. Both are mostly held together by strong norms, which I think would make it easier to test UI-shaped improvements to debate (since you could assume good faith usage).

Some Guy's avatar

Out of curiosity what is your opinion of Community Notes?

John's avatar

This is a good counter-example. Probably hard to notice amid the vast epistemic wasteland of X, but Community Notes is a genuine innovation in online debate, and it is not hard to imagine productive extensions of it (e.g. instant AI-generated or AI-debated community notes).

Viliam's avatar

Also, it's a relatively new invention that people haven't yet figured out how to game.

But that's just a question of time, sadly.

Some Guy's avatar

I genuinely think their algorithm makes it very hard to game in any long term scenario as well.

Some Guy's avatar

I do see how this relates overall to the idea of: is it wise to say anything at all but not sure how it relates to the algo question. Some patterns are just very hard to fake for long periods of time because the pattern *is* honesty.

Some Guy's avatar

Yeah, maybe he classifies it differently? I don't know. It seems like a very publicly visible thing that genuinely works and genuinely feels like a public rebuke. I think AI only gets you so far, because the reason a community note carries that slap is that regular people who disagreed in the past agreed on the note.

J Mann's avatar

I think it's useful to find where two people who disagree on a conclusion agree on specific premises or sub-arguments. There might be some value in identifying areas of agreement even if it doesn't resolve the disagreement.

Otherwise, I agree with Scott. I'm reminded of the giant Bayesian calculations for lab-leak or natural origin for COVID. You can show me a whole bunch of probabilities and math, but it won't convince me.

Dust's avatar

To be fair, that's a *very* complex example you mention, where most of the evidence that's easy for lay people to understand doesn't actually move the needle, because it's equally well explained by a lab leak and by Wuhan just covering their asses for non-catastrophic but super risky and seemingly-incriminating gain-of-function research.

DamienLSS's avatar

This whole article reads to me like a strong rebuke of "just calculate the utils" consequentialist / utilitarian approaches that Scott regularly endorses.

Pas's avatar

> You can show me a whole bunch of probabilities and math, but it won't convince me.

What would? An interactive website where you can see the various datasets (the facts), how people interpret them, and the hypotheses that depend on them.

Ideally for each question one needs to see the probabilities that affect the outcome, what data was used for deriving them, and you can of course tweak them to see how your differing interpretation changes outcomes.

... usually the problem is that written natural language in general is the preferred way to tackle arguments for many people. Which is great, easy to process in some sense. Publishing papers (or blog posts) gives point-in-time snapshots of one's argument, but it scores pretty low on discoverability, composability, and consistency.

Hierarchical Bayesian models are great, but they are too technical, and require a lot of coding (as in labeling data), and stat work is very easy to fuck up.

Though hopefully LLMs soon (if not already) can help with this!

J Mann's avatar

Typically, I rely on trusted expert opinion + the level of fact that I can understand, so what did shift me quite a bit on Covid origin was Scott's conclusion based on the debate. That question is complicated by the fact that it's not super urgent that I get the answer correct, so there's not much in it for me to spend some more time.

If I trusted a website to break down the arguments and agreed facts for me, that would help. I really did appreciate the old media/wiki style of just summarizing the arguments and evidence on both sides.

Taymon A. Beal's avatar

You said you think there are tractable ways to improve internet arguments, they just don't look like this. What might they look like, then?

Bob Nease's avatar

I imagine that there are structural ways to better diagnose the source(s) of disagreement; your argument demonstrates that is the case (e.g., a utilitarian vs deontological lens, increasing the granularity of assertions to check them against evidence, the logical integrity of an argument used to support a position). But, as you note, a diagnosis doesn't necessarily produce a path toward resolution.

David J Keown's avatar

If someone's argument-mapping software conclusively rebuts this post, will you award their grant?

neqyve's avatar

This is a great piece, and I've never seen this better than in my foray into the Israel/Palestine discourse. One pro-Pal person went from saying they should drive the Jews to the sea, after the war started and they were losing, to "the cause of the conflict was that the Palestinians wanted to drive the Jews to the sea".

GrimMoar's avatar

Israel Palestine discourse would be Very Much Helped by removing all the propagandists, and, if possible, removing the blatantly transparent lies.

neqyve's avatar

Well definitely, but after that, you still have to deal with this type of stuff with no outright lies

GrimMoar's avatar

Trump caused a ceasefire in Lebanon by not sending Israel bombs to explode in Lebanon (and getting some talky people to jib-jab).

I don't think "Save the Palestinians!" or "Kill Da Bad Terrorists!" did anything to help or prevent this Very Solid, and Effective way to stop the conflict (at least temporarily). I think many people who were involved in the terminal slog of the I/P debate wars didn't think it was possible to have An American President do anything (if they did, they might have tried to influence him, rather than each other).

neqyve's avatar

The only reason Trump cared enough is because the debate wars made the entire conflict salient and unforgettable. And yes, they both literally try to influence governments across the west all the time. What are you talking about?

GrimMoar's avatar

I'm pretty sure that Trump cared because he's trying to deal with Iran, and Iran cares about their puppet governments (at least enough to say "ceasefire across all fronts!")

If the pro-Palestinians could change one law in the United States, they should change the part of the budget that gives Israel "weapons and materiel" in the event of an attack. This might be a bit "post-horse leaving the barn", but it's what allowed Israel to attack a Palestinian hospital (Israelis were openly boasting about the weapons, pre-attack).

If there was a realistic way to get this changed, one would expect the time-and-effort spent towards it to be Much Better Spent than arguing online.

... I'm not sure I've got an equivalent for the pro-Israeli side. My personal opinion, but they seem to have won (unless you want to count Team Aircraft Carrier* as the side that won the Federal Argument, which would be fair).

In Europe, you might have the pro-Israel side advocating for no more funding for Hamas (this is assuming the pro-Israel side objects to Bibi's argument).

*America's largest aircraft carrier, aka Israel...

It is not generally That Hard to get a small entry in a bill, if your idea is small And Not Objected To. (Backup cameras on all new autos, once they dropped below $500 in cost.)

neqyve's avatar

I'm not really sure here, the Pro Pal side pretty clearly talks about laws and influencing US politicians a lot, it was a big thing during the election, it's a big thing in all these Congressional and senate elections. The Track AIPAC thing is popular because of it as is various groups tracking congress.

The protests and BDS were heavily focused on getting even the governments to stop funding Israel

Ajb's avatar

These kinds of proposals - at least as caricatured above - don't really state what it would mean to succeed. However, actually increasing agreement isn't necessarily the only useful goal. For example, there have been periods in history when politicians felt under much more obligation to make coherent, detailed statements of their position than they do at present. In living memory to some extent, but even more so when western countries were basically a written culture. Even though agreement wasn't necessarily more common, if political argument had more intellectual integrity than it does today, it would be an improvement.

ꙮ;'s avatar

Perhaps "argument" is exactly the wrong context. Arguments are struggles to make someone admit something they intrinsically do not want to, usually strongarmed out of them due to some cultural precommitment to epistemic norms, the foundation for which you rightly observe is crumbling/actively being sabotaged to deny others the opportunity to force one to recant.

Maybe a stronger context for this technology are communities with very strong attachments to *both* community integrity and individual freedom. Communities that have a practiced understanding that those values require patience, good faith engagement, and a hell of a lot of time. Quaker threshing sessions and anarchist consensus building come to mind.

Alex Gourley's avatar

Agreed - the concept he's criticizing (that one could solve debates) is one option of many when it comes to figuring out what's true. And you're right that a lot of the existing methods we have require a ton of time. Is there a faster way? What I've been trying to work out for years is private practices & tools people can use to sharpen their own thinking and share back the results to the commons only when ready. This is time-efficient, and more importantly, when your social reputation isn't at risk it's easier to change your mind.

ꙮ;'s avatar

We strive for and feel disagreement healthiest and most pro-social when:

1. Opinions are held close to the chest until relevant

2. One permits others to express and retain disagreeing beliefs

3. One, with good faith, curiously investigates *why* disagreement arises

Simplicio steel manning their conversation partners can highlight non-obvious premises and deductions in a way that can produce voluntary rather than begrudging change.

It keeps rhetoric focused on convergence rather than divergence, reinforcing a shared expectation that discussion is the coevolution of community from individuality rather than the opposite.

Edrith's avatar

I agree you don't need to fund them, but I feel you're overly pessimistic here. It is possible to develop more useful techniques for argument.

For example, adversarial collaborations - which I learned of via SSC - seem to be a powerful method for getting two good-faith adversaries to agree on what they agree on and highlight the points (factual or value-based) of disagreement. I've learned a lot from each one of these I've read.

Community Notes also seem a useful innovation in this area. And I'm sure others are possible, given both of these were developed recently.

Brandon Hendrickson's avatar

Re: adversarial collaborations: great point! I, for one, would love to see those brought back for a spell, in lieu of the book review contest. Or, possibly, open the genre of "adversarial collaboration" up to serve as a challenge to see who can innovate the best new way to disagree.

Meir Brooks's avatar

Dialogue is one of the things I care most deeply about. I think the absolutism here, especially in point #2, is misplaced (I don't much care for formalizing approaches).

I can say from personal experience that I, and others I've encountered, have been convinced either by facts or by arguments presented to us in a discussion. That doesn't make me or them representative, but it does mean something is possible, and this in an environment where no real platform is dedicated to such arguments.

Just yesterday I heard a podcast episode with Daryl Davis, a black musician who claims to have caused many KKK members to abandon the group just by meeting and talking to them (and he mentions laying out specific arguments, not just hanging out together). The story as I heard it was a little too neat, so I'm a bit skeptical of all the details, but I can't completely rule out the story because I've read about so many examples that are just as unbelievable. Joseph E Lowery, a giant of the civil rights movement, talks of confronting George Wallace-- arguably the #1 face of segregation at the time-- and changing his views or at least his rhetoric on race relations. I don't think Lowery had much of an incentive to give Wallace more credit than was deserved on that. There are reams of stories of Palestinians and Israelis changing their views upon just encountering one another, on the level of "I wanted to murder Israelis to avenge my brother but then I met some and now I lead a peace NGO." There are a lot of questions as to what the conditions are in which this works, and yes it's entirely possible that it's more about literally encountering the other human being than anything they say. But I'd say the jury is out on that. And given that I've experienced and witnessed minds changing due to arguments, I'd say arguments-- which can also humanize as well as inform-- are extremely underrated as a mode of progress today.

And of course, people change their views all the time in the other direction. Anti-vaxxers will often point to a particular book, or post, or data point that made them question their opinions on vaccines. It definitely isn't all confirmation bias, otherwise we wouldn't see the massive swings in opinions that we see on these and other topics.

Incredibly, there's a stigma against talking to people of different views. I get this all the time, and it baffles me, given how bad echo chambers are for all of us. But people will look at you funny if you say that you're trying to understand why certain bad ideas persuade people by listening to them. That stigma is itself suggestive to me that this approach is severely undertested and underrated.

Gregorian Chant's avatar

Yes even intellectual views are often socially based. Meet the right people and they can change quite considerably.

Divine Ghost's avatar

My intuition is that it's much more about challenging their worldview with your person than beating them with facts and logic. If you're stuck in a headspace where all X are evil/terrorists/lazy/stupid/criminal etc., just meeting and somewhat befriending a person from said group who isn't remotely like that can shatter your ideas on an emotional level, which is infinitely more effective than attacking at the intellectual level for most.

GrimMoar's avatar

Al-Jazeera did a lot of good for people in the Middle East, in terms of humanizing the Israelis, in that They Actually Gave Them Airtime. Now, folks might have wanted to say "their views are deplorable" -- but at least they're people, not complete caricatures (and, importantly, a lot of them look pretty much like their Arab neighbors).

Kveldred's avatar

This seems possibly related to a project / (set of) idea(s) I've been meaning to write up! I have approached it from a different direction—viz., "how do I make sure I either win all arguments I'm in, or at least come out with pride intact?" (a very important concern, I think we can all agree)—but I think the ideas therein might be applicable to the "solve debate" concept, too.

If I had to choose the single most important principle, it would maybe be that of finding a (double, ideally) crux: is there a particular claim that, if demonstrated to be true/false, would make your opponent change their mind?¹

Well, as Scott points out: not usually—they'll happily concede any particular item, if pressed, and then go on to have the same opinion as before. But! we can trick 'em, 𝘵𝘩𝘦 𝘧𝘰𝘰𝘭𝘴! You see, most people have a few main rationales for a particular position: you say "lockdowns were bad", and they'll jump in to say "lockdowns are good, because studies show that they prevented many deaths", or whatever...

...and now you have an opportunity: you can get them to commit to the idea that this is important to their belief. "Well, what if we found that they prevented only X deaths, on average? Would that change your mind?"

Often, they'll be like "Sure, but we won't find that out lmao". If not, then you can press them on what 𝘸𝘰𝘶𝘭𝘥 change their mind: now they must either admit that they're unreasonable people who hold to an article of faith while pretending that actually they're like trusting the science and stuff, 𝘰𝘳 come up with a particular claim or set of claims that can then be addressed.

Granted, this is maybe less useful for "solving 𝘥𝘪𝘴𝘢𝘨𝘳𝘦𝘦𝘮𝘦𝘯𝘵"—actually finding the truth (rather than just looking good in a debate / figuring out 𝘸𝘩𝘦𝘳𝘦𝘢𝘵 disagreement lies) is more difficult.

("But can we turn it into an app?" Good question. I give you—the intelligent & good-looking reader—permission to try, so long as you give me some money at some point when you make it big. I'll even throw in a name suggestion: "MotherCruxxr". No charge for that one!)

Of course, there are many further techniques to be found in Kvel's Patented Debate-Amaze® Bag o' Tricks™—e.g., as Scott also mentioned in his recent post on writing: being honest about what you don't know makes it a lot easier (since then you're not committed to defending anything indefensible & don't look foolish if you end up conceding upon some point or another); or the ol' Laser-Focus Principle™, violations of which are responsible for most argument-losses I see IRL; vel sim.—but the more I think about this the more I begin to fear that maybe it's not as revolutionary as I thought, so... uh... I'll finish it later, aight.²

--------------------------

¹(even better if it'd do the same for you, too, as mentioned; but we'll proceed upon the assumption that 𝘸𝘦'𝘳𝘦 right about basically everything.)

²(I feel it important to note, here, that donations will assuredly speed this vital work.)

vectro's avatar

I think the framing here is implicitly “I am (always) right and others are wrong”, which to me is a bit toxic.

Kveldred's avatar

I don't know what could have made you come to this concl–

>¹(𝘦𝘷𝘦𝘯 𝘣𝘦𝘵𝘵𝘦𝘳 𝘪𝘧 [the crux would] 𝘥𝘰 𝘵𝘩𝘦 𝘴𝘢𝘮𝘦 𝘧𝘰𝘳 𝘺𝘰𝘶, 𝘵𝘰𝘰, 𝘢𝘴 𝘮𝘦𝘯𝘵𝘪𝘰𝘯𝘦𝘥; 𝘣𝘶𝘵 𝘸𝘦'𝘭𝘭 𝘱𝘳𝘰𝘤𝘦𝘦𝘥 𝘶𝘱𝘰𝘯 𝘵𝘩𝘦 𝘢𝘴𝘴𝘶𝘮𝘱𝘵𝘪𝘰𝘯 𝘵𝘩𝘢𝘵 we're 𝘳𝘪𝘨𝘩𝘵 𝘢𝘣𝘰𝘶𝘵 𝘣𝘢𝘴𝘪𝘤𝘢𝘭𝘭𝘺 𝘦𝘷𝘦𝘳𝘺𝘵𝘩𝘪𝘯𝘨.)<

–ah, right.

Well, such has been my experience, I'm afraid. A heavy burden to bear—and one familiar, I am sure, to many of us here at ACX—but the only other option is to start being wrong, and I just... I just 𝘤𝘢𝘯'𝘵, you know?...

DrManhattan16's avatar

It takes scarcely any effort on this path to trigger a person's defensive instincts. One man's questions are another man's witch hunt. From their perspective, any questions you ask are about trying to trap them.

The honest answer to "What would it take to change your mind?" for a lot of people is "See people I trust say that I'm wrong." Obviously, that stops the process in its tracks, because now your goal is to convince the influencers, thought leaders, etc.

Erica Rall's avatar

It has occurred to me in the past that a lot of informal fallacies are rhetorically powerful specifically because they resemble lines of argument that are valid, or at least useful heuristics. You can even pair up fallacies that are opposite failure modes:

Tin Man opposes No True Scotsman

Appeal to Authority opposes Ultracrepidarian

Tone Policing opposes Proof by Intimidation

Goomba Fallacy opposes the Fallacy of Association

Appeal to Tradition opposes Appeal to Novelty

Michael Watts's avatar

This extends beyond argument techniques. One of the items on Donald Brown's list of "human universals" was "mutually contradictory clichés". Compare "he who hesitates is lost" and "good things come to those who wait".

There was an SSC post about the idea that different people need different advice. Same idea. Different circumstances can also call for different advice, or different arguments.

TonyZa's avatar

Debates work when the goal is clear. For example, the judicial system is a sprawling, highly regulated debate club, but, contrary to TV clichés, arguments are usually very short and on topic, especially in the case of a bench trial, as judges have little patience for exercises in rhetoric.

In theory, academic theses are supposed to be part of a debate, but the process is a formality.

What is the point of internet debates?

Gamereg's avatar

Getting a pulse on public opinion, understanding the opposition, planting seeds that at least have the possibility of germinating an understanding.

hwold's avatar

> This hasn’t worked in two thousand years of arguing: Most dating apps are doomed. But one reason for optimism about dating apps is that people have made them before. They’ve been known to occasionally work. And they’re an extension of things like matchmaking resumes and classified ads which have worked for centuries. What’s the argumentative equivalent?

Essentially free speech, liberalism, "you have to argue rather than bully/use raw power to be taken seriously" ?

Gamereg's avatar

Good point. I think it's like a complicated machine that takes a lot of effort and know-how to build, and it can still be broken, but when it works, it does the job better than anything else. Liberal democracies are hard to get right, and are relatively rare in human history, but when they work, they work quite well.

tempo's avatar

Ad hominem is a logical fallacy, but is it a Bayesian one?

Adam Ever-Hadani's avatar

Largely agreed.

- Re the "taking potshots on the internet" - I think that is exactly right and gets at an interesting cognitive bias I've observed in my own short-lived career posting on X - we get a strong *emotional* reward from chastising, taking potshots, lashing out, etc. As soon as there is a sense of a meeting of the minds with a would-be adversarial party, the whole thing becomes much less exciting and interesting and you move on. The "anonymous public square" format seems to reward flaming emotionally (as well as financially, for a whole class of grifters who realized you can sit at the most inflammatory friction points and just make money as a sort of public-opinion market maker, trading on volatility and eliciting responses and attention - e.g. various extreme takes on Palestine-Israel from different angles, various "manosphere" types degrading women, etc.)

- As you hint at in the article, even with an elaborate logical argumentation framework, there is a whole layer of estimation and prediction that sits on top of the cold facts and the basic "Nth-order logic," which can never be fully codified or "observed," even if we try to throw fancy Bayesian and causal-inference hammers at it. This ranges from soft/qualitative issues with a strong subjective component - e.g. issues of morality (is a particular injustice commensurable with one done in return?) - to the more quantitative, e.g. the credit-assignment problem: how much "at fault" is one factor versus another in a complex system dynamic, which most large-scale economic/world events are, where cause and effect often lag, all actors work with partial information, and so on.

Patrick Ryan's avatar

There are related approaches that shouldn’t be written off. Consider an AI moderator that encourages “productive conversation behaviors” and discourages the opposite.

Sorting and reach are driven in part by how productive the response is and by the user's reputation.

Just an example but you can moderate behavior instead of outputs. Great moderation has historically been a form of this.

CTD's avatar

"Arguments rarely hinge on one person being simply wrong and stupid" - there are arguments like this going on all the time, everywhere! They don't show up much in punditland because they get resolved, or someone at a company hires a consulting firm to write a report that says they're right, or a family agrees to stop yelling about it and carve the turkey.

Most of those arguments would benefit from little circles and arrows.

The ones like lab leak or abortion or shrimp rights usually involve parties with disjoint interests ("they have this many neurons!" v "they are delicious when fried in batter!"), and/or some connection to the ongoing culture war. But those would also benefit from little circles, because I see many tributaries of those arguments where the circles would accelerate a retreat to the motte.

GrimMoar's avatar

Abortion is an argument on the margins. Most people agree that abortions are bad, and we should minimize them.

People disagree about particular types of birth control, about "how acceptable a particular abortion is," about the economics whereby certain sex-seeking people do not use condoms because that would lower their desirability to the point that they would lack for partners, about "what should we do when someone does have an abortion"... But these are edge cases, and they make it easy to have Big Views instead of Little Compromises.

CTD's avatar

Good point! Another feature of these big arguments is that they are actually multiple unresolved arguments at once. For abortion: how alive is a fetus at various times, how should we weigh its fate against the fate of the parents, to what extent should the government subsidize things, to what extent should the government interfere in people's lives, what is the proper division of responsibilities and powers between the states and the federal government, etc.

Underspecified's avatar

This is an excellent post. My reply is probably going to be terrible, because now I want to vent about all of the frustrating internet arguments I've ever seen.

Starting with the red button / blue button fiasco: I don't even have a real position on whether red or blue is better, but the problem is a rat's nest of weird bullshit that might undermine the utility of any given heuristic, and the altruism heuristic is not exempt from that. Altruism is not always good! Many red voters are using a personal responsibility heuristic exactly because it's possible to make the world worse when you feel an unbounded obligation to save other people from themselves! Everyone is necessarily working from imperfect heuristics. Personal attacks against red voters as a category just make you a dick, same as personal attacks against blue voters as a category.

Also, tariffs: We have laws in the United States that sometimes make it harder to compete in international commerce. I can understand a free trade idealist who wants to repeal those laws, and then eliminate tariffs. I can also understand someone who supports those laws, if they also support using tariffs so that Americans can realistically find employment in the industries where they have a comparative advantage as individuals. But I can't understand broad opposition to tariffs under the laws we have now, because none of the arguments I've seen about e.g. economic efficiency actually address my concerns. And I also don't think Trump has implemented tariff policies that make any sense, so his supporters aren't any better at addressing my concerns.

I'm going to stop there because this comment is already too long, but I could probably write another five paragraphs about gun control, if I thought it would do any good.

GrimMoar's avatar

You could start by advancing the argument that Trump's tariff policies aren't intended to make sense, and asking what he gains by doing that. In particular, you might want to pay attention to expectations regarding the stock market, in terms of deliberately injected uncertainty.

skaladom's avatar

I've been finding Claude and ChatGPT pretty good for arguing. I keep seeing arguments made on substack, and I've gotten some interesting lessons here and there by posting my take as a reply, and engaging with the pushback if someone shows up to engage.

Chatbots actually do a decent job these days, you just open a conversation, summarize ideas you've seen, then your fresh insights, and ask for critical analysis. A few rounds of conversation and you can see which part of what you're saying has legs, if any, and what they may connect to.

Reed's avatar
Apr 28 · Edited

I think the maps might not be a very effective way of arguing, but I think they are effective ways to represent an argument.

I remember being confused about the SCOTUS birthright citizenship case, because I felt like most MAGA-leaning outlets just weren't covering it/taking it seriously, and the rest of the outlets (liberal and conservative) were saying that it's unconstitutional on its face.

As it turns out, there *is* a constitutional argument for it, hinging on what "jurisdiction" means, that isn't trivial. Now, I don't think the EO to revoke birthright citizenship was based on this argument. I'm quite sure it's post-hoc, and it has not changed my mind. But it has changed how I think about the case, and how I think about people who might support it.

There are all kinds of issues where I just don't understand what the opposing argument could possibly be, or if I do, I don't understand how it would handle a particular objection. I think some kind of argument wiki that maps out the various premises/conclusions would be helpful to me at least, just as a reference.

LLMs have made this much better, these days I ask them for an outline of the debate for things like this. I think it could still improve, though.

Arizona Nate's avatar

That argument, fwiw, isn't entirely post-hoc. It's been very much a minority report among legal scholars, but it's been out there. See e.g. https://www.newyorker.com/news/the-lede/the-liberal-scholars-who-influenced-trumps-attack-on-birthright-citizenship

GrimMoar's avatar

Interesting, although I can't read the link. Epps is being cited in the article I found that wasn't paywalled:

https://newrepublic.com/article/202505/law-professors-bogus-birthright-citizenship

Jackson Hurley's avatar

Wikipedia is the argumentative equivalent of the successful dating app.

The problem with many such efforts is that the map is the wrong form factor. No one is going to agree to one org’s platonic map of the discourse around X.

For any given large disagreement (COVID lockdowns, AI risk, etc.), the set of arguments for and against is usually 1) functionally finite, in that a handful of arguments and data points comprise >90% of the discourse, but 2) too large to be conveniently written down or consulted, especially in a pleasant UI.

I also think that mapping a particular debate is not nearly ambitious enough, and we can do far better now that we have LLMs capable of doing epistemic grunt work at scale.

Proposal:

We should instead collect all the claims that come up on the internet into a deduplicated vector database and assign each claim an LLM as its steward. Each claim should have a Wikipedia-like summary of the main evidence and arguments in the discourse around it, with examples and links to other claims. Lower-level factual claims should be evaluated for accuracy. Using Wikipedia-like principles of openness to correction, evidence, and argument, while keeping Claude or ChatGPT with the right system prompt in control of the canonical form of each claim's page, it should be possible to pretty much map out all the relevant claims in the discourse and do at least as well as deep research on all of them, for ~a few million dollars in API credits, then serve the results to users in various forms built on top of an API for "what's the status of X claim." Forethought (no affiliation) has done some design sketches in this area: https://www.forethought.org/research/design-sketches-collective-epistemics#design-sketch-1. Variations on Community Notes could be another application.
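The "deduplicated vector database" step of this proposal can be sketched in a few lines. This is a toy illustration only: the bag-of-words "embedding," the 0.6 similarity threshold, and the `ClaimStore` class are all stand-ins I made up, not part of the proposal; a real system would use a proper embedding model and an actual vector database.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in embedding: lowercase bag-of-words counts (a real system
    # would use a neural embedding model here).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ClaimStore:
    """Toy deduplicated claim store: each canonical claim keeps the
    paraphrases that resolved to it."""
    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.claims = []  # list of (canonical_text, vector, paraphrases)

    def add(self, text):
        v = embed(text)
        for canon, cv, paras in self.claims:
            if cosine(v, cv) >= self.threshold:
                paras.append(text)  # near-duplicate: attach to canonical claim
                return canon
        self.claims.append((text, v, []))  # novel claim becomes canonical
        return text

store = ClaimStore()
store.add("The boiling point of water at sea level is 100 C")
canon = store.add("water boils at 100 degrees C at sea level")
# The paraphrase resolves to the existing canonical claim rather than
# creating a second entry.
print(len(store.claims), "|", canon)
```

The same lookup path would serve the "what's the status of X claim" API: resolve the incoming text to its canonical claim, then return that claim's steward-maintained page.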

AI labs should want this as an alternative to redundant search as a grounding strategy. The main issue with LLM epistemics (and these days, therefore human epistemics) is less that models are bad at reasoning than that claims on the internet are poorly sourced and no one is “doing the work” to investigate them. The incentives for everyone, including LLMs, are to find a minimally credible source confirming their prior and defer.

Max Hniebergall's avatar

I actually have a working prototype of the system you proposed, would love to chat

Jackson Hurley's avatar

DMed. Anyone else interested in doing this?

Bugmaster's avatar

> while keeping Claude or ChatGPT with the right system prompt in control of the canonical form of each claim's page

I don't understand what this means -- what is the "canonical form" of the claim ? Does this merely refer to syntax ? What prevents Claude from generating a perfectly syntactically valid page that states e.g. "water boils at 100 degrees Fahrenheit" ?

Furthermore, what do you mean by "deep research" ? To use a cartoonish example, imagine that I'm a Flat Earther. You present me a hierarchical tree of claims supporting the round Earth, including pictures of Earth from space. I reject them all on the assumption that all those pictures were taken by NASA astronauts, and NASA is the government agency tasked with (among other things) promulgating the round-earth myth. Now what ?

Jackson Hurley's avatar

The canonical form of a claim is just one clear, straightforward statement of the claim, that paraphrases and restatements resolve to, in the same way that the title of a Wikipedia page is the "canonical form" of the concept that the Wikipedia page is about. Then the claim's page can have arguments and examples and links to other claims.

There is no bright line between what does and doesn't belong on a claim's page, and in what form, just as there is no bright line keeping a Wikipedia page from becoming infinitely long, but similar principles apply, and I expect that each claim's page could be roughly similar to what you'd get if you went to Claude today and said "What is the epistemic status of the claim that X?".

The question "when does a claim deserve to be in the database" is an interesting one. Wikipedia again provides some guidance here -- not everything existing or hypothetically existing in the universe is or can be a Wikipedia page, even though the set of things that could be a Wikipedia page is not strictly finite or bounded. One can be more liberal in admitting claims than Wikipedia is in admitting new articles without devolving into complete all-conceivable-claims-get-a-page anarchy.

I think claims that nobody is actually making or relying upon probably should not be in the database. For that reason, I would think that "water boils at 100 degrees Fahrenheit" would not be included, or would be subsumed into the canonical form of the claim that covers discourse on the boiling point of water, which might be something like "The boiling point of water at sea level is 212 degrees Fahrenheit, (100 degrees C)." If there were a bunch of people actually claiming that the boiling point were 100 F, then maybe there would be a page about that claim, quickly pointing out that it's not so and likely confused.

I definitely don't think a claim being obvious nonsense is a reason not to include it. The claims of Flat Earthers should be in there, with evidence and arguments given a fair hearing, but ultimately rejected, which is pretty much what you would get if you ask Claude today "What is the status of the claim that the Earth is flat, including arguments on each side?"

On the question of what to do with unbelievers? Be disappointed and take the L as a liberal must. That one cannot make the horse drink does not defeat the case for better water infrastructure. He that hath ears to hear, let him hear.

LLMs already matter a great deal for epistemics. Already, many writers and people in positions of power are talking to them all day long, and they will only continue to grow in their influence. Getting LLMs to care about sourcing truth and giving them tools to make finding the truth just as easy as bullshitting, is an epistemic intervention on the level of fact-checking all published media, or the institution of peer-review.

Bugmaster's avatar

> The claims of Flat Earthers should be in there, with evidence and arguments given a fair hearing, but ultimately rejected...

But that's exactly the problem you're trying to solve, isn't it ? Who determines whether the claim is "obvious nonsense" that should be "ultimately rejected" ? If you could provide an answer to this that satisfies everyone, then the problem would already be solved ! You seem to be saying that we should treat LLMs as a sort of neutral oracles, and some people might agree, but most won't. LLMs are well known for their biases, both political and sycophantic. If I say that the Earth is flat, and the LLM judges my claim to be false, I am not going to say "oops I guess I was wrong"; rather, I'd say, "of course that's what I'd expect an LLM trained by a megacorporation on a corpus of lies to say, duh".

You could retort by saying "just take the L, idiots will be idiots", but consider: are you so certain that the LLM would back up all of *your* claims ? If and when it doesn't, what will you do ?

Jackson Hurley's avatar

LLMs are not neutral oracles, but I believe that they can play the role of neutral arbiters as well or better than any person or collection of persons, if they approach the task of presenting the best arguments surrounding each claim with the right principles.

In many cases, where reasonable, informed people disagree, rendering a decisive verdict will not be appropriate, though sometimes that is the right thing to do, just as good judges do not prejudge the case but are not required to entertain arbitrarily frivolous motions.

And when I see my beliefs quickly dismissed, I will either update in the direction of the LLM's best read of the evidence, or I will bash the contribute API and make my case until either the correct view is represented or I give up!

Bugmaster's avatar

> if they approach the task of presenting the best arguments surrounding each claim with the right principles.

But again, how do you evaluate that ? What are the "best" arguments, and what are the "right" principles ? In some cases the answer is easy: if I say that 2+2=5 then my argument is bad. But debates rarely hinge on basic math; rather, they hinge on value judgements like "we should value personal freedom more than social stability" or epistemological challenges like "can you trust any of the IPCC climate data or is it all fabricated". And I don't think you can answer those questions by appealing to the LLM, because the LLM's answer depends primarily (if not entirely) on who trained it.

> And when I see my beliefs quickly dismissed, I will either update in the direction of the LLMs best read of the evidence, or I will bash the contribute api and make my case until either the correct view is represented or I give up!

I know you're kind of joking, but still, I think this is what most people would do in all seriousness, i.e. they will absolutely trust the LLM when it agrees with them, and ignore it when it does not -- just as they do with human opponents.

Jackson Hurley's avatar

The right principles are roughly those of the virtuous arbitrator, librarian, assessor, or Wikipedia admin.

The best arguments are the best arguments. There may be more than one.

When arguments are downstream of larger normative claims over which reasonable people disagree, that can be made clear.

When they are downstream of other claims, such as claims that the IPCC’s data is fake, those claims can also be investigated and assessed.

I reject the notion that the LLM’s answer is dependent primarily on who trained it. I believe that is less true than the claim that the outcome of a case is based on the evidence presented and the law, rather than who picked the judge and the jury; sometimes the cynical read is meaningful, but it’s mostly not true, especially when, as often happens, the judge and jury feel a duty to uphold the principles of justice, truth, and fairness.

I don’t believe this is a foolproof plan to find the truth in all cases right away, nor to settle once and for all every dispute, nor to somehow compel everyone to agree upon the truth. I do think it could improve upon the status quo, especially for people and machines that care about getting things right. Currently, it is somewhat hard to answer the question, is this thing I’m reading/writing reasonably supported by the available evidence? It could be right there on the screen.

Bugmaster's avatar

A piece of serendipitous news:

https://openai.com/index/where-the-goblins-came-from/

This is not a good piece of news for the "neutral oracle" role of an LLM.

Gordon Tremeshko's avatar

I'd be interested to see some of these grant applications where people think they have a legit plan to transform internet flame wars into Oxford Union style debates. I'm having trouble even conceiving how that might come to be.

GrimMoar's avatar

I don't think technology is the answer. "Don't be a Nazi" is the answer: a community rule encouraging good faith and acknowledging that new users of the community are going to be hypersensitive, and that you should welcome them with a smile and good-faith efforts.

One of the best commenters (who I nearly never agreed with) once said to a guy, "You seem more than usually upset, how's your day been?" -- these sorts of reality checks can get people to realize when they're letting "non-discourse" leak through (or even when they're being intemperate in general).

You start small, sure, but you act friendly, and you Assume Everyone Else Will Too -- assume it's not you, if someone starts getting shouty, and remain friendly.

Gordon Tremeshko's avatar

I dunno, a lot of those "you sound upset" type comments are actually condescension or dismissal under the guise of concern. Passive-aggressive flim-flammery, in other words, and the internet already has enough passive-aggressive talk as it is.

I like Julia Galef's framing of the scout vs soldier mindset. The scout seeks to gather information that could be of future use; the soldier, by contrast, is either attacking or defending. I would say try to stick to the scout mindset and try to engage only with others who are doing the same.

GrimMoar's avatar

It depends on the person. Frequent use of this is a probable sign that they're overusing it to the point of condescension, sure.

Scout, sure, but the better term is spy. You should try to be an intelligence gathering operative, and when you encounter someone with radically different starting points, it's normal to ask "where'd you get that".

Spies also generally don't advertise their own views, preferring to guide without having to fly "red/blue" flags.

Gordon Tremeshko's avatar

Spies are dishonest by nature, though. Yeah, they don’t advertise their views, but if you ask them specifically, they’ll lie through their teeth. Definitely not a mentality I would recommend.

GrimMoar's avatar

Lying is work, and spies don't tend to make work for themselves if they don't need to. If he can be honest, a spy is very likely to do so. More likely, though, is not sharing one's own opinion. You'd be surprised how much people will reveal when you get them talking.

Gordon Tremeshko's avatar

James Bond over here.

DM's avatar
Apr 28 · Edited

Rules -- for arguing, for playing games, for cooking, for whatever -- can be great: When in circumstances C, do action A!

But in order to follow a rule, you need to know how to interpret it (what counts as C? what counts as A?) and when's the appropriate time to apply it? and are there some other rules that might supersede it? and are there ceteris paribus clauses (and if so, when do they apply)? etc.

Are there more rules to determine the answers to these? If so, the same thing will apply to those rules. At some point there needs to be some kind of way to go on that doesn't appeal to more rule following. (This general idea I got from classes on Wittgenstein in grad school.)

For a short time I was a lifeguard at the local pool. I yelled at a kid for running and I pointed to the sign next to him that said "no running". He said, "Oh, I thought that meant no runny noses in the pool." I imagined putting up a new sign next to it saying, "And by ‘running’ we mean rapidly propelling yourself on foot in a manner where both feet periodically leave the ground." But then I imagined seeing him running again and when I pointed to the new sign, he would say something like, "oh, by 'ground', I thought you meant, like, dirt. This is all concrete..."

GrimMoar's avatar

There are people who complain about "put a cup of water into the chicken" (because they put in a plastic cup, and it melted). Legal got the instructions changed, and now it says "pour a cup of water into the chicken."

bell_of_a_tower's avatar

> For a short time I was a lifeguard at the local pool. I yelled at a kid for running and I pointed to the sign next to him that said "no running". He said, "Oh, I thought that meant no runny noses in the pool." I imagined putting up a new sign next to it saying, "And by ‘running’ we mean rapidly propelling yourself on foot in a manner where both feet periodically leave the ground." But then I imagined seeing him running again and when I pointed to the new sign, he would say something like, "oh, by 'ground', I thought you meant, like, dirt. This is all concrete..."

Yeah. Very much this. I'm active in D&D, including talking about it online. And that's a classic failure mode--no set of rules, no matter how detailed and "bullet proof", will stop someone from doing bad-faith or motivated reading/reasoning. Rules are just words. And words are not self-enforcing. Nor are they something that can be blamed or credited.

Saw the same patterns when I was a teacher. Rules, by themselves, don't matter. Having someone that is willing to enforce those rules even when it causes an outcry is what matters.

DM's avatar
Apr 28 · Edited

For years I taught critical thinking and logic classes, and even if some students got A's and could rattle off the list of fallacies and could tell me about the difference between deduction and induction and abduction, and knew their modus ponens from their modus tollens, etc., I never really got the impression that taking these classes made them better reasoners. It felt to me like the thing that makes the difference between a good reasoner/arguer and a bad one had more to do with their general attitude in life (a kind of openness, curiosity) and maybe their emotional stability or emotional intelligence or something (are they okay being wrong? are they able to not take it personally? do they not have a chip on their shoulder? can they tell if their interlocutor is getting angry and not in a good truth-seeking space? are they able to help the other person feel not threatened? etc.)

Paul Brinkley's avatar

I wonder if I'm the one you got the argument map idea from. It's an idea I often bring up, and it looks a lot like your description. (Probably not, given your last paragraph, but now I wonder if whoever gave you that idea got it in some part from me.)

It differs in multiple ways. In mine, the edges don't denote implication (although they could, with a bit more visual organization); they mostly denote responses. Example: the violinist argument is a response to the argument that if someone is physically dependent on you for their survival, their dependency overrides your right to convenience. That argument could have been a response to another one, and the violinist argument might have sibling responses and responses to it in turn.

My aim here was to partially solve the problem where someone enters a room with some bold idea they claim will settle the issue, or at least a great deal of it, only to find that everyone already heard it and the responses are years old and everyone feels like their time was wasted. It also could solve the problem where you come across what seems like a really good argument to you, but something seems not quite right, so you look it up in the map and lo, there's a response. Such a map could also easily point out trolls (responses to them are easy to look up), Gish gallops (multiple responses to an argument, each with a response in turn), and other rhetorical dead ends.

Now, I always thought they wouldn't work, but (1) for mundane reasons, one of which LLMs appear to have solved, (2) to some extent, I found it helpful to at least _think_ about the problem in terms of rhetorical design patterns in order to organize my thinking, and who knows, maybe encounter an approach that _could_ help.

There are still problems. One is non sequiturs. Do they count as responses? Should they? Every argument would have countless responses, each with one response saying "this is a non sequitur". Another is deeper: what if a response is logically mistaken? "Yeah, but urchins don't have knees" -> "that is a non sequitur" -> "no it isn't". Of what use is this? A related problem: who builds all the responses to a Gish gallop? It's still work, even if someone sacrifices their time to write the FAQ. A professional troll still has leverage. Thirty worthless counters in minutes that each take an hour to refute. An automated reasoner would get around this, but we don't have one (and no, an LLM doesn't count). But again, the value I get here is forcing myself to think of arguments this way while I look for patterns.

Your five-circle graph of a lockdown argument is an unfit pretense at formal syllogism, I agree (which is much of why I say LLM responses won't work; they can't do logic yet). But if you _could_ make a proper syllogism, it _would_ work. This is not a no-true-Scotsman argument. Even with arguments about real-world things like vaccines and climate, one can break down an argument into its claims and sometimes point to claims that could be true but that Bayesian logic tells us are too unlikely to justify betting the amount of resources we're calling for, or to claims which are true only if the others aren't, so the total probability never gets above some threshold, or even to claims that are vague but could work if made more explicit.
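The "total probability never gets above some threshold" point can be made concrete with toy numbers (mine, not the commenter's): a conclusion that requires all of its supporting claims inherits the product of their probabilities, so even individually plausible claims can leave it below a decision threshold.

```python
# Hypothetical claim probabilities for an argument whose conclusion
# requires ALL of them (assumed independent for simplicity).
claims = {
    "premise A": 0.8,
    "premise B": 0.7,
    "premise C": 0.75,
}

p_conclusion = 1.0
for name, p in claims.items():
    p_conclusion *= p

threshold = 0.5  # e.g. "more likely than not" before betting resources
print(round(p_conclusion, 2), p_conclusion >= threshold)
# Three claims of 70-80% each leave the conjunction at 0.42 -- below threshold.
```

Independence is itself an assumption a map could surface: correlated or mutually exclusive claims ("true only if the others aren't") change the arithmetic, which is exactly the kind of structure the circles would make visible.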

If this makes an argument or the issue containing that argument more complicated, that's probably because it really is that complicated. And why should this bother us? It would neatly explain why some issues persist for decades, with conflicting positions defended by intelligent-sounding people with no obvious stake. Mistake theory tells us this happens because of one or more claims about observations that are impossible to measure with current technology, or some deep unsolved philosophical problem. Conflict theory tells us those don't matter as much as one side just wanting to win. A map would help illuminate which one is dominating, if at all. Or if there really is some urban legend or mistaken claim powering one entire position that couldn't be rectified until much later. It's happened before.

Your lament about internet drive-bys reminds me of my mantra: I don't want to win; I want the right answer, even if it isn't the one I carried into the room. Maybe you're right, and there aren't that many people sharing that, but you seem to have created one of the latest Babylon-5s for such a group. May as well hammer on it for a while. The alternative looks less likely to show us new space.

No worries about a grant here. I know what the obstacles are on my personal vision, and I'm pretty sure money won't solve them.

Some Guy's avatar

From this comment thread I think there are like three dozen of us and the only dignity we can possibly find is to take it on the chin and just persevere elsewhere.

GrimMoar's avatar

"this is old copypasta" along with a link seems like it does a lot to stop people from making old arguments and assuming people have never heard them before.

A need to cite all arguments, however, encourages copypasta.

"Look! a Big Guy said it was a Coup!"

"Karzai said ISIS was an American-run terrorist group!" (there's a whole list of reasons why one might credit ISIS with being less than organic, but if you say "and you have to cite sources" this is the one you get).

Paul Brinkley's avatar

"This is old copypasta" only works if you know the argument before you is actually copypasta. What if you don't know?

A search often won't help; people independently arrive at popular arguments all the time, using terms that won't show up in that search.

I can't tell what you're getting at with the ISIS example.

rez tarkai's avatar

It's getting hard to continue with the pretense that an honest debate means anything in this world. Anyone engaging in public discourse can more reliably "win" the disagreement (in practical terms) by using dishonest messaging, bad-faith arguments, claimed economic benefits, and many other such tools to win the crowd or, more importantly, others with power. Debate is so much lower on the totem pole of practical tools than simply clawing at and seizing power at every opportunity, then acting unilaterally with said power.

For that reason, debate is pretty much a vanity project at this point. The power to meaningfully act is typically controlled by people who don't debate. Anyone who engages in debate clearly has no power. If they did, they wouldn't be debating. The people who "win" the debate (not intellectually-- but in terms of power dynamic) do whatever they wanted to do anyway, regardless of any debate. They wouldn't have learned anything in a debate, if they bothered to participate in one at all.

Paul Brinkley's avatar

There's a variation of an argument map that would make misfeatures such as bad-faith arguments more evident.

My personal idea (covered in another comment) maps arguments in terms of positions defended, and counter responses, including pointing out logical fallacies. One could imagine a map where most of one position's arguments are "covered" logically by responses, but none of the responses are. Assuming the logical implications are all checked and found valid and based on verified fact, this could indicate that that position is either being dogpiled by especially dedicated opponents, or that it truly is too difficult to defend, or bet on. And if developing the map is aligned with the incentive to find the right answer, rather than to simply win, then in the long run, the dedicated-opponent effect will fade out.
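
That "covered" relation can be sketched as a tiny recursive check over a toy map (hypothetical claims, and assuming the map is acyclic); it's essentially the grounded semantics from formal argumentation theory:

```python
# A toy argument map: each argument lists the responses attacking it.
# An argument is "covered" (defeated) if at least one response to it
# stands, i.e. is not itself covered. Assumes an acyclic map.
responses = {
    "lockdowns are net-positive": ["they only delay deaths",
                                   "models overstated spread"],
    "they only delay deaths": ["delay buys time for vaccines"],
    "models overstated spread": [],
    "delay buys time for vaccines": [],
}

def covered(arg: str) -> bool:
    """True if some response to `arg` stands (is itself uncovered)."""
    return any(not covered(r) for r in responses.get(arg, []))

for arg in responses:
    print(f"{arg!r}: {'covered' if covered(arg) else 'stands'}")
```

Here the unanswered response "models overstated spread" stands, so the top claim counts as covered; an honest map would invite someone to answer it rather than pile on elsewhere.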

The Ancient Geek's avatar

Is everywhere equally bad?

Bugmaster's avatar

I think debate still has its place, but in very restricted contexts: scientific articles, engineering proposals, and other such situations where numeric measurements are primary. Everywhere else rhetoric is all that matters, and any form of "debate" essentially reduces to propaganda -- which can be as useful and effective as other forms of propaganda, when applied correctly.

GrimMoar's avatar

I don't think debate is a vanity project. I think if you don't see the debates, that doesn't mean they aren't happening. There's a plan to put boots on the ground in Iran. There's a plan to use nuclear weapons in Iran. There's a plan to blockade Iran until they squeal. Many, many plans, and the pros and cons of such get debated (alongside the risk potential, and uncertainty of successful implementation, and lots of other things).

This is a Very Big Deal, perhaps the most significant military operation of our generation. And it's getting debated.

On a larger front, the wisdom of the Neoconservative "rebuild Iraq/Afghanistan/Iran to Japanese/German Standards" project has been very publicly debated. The American consensus seems to be "They were dumb, and we probably shouldn't listen to neoconservatives anymore."

*I say the above about neoconservatives, but lord help anyone who criticizes Biden's policy with the Ukraine, which was absolutely spearheaded by the powerful neoconservative faction in his administration.**

**I have yet to find someone who voted for Biden who knew they were voting in neoconservative foreign policy.

sohois's avatar

Will you still accept grant applications that promise to make debate considerably worse?

Melvin's avatar

We already have reddit

Paul Brinkley's avatar

What a strange situation I find myself in. I wish to +1 this comment, while realizing I would be exemplifying the target.

Domo Sapiens's avatar

You can still leave a like via the heart, if you use the app or the activity view.

Seth Schoen's avatar

> Doesn’t it train you to point to one specific link and shout “argumentum ad verecundium, that’s a fallacy, you lose!”, whereas real life is almost never that simple?

I agree that our existing argument technology is pretty good at getting people to misspell Latin names of fallacies, so maybe this is a great description of what it accomplishes, but it should be "verecundiam".

https://en.wiktionary.org/wiki/verecundia

Mutton Dressed As Mutton's avatar

I agree with this piece, so obviously I'm working up to a but: but I think we can usefully distinguish between resolving arguments (which doesn't happen, because disagreements are usually rooted in conflicting values) and making consensus decisions in the face of disagreement. The latter is the work of politics, which most people find grubby and offputting precisely because of the necessary compromise, horsetrading, papering over of disagreement, and willingness to bend principles that consensus requires. (The work of political parties, which people also hate, is likewise to find a consensus path within a fractious coalition.) Politics in general and democracy in particular are a form of technology, and can be practiced in better or worse ways. So I think there may be some version of "solving debate" that is genuinely useful, but it has nothing to do with arriving at the one true answer (which doesn't exist).

rebelcredential's avatar

Oh, I made one of these back in the day. Like a decade ago. It's still up: http://www.wrangle.co/

It failed to revolutionise online discourse, but I did like the branding.

The Ancient Geek's avatar

There were maybe half a dozen argument mappers. That's evidence that the idea doesn't work.

rebelcredential's avatar

Yeah but the other ones didn't have a funny little arm wrestling logo.

Jonathan Stray's avatar

I am a researcher who works on polarization and conflict online for a living, and I agree with you 100%. Debates are ultimately not very much about facts, and nobody really wants to do that. I have long called the idea that we can solve disagreements with better epistemology "the rationalist fallacy."

However.

It *is* possible to improve the relationship between people. Mediators, therapists, and parents do this all the time. It is also possible to come to negotiated settlements that all parties can more or less live with; lawyers, politicians, and negotiators do this every day.

And, it *is* possible to change the platforms people use every day such that they improve the relationships between users, not degrade them.

My team just published the results of a 10,000 person field experiment showing that better social media algorithms can make us like the outgroup slightly more.

https://rankingchallenge.substack.com/p/its-possible-to-reduce-polarization

We're now in the middle of showing that an appropriately designed LLM can produce pluralistic answers that people who otherwise vehemently (and sometimes violently) disagree with each other all feel comfortable trusting. Notably, our approach is not about facts and arguments!

https://humancompatible.ai/news/2025/02/04/a-practical-definition-of-political-neutrality-for-ai/

GrimMoar's avatar

hahaha. Now I'm picturing all internet debates conducted in the nude, in the sauna. To improve negotiation!

Ivan's avatar

Any semi-complicated argument about real life has thousands of arguments for and thousands of arguments against. Just listing arguments (even if you make sure to list both for and against) is useless. It does not bring you any closer to an answer.

Understanding what is important and what is not is the actual important skill.

W.P. McNeill's avatar

"Solve debate" sounds like logical positivism that sees the pursuit of scientific truth as comprised of the following:

1. Everyone agrees ahead of time what counts as experimental evidence for scientific theories.

2. We perform the experiments.

3. On the basis of the experimental evidence we conclude which theory is true.

The problem is in step (1) where "what counts as experimental evidence" cannot be disentangled from the theory in question.

Philosophers of science have proposed solutions to this conundrum that you may or may not find convincing, but none of them boils down to, "Here, use this handy chart."

The Ancient Geek's avatar

The other problem is that there is no possible evidence that could settle the matter.

Hedonic Escalator's avatar

These circle map diagrams have little to do with real arguments, but were drilled into us during competitive CX debate. At least on the high school level, many debates were scored in similar ways. Did you create a series of links between the premise and conclusion? Did you break at least one link in all of your opponent's core arguments?

This meant that the winning strategy in most debates involved speaking faster than human comprehension and dropping in logically flimsy but competitively lethal cheese arguments, in hopes that your opponent will forget to address one and automatically lose.

Other leading strategies included Marxism.

Melvin's avatar

Obviously it's not going to solve debate, but in this very essay you point out a couple of places where something like this would be useful.

> Last year, I was in a panel discussion of why people disagree about AI risk. The closest we came to an answer was that some people place a lot of weight on a theoretical argument for intelligence explosion, and other people don’t really trust theoretical arguments and stick to a prior of “things rarely change very rapidly”

It sounds like something along the lines of an argument map would have been useful here, to help you find this crux, and also to figure out whether there were any other major cruxes that got overlooked in the format of a panel discussion.

Also it's interesting that people have been debating AI Risk for so many years, but as of 2025 it was still unclear why they disagreed. Could we have figured this out faster if mapping out arguments were more of a habit?

> but I actually know very little about his Sandy-Hook-related arguments, and I think relying on his general mendacity and looniness is a useful proxy here)

This is another place where it might be useful to have an argument map type thing -- in areas where you're a newcomer to an established debate. Was Sandy Hook a staged false-flag operation? I doubt it, but let's see the strongest arguments that it was, combined with the counterarguments, and we can be confident that we are justified in dismissing it as loony.

Paul Brinkley's avatar

Rightly said. Frequently - especially in controversial debates - great progress can come simply from agreeing on what the points of disagreement are.

Evan Barker's avatar

"Even an insane person like Alex Jones rarely says specific false facts."

Bruh.

Your linked post only claims that there is a dearth of false facts in posts on infowars. I do find it really difficult to disagree with you in general, but this is like Alex Jones levels of twisted. You seem to be trying really hard to make it so that infowars can't be lying, no matter how hard it knowingly twists facts.

He lies and makes stuff up all the time on his live show. That's where the defamation lawsuit comes from. Stuff he "says" on his live show.

Dude can't help himself, he lies all the time. Listen to a random episode of KnowledgeFight, preferably around the episode 750-875 range, or one of the Formulaic Objections episodes. That way you hear the lies without having to actually listen to Alex himself.

Paul Brinkley's avatar

Are you counting a lie if someone repeats a liar, without being aware that the upstream source is lying?

Rob Miles's avatar

I think formal debate is better than unstructured yelling at each other, and I also think that formal debate is extremely far from the best possible thing of its type for finding the truth.

Attempts to 'solve debate' that I think have a chance mostly look like creating a competitive game with rules such that winning the game requires good truth seeking discussion. It's a problem of mechanism design, incentive structure, and game design. A very hard problem, but I think not at all a doomed one.

Tristan's avatar

A useful addition here is what the research says is effective for persuasion: showing respect, expressing curiosity in the other person‘s point of view, asking them to explain their points in more detail. An app doesn’t help with any of this, except maybe to offer prompts and reminders. The point is not to nail someone with irrefutable evidence (which usually doesn’t work, unless a person has no emotional attachment to their point of view). The point is to coax someone into an emotional state of openness and then invite them to explore on their own terms, providing info in a way that feels helpful.

Liface's avatar

I wrote a Beginner's Guide to Arguing Constructively, in part based off of Scott's work: http://liamrosen.com/arguments

I would argue that it is useful for many types of arguments, and almost everyone walks away from it with one new technique to argue more constructively.

Paul Brinkley's avatar

Your link appears to have died.

Ivan Fyodorovich's avatar

"nobody comes into the process intending to have an argument"

I can't believe I'm the first commenter to post this Monty Python sketch:

https://www.youtube.com/watch?v=wxrbOVeRonQ

Sergei's avatar

A few points as to why "solving debate" is hard:

- there are two independent issues to consider:

- effective persuasion

- effective "search for truth"

both are hard, but in very different ways, and solving one does not solve the other

- ideology and intuition prevail over reason every time

- changing one's mind is a phase transition of sorts: it takes a lot of small internal hidden processes before a visible change happens.

DC Reade's avatar

Debate is not a problem that requires a solution. It's a means of addressing specific questions with due accordance given to matters of factual knowledge and rules of informal (i.e., verbal) logic (including, yes, fallacy detection). Preferably with a principled respect for ethics by all concerned (which helps keep the fallacies and errors of fact to a minimum.)

Debate is much more effective on some topics than others. Debates have the most utility when they address real-world problems, maintain a focus on specific topics and their most relevant aspects, rely on facts, and are carried out in good faith. Including aspects of common agreement, in order to delimit and focus on the aspects that remain in dispute. Debate is a means, not an end (another reason why the notion of "solving it" is absurd.) It's an informative process, a way of airing out the most important matters at hand for the purpose of issuing effective judgements and taking actions to carry out decisions.

Considering debate in the abstract- as if it were a calculation technique invalidated by features that aren't in all respects congruent with deductive logic- is the wrong way to look at the process. The problem of covid policy that was alluded to didn't get stuck on intractable obstacles presented by the dilemmas of debate on the questions. The flaws of covid policy were driven by political panic related to expediency. Often, there was no debate. There certainly wasn't any at the Federal level. That's what accounts for the overwhelming consensus in Congress in favor of shutting down most of the independent retail sector, providing applicants with bailouts with no provision for due diligence, and issuing successive cash payouts to every household in the US that filed a tax return. The CARES Act passed 419-6 in the House, 96-0 in the Senate.

The other example used- on the subject of the "risks of AI"- appears to have been overwhelmingly speculative. Vast and general speculations, at that.

You know about some risks of AI that aren't to be considered as abstract speculations? The impact of building massive data centers with massive power and water use requirements. Where's the debate on the impacts of the pollution from the power plants used to power these by-machine-for-machine energy requirements? What about the impacts on water resources, and water quality? These are to be considered both in aggregate and in terms of the environmental sensitivity of each separate local physical geography and watershed. Not considered as a generality, for the purpose of dismissing all those concerns with a shrug and a hand-wave.

I mean, motherly fuck. Do any techies even know what questions to ask? How many of the people still reading this post know what Thermal Pollution is? How much media exposure has been given to the specific details of the "power needs" of these massive data centers that sprung up out of nowhere as a fait accompli, with new ground being broken for more of them every month?

What is this? And the author is denigrating the utility of Debate? What's the alternative- "shut up"?

I'm not even unalterably opposed to data centers, per se. I don't know enough about them. It might even be possible to couple those operations with tertiary water treatment plants, although thermal pollution will still need to be addressed. As will nuts-and-bolts real-world impacts like erosion, noise pollution, groundwater extraction and/or effluent, and other potential questions. I realize that some tradeoffs are inevitable. But there are also potential deal-breakers associated with this massive information retrieval and storage factories.

The amount of public policy discussion they get is threadbare, in comparison to the attention paid to high drama science fiction plots about how at some future point in the middle distance AI will gain the hyperintelligence to take over, having at least developed an ego and will to power the size of All The Billionaire Techbros In The World, Combined.

Yes, Debate. I'd like to see more real ones. And less nonsensical abstraction.

Xpym's avatar

>Do any techies even know what questions to ask?

Yes, these days they're mainly concerned with the question of how not to be ruled by people calling them "techbros".

DC Reade's avatar

I realize that the occupational sector whose owners and chief subordinates have made fortunes by manipulating machines and phantasmal electrical switching processes—with no requirement for understanding life sciences, or the natural world beyond man-made climate-controlled indoor environments—are a Minority group. But it’s my impression that they aren’t an Oppressed Minority, as you’ve implied.

After all, we, the User Majority, are forced to sign their contract agreements. Not the other way around.

Given that context, your report of a shared class concern with “how not to be ruled by people calling them ‘techbros’” sounds a little paranoid to me. That it might be their chief concern appears overly egocentric, on their part. I certainly hope that you aren't speaking for all of them, as your reply makes it appear.

I wasn't aware that “techie” and “techbro” were the equivalents of the n-word, either. I just thought it was linguistic shorthand, rather than a marker implying some inherent inferiority.

I would readily change the identifying label to the term of your preference. My questions would remain the same.

Xpym's avatar

>I certainly hope that you aren't speaking for all of them, as your reply makes it appear.

Mainly for those who set the agenda. The rank and file is still generally aligned with the Right Side Of History Faction (it's the Bay Area after all), but even they aren't as sanguine as you about "techbro" being a neutral term. It's not the equivalent of the n-word of course (the Right Side Of History Faction only ever punches up, as we all know, and doesn't believe in inherent inferiorities), but it is meant to convey the air of condescending distaste.

>I would readily change the identifying label to the term of your preference.

That's commendable of you, but since others don't share this attitude, my answer remains the same.

DC Reade's avatar

The primary intention of my original commentary was to call attention to the parameters of the requirements of natural ecosystems. In order to not have sensitive ecosystems reduced to sources of coolant for data centers, with water sources depleted and then "recirculated" as >80 degree effluent. In a diminished volume, due to evaporation, and with who knows what level of contaminants dissolved from the operation. Sited indiscriminately, as if the living world of the planet was all some videogame.

And here you're making it all about the Beleaguerment of the Noble Futurists of Silicon Valley, facing pushback at the hands of those sentimentalists who advance objections to the fulfillment of their Visionary Goals of Digital Utopia. All about the intramural politics of the Bay Area too, of course. Provinces like Archbald, Pennsylvania don't even rate.

I know where Archbald is. I've been there. Where do the AI Overlords who Set The Agenda intend to get the water to cool the data center they propose for Archbald? From the local limestone aquifer, is the only answer I can come up with. Or the Lackawanna River, which is not a very large source of water at Archbald (mean value 298 cfs; median 234 cfs; historic minimum flow since recordkeeping began in 1960 was 100 cfs, in 2016). According to the USGS, which has thankfully thus far escaped the level of Federal budget cuts that would have dismantled many of those monitoring sites. As had been proposed.

On further research, that issue isn't yet resolved. A proposal to draw it from Lake Scranton, a small impoundment lake that also serves as drinking water source for the city of Scranton, has been ruled out, possibly permanently. The ideal solution would rely on cold but contaminated water from old mines in the region, an alternative that ideally would include some amount of remediation. But all that is still unresolved.

Anyway, my main point does not center on one particular site. My point is that the tech industry has to be knowledgeable enough and care enough to realize that very often there's ample basis for valid concern, on a variety of issues related to the gargantuan effort to ramp up AI infrastructure, seemingly out of nowhere, as a fast-tracked top priority for the use of energy and water resources. And thus far I've found almost no discussion about these concerns in the public conversations about AI. These concerns need to be foregrounded by those in the industry with a sincerely knowledgeable concern about the health of the regions where they're sited. There's been entirely too much attention focused on speculative debates over strictly hypothetical scenarios. Any debate has the most value when it's centered on matters of practical concern, and supported by specific evidence.

Domo Sapiens's avatar

Thanks for sharing the concerns so clearly and respectfully (in my opinion) here. The environmental impact was already largely ignored when crypto was The New Shiny Thing that had a glorious future. I guess AI has eclipsed the energy waste of crypto by some multiple already, and it is still growing and much more likely to stay because it's an actually useful tool (maybe not in relation to the damage and resource use, but it undeniably is a much more useful general tool).

I have been pondering those topics for a long time and even considering a career change to do something about it. Do you have ideas, in the most general sense, where some good levers are situated? What field to study, to work in, what strategy to follow and engage?

Lately, I've been feeling rather downtrodden about environmental protection. I'm not a traditional tree hugger, but still I see a certain turn away from even trying to become better. Various parties on my continent would rather support fossil fuel business for no other apparent reason but "corruption".

Xpym's avatar

>There's been entirely too much attention focused on speculative debates over strictly hypothetical scenarios.

Well, the entire trillion-dollar AI boondoggle is predicated on the strictly hypothetical scenario that all those effluent-producing datacenters will essentially supplant a large chunk of human labor. You may think that's ridiculous, but for people taking it seriously, the environmental concerns are a laughable non-issue - the machine gods will have perfect solutions to these boring trivialities in a couple of years.

DC Reade's avatar

I started out as a techno-optimist about computers ephemeralizing technology and leading to much less of a burden on resources. And look what's happened.

Everyone is being conditioned to leave everything on all the time, me included. I think it's time for a responsible degrowth movement in terms of hours of computer use. I know damn well that a lot of my computer use is fat that can be trimmed.

As a personal decision, I've already done this with my driving, cutting back on annual miles by 50% even though I have a vehicle that gets 30mpg. But I also realize it doesn't mean very much unless it's a decision that's shared by millions of other drivers. Pro-individualism and market consumerism have been so effective as political forces that even the suggestion of voluntary temperance on these matters is treated as if it's advocacy for Maoism. The amount of spoilage and entitlement is unfathomable. And it isn't exactly as if any of our leaders in business or government are leading the way. But the United States of Sybaritism is not a system of values and resource use that's built to last. It's a false homeostasis that's on track to crash. The Epicureanism often exists side by side with precarity for many Americans, and there's little attention paid to addressing that situation, which is increasing to the point of being unhinged.

I feel like Jeremiah sometimes, noticing how much is being neglected. The fragility of our affluence. The increasingly drastic and reckless steps being taken to maintain a wastrel paradigm of disposable goods and ephemeral pleasures.

It isn't even that satisfying to wallow in Stuff Abundance and take it for granted. It's more like sleepwalking. Narcosis.

April's avatar

I think in a particular sense the proof assistant is an example of this working, but I don't expect this helps much for settings other than mathematical research.

melee_warhead's avatar

I disagree. I propose funding a large series of chat bots that include my correct views on everything plus personalization to scour the Internet to refute all incorrect comments.

The starting point may be solidly evidenced positions, but to improve efficiencies we will start to cache the correct position in easily consumed memes and flood the zone.

Eventually all non-conforming conversations will be drowned out.

GrimMoar's avatar

This was already tried. It failed. You might want to pay attention to why it failed, and how to efficiently maximize the destruction of the institutions that stood against it.

melee_warhead's avatar

I understand. The media, academics, and educators will fall.

GrimMoar's avatar

Man, your views must be pretty wonk.

Scott Berry's avatar

Let's assume that debate is a subset of argument, and the shared goal of both is to identify a set of mutually agreed to facts, and that ideally this set of facts frames a solution (always tentative) to a specific problem or question. Granted, this almost never happens. But I can imagine some sort of AI agent that would require arguments to be formatted in these terms.

Conor's avatar

But in terms of the actual mechanics of arguing, has any app or institution like this really caught on?

Courts

That’s the only debate that has stakes, elite participants and rigorous processes for dealing with all the complexities of human reasoning

Debate is the wrong frame because it aspires to at best amateur performance. Adversarial truth-seeking matters in courts, and each system has evolved interesting procedures for dealing with the real and thorny issues of building a consensus reality for a collective.

Conor's avatar

For anyone actually trying to build something attacking the problem: one of the best methods for designing software well is to develop a mental model of an analogous physical system and pump intuition by interrogating that model.

Great intro talk

I agree with Scott but it’s mostly that debate is a bad analogy, courts are much closer and much richer intuition pumps -

https://youtu.be/jJIUoaIvD20?si=LxcZyq0bRTAUDhRA

Seth's avatar

"But arguments rarely hinge on false facts" seems wrong in the time of Trump when many of his arguments are based on outright lies (that get parroted by his followers).

Melvin's avatar

A lying politician? Well I never!

GrimMoar's avatar

Note: I believe you may be misreading a "defeat the silencing strategy" as a strategy of argumentation. It is actually a strategy designed to co-opt the media as opposition, because apparently the MSM can't, literally can't, fail to correct Trump's "lies." (Well, actually Hunter Biden wasn't dishonorably discharged, he simply received a discharge due to drug abuse... you get the drift.)

Max Clark's avatar

Have you considered sortition and civic assemblies?

clay shentrup's avatar

Your bootstrapping and incentive problems both dissolve when you move deliberation to a formal decision process with mandatory participation. The jury room is the one forum where participants are structurally compelled to hear the opposing side with genuine open-mindedness—because sortition replaces your self-selected in-group with a statistically representative cross-section of your actual community, including the people you'd otherwise never engage. Deliberative polling (Fishkin) consistently shows that randomly selected citizens shift their views substantially after structured deliberation in ways that voluntary internet arguers never do.

This is why the right target isn't debate apps—it's the electoral process itself. Election by Jury applies exactly this mechanism to candidate selection: a sortition-selected jury, scored deliberation, real stakes. Already has historical precedent in Georgia's colonial-era practice.

https://www.electionbyjury.org/manifesto

Rob's avatar

"People like taking drive-by potshots on the internet..."

Has anyone else noticed that when you disagree with someone's reddit comment the same day they post it, they usually reply, but if you disagree two or three days later, they usually don't? I suspect there's a time-bound aspect of internet disagreement that likely doesn't lend itself to meaningful debate.

moonshadow's avatar

The implicit potshot algorithm for internet-point-scoring forums is:

“Does it look like either I or my interlocutor are likely to learn anything from this conversation or otherwise achieve anything fun and/or productive?”

“Does it look like any third parties who might upvote comments are still reading this?”

When the answer to both of these is “no”, ghost.

aphyer's avatar

"Arguments rarely hinge on one person being simply wrong and stupid": it probably isn't what Scott intended, but my first thought on reading this was "correct, they usually hinge on both people being simply wrong and stupid."

Seta Sojiro's avatar

I agree that the best way to improve debate is have good faith, high quality interlocutors, and there probably aren't enough of them to make an app a viable business.

But I think there is also a lot of value to shared context, at least for topics that have already been discussed at length. For example, in discussing AI risk, it helps a lot when everyone present has read AI 2027. It saves half an hour of preamble and we can just jump straight to the points of contention.

This isn't quite the same as argument mapping - it's more like literature review - a summary of the current state of the argument. Maybe a platform could compile high quality established arguments so if someone wants to discuss it, they agree to first read the established argument. But it's hard to imagine a new platform becoming the standard resource for finding these sorts of high quality established arguments.

Kalimac's avatar

I think that argumentative techniques like the ad hominem, often derided as logical fallacies, weren't intended as logical arguments. They're intended as triage. If the person is generally untrustworthy, their positions are not worth as much care and consideration as others in the limited time life affords, and if they are correct, some other person is likely to make them.

Seta Sojiro's avatar

Also, anecdotally, in the context of formal debate, by far my favorite format is when there is a long Q&A section in which each person asks the other a series of questions (no rebuttal allowed during this portion).

Brandon Fishback's avatar

This is far too pessimistic. Sure, we’ll probably never reach the ideal, but there’s a lot of space between here and there. We know spaces for arguments can be better because there are places on the internet that have better arguments than others. So look at what works and try to improve on it.

Instead of many people having bad arguments, a better approach would be trying to have a few people rigorously map out the arguments. Instead of focusing on winning, just be able to understand the person’s argument and come up with counterpoints, and easily trace it all, Wikipedia-style.

Obviously the hard part would be getting participants, but you could pay them. The nice thing is that it gets easier the more you do it, as certain arguments wouldn’t need to be rehashed over and over again.

Georg Antoni's avatar

Your critique addresses any "Attempt To Solve Debate," yet the substance focuses almost exclusively on argument maps.

There are other methodologies, for instance, systems that enforce asking and answering (Yes/No) questions to pin down exact points of disagreement: https://yesnodebate.org/ (Disclosure: This is my site.)

What I find most puzzling, however, is that your arguments against these tools seem to undermine the act of debating itself:

If it were true that people form opinions entirely independent of facts and logical fallacies, what is the point of writing long-form articles that highlight incorrect data or flawed reasoning? If the tools are "doomed," shouldn't we tell every rationalist blogger: "Your attempt to debate will not work"?

I concede that for a large portion of the internet, your skepticism is justified. But those people aren't the ones reading 3,000-word essays with nuanced views. There is a spectrum of willingness to engage with opposing ideas. Why shouldn't we explore approaches that nudge those already on the right side of that spectrum to further confront their own biases?

And you mentioned that argument maps might only benefit those with "poor working memory."

Consider an analogy: if someone presents you with time-series data, and they give you 1) a raw table or 2) a visual diagram. Even if you have elite mental visualization skills, isn't the diagram objectively more efficient for processing than the table?

I would argue the same applies to complex arguments: a well-structured map can be superior to three dense paragraphs, regardless of the reader's cognitive "RAM".

Seth Finkelstein's avatar

The point of writing long-form articles that highlight incorrect data or flawed reasoning is appealing to a certain niche audience. It's an extremely small and narrow niche overall, but if it's your specific niche, then it's the audience for you. A small pond isn't even a puddle compared to the ocean, but if you're in that small pond, it matters to you.

Georg Antoni's avatar

True. And the same can be true for a "Debate Solving Approach", no?

Seth Finkelstein's avatar

The problem is that there's a huge difference between "Debate Solving" - taken to mean a widely-applicable method which would help ordinary people - and something which is a very narrow specialist technique for a tiny number of people intensely devoted to intellectual formalization and proposition abstraction.

Georg Antoni's avatar

> Taken to mean widely applicable methods which help ordinary people.

Yes, and not all 'Debate Solving' attempts focus on ordinary people. I see https://yesnodebate.org/ as a technique to:

a) enforce alignment, i.e., ensure that both parties talk about the same facets of their disagreement, and not at cross-purposes; and

b) require participants to answer (uncomfortable) questions, i.e., to concede when they have to.

Thus, it is also interesting for two specialists in their field who want to finally get a clear answer on a certain aspect from their counterpart.

Brett's avatar

"No. Look closer. People like taking drive-by potshots on the Internet - retweeting some link that makes them feel like they’ve successfully embarrassed their ideological enemies."

This is why telling folks "Don't feed the trolls" doesn't work. A lot of folks find the Easy Dunk just so irresistible for showing off their ego and wit.

GrimMoar's avatar

"Don't feed the trolls" worked well in usenet days. You'd be able to see "obvious troll" because they were including dozens of "otherwise not connected" groups, and you knew to just ignore the post as spam.

It does not work when someone is taking the time to actually troll a particular forum (say, creating a new pokemon rumor that the entire forum obsesses over for two days, then discovers is "actually not true.")

People engaged in a self-righteous cascade should be brought up short with something like "Do you think everyone clapped for that?"*

*and everyone clapped... being a classic 4chan/greentext flag for "I am totally lying about this incident."

Connor Saxton's avatar

What are your thoughts on the Root Claim approach?

Connor Saxton's avatar

"Even when they do, those fallacies often provide some kind of useful information"

Yeah that's something I've noticed a lot. I might point out that someone's argument is technically fallacious, and then they usually stop making the argument, but I kind of feel bad because what they were saying is still useful to engage with.

Alex's avatar

I feel like it doesn't work for a much simpler reason than this: nobody is actually saying what they really mean at pretty much any point anyway.

Most likely they have a point they want to make and then they invoke the human brain's incredible ability to provide arguments in favor of the point. The arguments are sorta real, in the sense that, well, they did reach their stance for a reason, and they are really describing some points to that reason. But large parts of the reasons they like their stance are things that they don't even know they think, or don't know how to put into words, or are based on values that they assume everyone else holds, or think everyone *should* hold, or are just irrational... and none of this gets communicated at all.

Added to that, most points and stances are not really "factual" anyway. The things a person says in a debate may point at facts---maybe people cite some statistic about changes in crime rates or something to make their point--but their actual stance isn't the facts they cite, and refuting the facts doesn't change their stance.

I don't think it's really possible to have an actual mind-changing debate between strangers on the internet. To change your mind to someone else's stance you basically need to have something like respect and trust for them, plus a sort of deference, where you concede they are better at forming truths than you. And those are interpersonal qualities that you just don't get in text between strangers.

Jonathan Tweet's avatar

I've had good luck running dialogues where the participants are required to paraphrase each other's positions accurately, and participating in such dialogues. The draw is that your opponent will really hear what you have to say and can't just talk past you.

ruralfp's avatar

“ That means that these apps’ target demographic - people who want to argue on the Internet, but are looking for a better way to do it - doesn’t really exist.”

That’s because it’s mostly a format issue. I remember growing up, my grandparents would take me with them to their “discussion group” where a moderator would introduce a controversial topic and members could then debate. I’m sure there was some kind of formalized structure to it which I don’t remember, but the general “don’t be a mendacious asshole” rule was simply enforced by social pressure of actually being in close proximity to other people and having to look them in the eye while making your point.

People went week in and week out and debated with people they disagreed with strongly but still liked on a human level. This was mostly a group of greatest generation folks with tendencies ranging from “vague communist associations in the 1950s” to “avid Rush Limbaugh fan”, but they still could have peaceable disagreements while still reconvening for breakfast club the next day.

I argue on the internet because in most of normal adult life there isn’t really a regular venue for civil disagreement and the itch still demands to be scratched.

Todd DeMelle's avatar

i think the way to create the greatest likelihood of a constructive argument is to begin by trying to arrive at a shared goal.

Brady Dale's avatar

I tend to think most arguments just come down to different sides having different values and that is not really debatable

Brady Dale's avatar

But even if I am wrong about that this post sounds right to me

Timothy Byrd's avatar

Thanks!

In that post Scott said: "I’m not saying everything fits into this model, or even that most things do." Do you happen to know if he has done a refinement of it since then?

Xpym's avatar
Apr 30 (edited)

Well, I haven't gone through the entire backlog yet; for now my impression is that he didn't find this sort of thing all that useful, and the current post is him explaining why. Of course, he has also written plenty about disagreement in general, e.g. https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/ was pretty influential.

Kelian Dascher-Cousineau's avatar

I am not sure I understand the dismissal of CMV. Group norms, rewards (deltas) and clear guidance on those norms seem to be portable strategies to improve debate.

In my view, entering an interaction as a *debate* explicitly primes actors to feel like concession implies a loss of status. So I share some skepticism about apps for better debate, much like a dating app called easyhookup might not foster long lasting relationships.

The name Change My View does a very good job priming contributors to be willing to... change their view.

Viliam's avatar

I think that CMV group norms work for a certain type of people, but most people are not like that, and if you told them to follow the norms, they would just LARP it unconvincingly.

If both sides are genuinely trying to figure out the truth, there are rules that can help this effort. If one side just wants to win, or at least to avoid admitting defeat, they will find a way.

For example, in the argument-mapping applications, the easy way to avoid admitting defeat is to keep spamming new arguments in a way that expands the scope of the debate. You can't close all branches of a tree that just keeps growing.

Loweren's avatar

Aren't "Snopes" and "Community Notes" examples of technological improvements that are now commonly used to settle disputes?

So we shouldn't be funding "apps that will solve debate", but we could be funding "apps that sophisticated debaters will use in 2030 to settle way more in-depth disputes than we are settling now".

I would be excited for the next technology to add to my debater arsenal. Perhaps some kind of crux-finding LLM that will auto-post a steelman comment against every substack post I read?

Dmitry Erkin's avatar

What about the whole speech and debate culture, with formal rules and winners?

Yug Gnirob's avatar

Meaningless when their judges are a minority.

The Ancient Geek's avatar

Do you mean an ethnic minority person?

Yug Gnirob's avatar

I mean a minority position. It's like trying to establish the rules of warfare by pointing to the ruleset of the World Boxing Association. The only people who care what they think are the ones fighting for pure entertainment.

FeepingCreature's avatar

I think we should lean into dark patterns: a user-driven site that collects articles, takedowns, and counter-takedowns. Arbital but evil. Motto: "Arguments are soldiers; We provide the ammunition." The scaling effect is that any debater who doesn't use the site is hopelessly outclassed. Gish gallop as a service.

"But why would you want that"

I think a lot of time is spent pointlessly relitigating the same arguments. If you can just say "ah yes, this article was the one that formed my current opinion" you can just farm a lot of work out. And when somebody says "I disagree with article Y, please propose one that you think is strongest and I'll actually dig into it", well then we will see if a proper debate may be made of it or if the opponent will bail out. Eventually, my hope is most argument chains will just be replaced by a link to the site.

Peter Gault's avatar

What is your perspective on platforms that are "solving debate" within middle school and high school classrooms? A space for teaching discourse to students could achieve goals that are impossible for adults arguing on the internet.

Gregorian Chant's avatar

"Where are the car keys?"

"They are in the bowl"

"No they are not"

"Oh they seem to be in my pocket"

There are some things we change our minds about. Surely many intellectual arguments could be put into this kind of bracket?

However of course many arguments are socially underpinned. No-one talks about the things Hitler got right because - well that argument just goes nowhere eh?

Yug Gnirob's avatar

What is a pocket, if not a bowl of cloth?

Bugmaster's avatar

> Surely many intellectual arguments could be put into this kind of bracket?

They cannot, because ultimately the only kind of evidence people can occasionally (arguably rarely !) agree on, is the evidence of their own eyes, obtained in front of the other person with whom they disagree. This dramatically limits the scope of any intellectual debate. For example, if you believe electrons exist and I do not, there's no way for you to convince me since electrons are invisible -- and I can dismiss any scientific instrument you bring to bear as mere trickery. Science gets around this problem by establishing norms about several shared assumptions about things like math and experimental design, but there's no reason for a non-scientist to accept those norms.

Doug S.'s avatar

There is a very effective debate procedure for making and taking apart arguments, and it's basically the American courtroom. Cross-examination and a judge that can make people shut up and answer the question actually being asked are very important for making this format work. Unfortunately nobody actually wants to debate in this style without a very large incentive.

There was a TV show on PBS in the late 1960s and early 1970s called "The Advocates" that used this format to explore controversial issues.

https://www.imdb.com/title/tt11014804/plotsummary/?ref_=tt_ov_pl

See also: https://www.davidbrin.com/nonfiction/disputation.html for a description of a kind of debate that would be worth having if you could actually get people to participate. Some rich person could offer a large cash prize, I guess...

Bugmaster's avatar

I was under the impression that, in a jury trial, the goal of the advocates (on both sides) is to convince the jury. Facts can be convincing in some cases, yes, but good trial lawyers usually lean on techniques of persuasion, with facts as a backdrop. This is why the jury selection process is so important: you want to select those jurors who would be especially amenable to your persuasion, and deny the same to your opponent. Performing jury selection effectively (from what I understand) can make or break a case right from the start.

Doug S.'s avatar

What it's good for is exposing the weaknesses in each side of the argument. I certainly agree that whether or not it actually persuades an audience is absolutely going to depend a lot on who that audience is, but that's probably true of any debate structure. If you have Christian theologians debate Muslim theologians about the death and resurrection of Jesus, I don't think there are going to be very many minds changed regardless of how the debate goes.

GrimMoar's avatar

Yes, the last time I was on a jury, I took for granted that the job of both advocates was to lie to me on behalf of their client (big lie, little lie? Do the best for your client.) My advice (to the other jurors) was to disregard the arguments of the lawyers, as best we could. They aren't on our side (neither, I found, was the judge); the only people sworn to truth are on the witness stand (and any forms of physical evidence, which were quite a bit more credible than witnesses).

Bugmaster's avatar

Strictly speaking, lawyers are not allowed to outright lie. If they are caught in a lie (be it by a judge or by the opposing party's lawyer), then they can face sanctions, contempt of court charges, and even disbarment. Lawyers periodically discover this first-hand when they e.g. use ChatGPT to hallucinate legal precedents for them.

What the lawyers can and will absolutely do instead is tell you some of the truth -- just not the whole truth. They will hire experts that will demonstrate in excruciating detail all the true and demonstrable facts that they want the jury to see -- while ignoring the less convenient ones. Ideally, the adversarial judicial process is supposed to keep the two sides balanced (since two can play at this game); in practice, the side with more money often (though not always) has a massive advantage.

GrimMoar's avatar

I believe that lawyers are allowed to "lay out their story" in opening and closing arguments. The amount of untruth allowed could be quite large, because the jury, the finder of facts, has yet to rule. Yes, legal precedents that are made up get lawyers yelled at (or even disbarred) -- that's for more than just "you lied" reasons (wasting the court's precious time, making the whole court look like an idiot, officers of the court aren't supposed to be trying to mislead the court...).

For example, the prosecutor was trying to say that three gunshots were three different decisions to fire. As they'd occurred in series, in less than thirty seconds, and all at one person, it seemed to me that it could very reasonably be "one decision" that was "shoot until he stops moving" (or "shoot until I know I hit him").

Bugmaster's avatar

> For example, the prosecutor was trying to say that three gunshots were three different decisions to fire.

Typically stuff like this would be resolved by questioning expert witnesses, e.g. psychologists, criminologists, etc.; at least, this was the case in a trial where I was part of the jury. If the prosecutor simply stated this with no supporting evidence, the defence lawyer should have objected due to e.g. lack of foundation.

The same would happen if the prosecutor claimed that any shots were fired at all without providing evidence such as impact wounds, security camera footage, etc. He cannot just say "my client was shot because I say so".

GrimMoar's avatar

I was under the impression that opening statements were just that, where the lawyer was allowed to "state his case" and then attempt to prove it through the witnesses. They're allowed to state what they want to prove, and can twist it...some.

At any rate, it was not objected to, and neither was the defense lawyer's argument that punches "are deadly force and lead to deaths all the time."

The prosecuting lawyer did not finish making her case; it was declared a mistrial (due to problematic conduct of a police officer, biasing the jury in such a way that the trial could not continue). However, I do not expect we would have heard from many expert witnesses (other than lawmen involved in the case).

Cyrus the Younger's avatar

This is why I try to have conversations online more than debate, and as soon as it becomes unpleasant I just peace out.

I also have never enjoyed formats where people are pitted against each other to debate, except where I already know who I agree with so I just watch their bits for entertainment (not terribly nourishing intellectually).

Disagreement is so time consuming I prefer to listen to someone explain their arguments in a relatively friendly environment and then do the same for people with different views, instead of listening to people get angry and confused and talk past each other.

GenXSimp's avatar

I've used mind mapping tools, and argument maps. Many years ago I was part of a now defunct start-up that built these tools. If you want to think through your own arguments and understand where you may or may not be wrong, they are helpful. For trying to argue with others less so.

What I notice about arguing is most people really haven't thought things through and are completely unaware of what actually motivates them. Even the most empathetic person may not be able to avoid stereotypes and assumptions when trying to figure out actual motivations. So everything rabbit holes.

Most real arguments:

1. I don't want my status to be lower

2. I want to hurt people I don't like, or who I think are hurting me

3. I want to strengthen my group by drawing a line

4. I want higher status

5. It would be inconvenient if that were true

6. Outside of my direct experience, so I can't believe it.

7. I have a strong desire for the world to be a certain way.

8. I want to feel virtuous.

9. I perceive things to be unfair.

10. tit-for-tat

Facts are there to support motivations, attacking them is rarely useful. The most convincing argument is you will be part of a new in-group that believes the true thing, and you'll have higher status. Second best argument, believing the true thing won't actually hurt your status.

Motivations may be biological, and unable to be easily changed. My wife thinks people are constantly being mean to her. So one day I recorded an interaction, then showed the recording to a group of unrelated people and asked what they thought. This did not change my wife's perception, just hurt her deeply that I didn't take her side.

So what good is argument? To find the very few people motivated by pure curiosity and the desire to understand reality. We argue to understand what you care about, and why, and then use that information to choose our companions for a particular adventure.

John's avatar

> But in terms of the actual mechanics of arguing, has any app or institution like this really caught on?

Science writ large, peer review (despite its detractors), reproducibility and replicability. The mainstream ideal of journalism (despite its detractors), Wikipedia in some cases. And, hat tip to the rationalists, LessWrong-style discussions. At the very least, these entities seem robust against *prolonged and profound* mistakes in reasoning, or becoming irretrievably detached from base reality.

Some things they all have in common: institutionalism, a culture of corrigibility, a focus on ideas over individuals, pursuit of a shared ideal, some ground-level rules regarding how discourse happens, and yes, some level of gatekeeping.

Sniffnoy's avatar

When I think about what I would want an argument map to accomplish, a lot of it is stuff where, if you're having this problem, the real problem is that the people involved don't really know how to argue in the first place, and so if you tried to introduce an argument map, it wouldn't help; they'd just reject it. Like, one of my real peeves during an argument is when someone makes a point A, and then I respond with a rebuttal B, and then instead of responding to B with a rebuttal to that, they just repeat A. Argh!

OTOH, maybe argument maps could help with the problem of arguing against inessential points. If you map out the argument, and make it clear what points are key, then maybe people won't waste their time arguing against points that don't actually affect the conclusion. Although once again this does require realizing that the point of an argument is to establish the truth or falsity of a particular proposition, rather than scoring points on the other person... still, this problem at least seems like something that good-faith, knowledgeable arguers might nonetheless accidentally fall into, so maybe an argument map could help there.

serp's avatar
Apr 29 (edited)

I had Claude use my argument mapper to map your arguments against argument mappers and some counter-arguments to those arguments: https://map.claims/d/?discussionId=-OrN7EQRFLMkiT77VgYe

Zanzibar Buck-buck McFate's avatar

In general I do enjoy arguments. I don't enjoy every argument, and I don't enjoy them for their own sake, the endgame is truth, but when there is an important issue at stake they can be fun - I really liked the 100,000 dollar lab leak debate, and the cash stake added to the sport. Bit of rutting, you know? Locking horns. Other people feel differently so we have to find a balance but I am partly looking for a buzz.

I see the structure of an argument as more like a prediction - given this set of facts and values which I'm presenting, on average reality will resemble my position more often than yours. Not original probably (ha ha)

David Manheim's avatar

"I don’t know, maybe some people with poor working memory who really hate holding an entire argument in their head might benefit from this kind of thing. I think for everyone else it just makes things more complicated."

I'm one of those people. The AI safety debate is multifaceted and specific strategies and predictions are strongly dependent on many, many details. It was incredibly beneficial for me to spend a year just trying to map out the connections - https://arxiv.org/abs/2206.09360 . I think the same is true for your work with AI-2027, trying to clarify the expectations and reasons for the timelines; not as a way to debate, but as a way to clarify what the argument is and what you believe.

But as always, if the point of having the debate is to make the other person admit that you're right, then you wrote your bottom line before starting, and your argument is not causally connected with the truth. That's certainly bad, but it's bad for reasons unrelated to how anyone is trying to improve the debate, and I think it's the reason for at least a majority of the failures you describe in the post.

Matthias Görgens's avatar

I'm less pessimistic: we have actually made progress with arguments. That progress is called prediction markets, or more prosaically: betting.

Arguably, even when people don't use prediction markets, having them around as a mainstream concept is useful. For example, they highlight the importance of operationalising our statements about the world.

pozorvlak's avatar

Gotta say, I was expecting a "You are proposing a _____ approach to debate reform. It will not work. Here's why:" meme a la https://craphound.com/spamsolutions.txt or https://qntm.org/calendar.

And rereading the "spam solutions" one, it's interesting to note that - despite the continued existence of all the problems Cory lists, most especially "asshats" - spam is much less of a problem than it used to be. Paul Graham proposed the use of Bayesian filtering in 2002 (https://www.paulgraham.com/spam.html) and Gmail deployed it at scale on April 1st 2004: there have been a lot of other anti-spam technologies invented and deployed in the years since, but I think that was the step-change (it's certainly why I stopped hosting my own email). I note that the Wayback Machine's earliest record of Cory's post is from March 2nd 2004: maybe he was only a little bit too early? Though I guess he'd been posting variants of it on forums and Usenet for a while before then.

GrimMoar's avatar

Outright murder was also employed in stopping spam.

Carlos's avatar

I think argument can work between people who have the same goals, like debating the best way to reach a shared goal. If they do not have the same goals, if their interests are opposed, they should negotiate, not argue.

I'll give you an example: say I am selling my house and my real estate agent tells me the price is too high, not market-clearing. Given that our interests align (sell as high as possible but still fairly quickly), I consider this an argument. But suppose a potential customer says "you will never sell it at this price". As our interests do not align, I dismiss it as a crappy negotiation trick.

Yeah, I mean this between political opponents. Instead of arguing, negotiate.

The interesting thing in the culture wars was that no one negotiated compromises; everybody was just hating each other. But somehow compromises emerged anyway:

A) The war about videogames and comics and whatever sexually objectifying women, the compromise no one negotiated but still emerged was that OK then, main Western publishers do not do that anymore, but everybody who wants that kind of content can buy it from Japan. They care little about our culture wars. That is how anime got right-coded and 4chan happened and all that.

B) Remember when Facebook suddenly added 100 genders and I wondered what the fuck is the difference between ze and xe? Am I really supposed to learn all that? The compromise no one negotiated but still emerged was that queer people ask cis people to remember only he, she, and they, and nothing else, but then cis people actually use the chosen pronouns.

See, if we can do that without negotiation, how much better would it be with negotiation?

I volunteer to be a bridge-builder, who understands both sides. Any other volunteers?

Carlos's avatar

But Scott, I do not have ideological enemies to embarrass really, and I still argue. I sympathize with the left, as I think we should be pushing things in a broadly egalitarian and autonomy-respecting direction, and I sympathize with the right because I recognize how often that goes really badly, and how often it is done in bad faith, like pushing equality in the things one is underprivileged in while happily accepting privilege in something else. I think this is not even far from your general worldview. And yet I argue. And you do too.

The reason is the following. There is the old Internet joke that if you ask how to do X in Linux, you get no answer, but if you say Linux sucks because it cannot do X, you get five different ways of doing it just to prove you wrong.

Argument is a way of learning. It is a bit like learning biology from dissecting live frogs. The frogs don't like it, and maybe you are an asshole if you do it, but it works.

David Roman's avatar

A debate is a conversation, and conversations among people who are not friends or family are always performative: "Man does not communicate with another man except when one writes in his solitude and the other reads it in his. Conversations are either amusement, deception, or fencing."

Divine Ghost's avatar

I tend to think of debate as intellectual wrestling anyway. You can structure the arguments as neatly as you want, but famously you can't reason someone out of a position they didn't reason themselves into, and our dirty little secret is that the vast majority of us didn't reason ourselves into almost any of our positions. Within a rounding error, it's post-hoc justifications for emotional impulses formed well before the arguments in their favor were conceived.

Mike's avatar

The real problem is human nature. “A man convinced against his will is of the same opinion still.”

Sui Juris's avatar

I think this (true) aphorism also gets us close to why the kind of arguments Scott is talking about truly are futile.

Someone up thread pointed out that debate does work in eg business settings. And that’s true, but I think it’s true because the purpose of the business debate is not to change opinion but to make a decision. If I convince the team or the boss or the customer, it doesn’t matter if my colleague with the opposite argument isn’t convinced, and both he and I know that that’s absolutely fine, because ‘winning’ looks like decisions that go your way, not like everybody agreeing with you. And in a business setting I can conclude ‘my colleague had a better case and better data to back it up’ without having to admit ‘I was wrong’. I can hold the same opinion still, and maybe the results of the decision will vindicate me; or maybe the results will vindicate my colleague. In both cases we might update our priors over time, which is how opinions change in real life, much more frequently than they change by being argued out of them.

A similar logic used to apply in political life. Arguments led to electoral results (or votes in Parliament, or court decisions, or executive action) and that decided what happened. My party could lose an argument and an election and still believe their principles were right. Nowadays of course we seem on the one hand to have to ‘accept the will of the people’ and on the other, relitigate every political decision in daily internet argument. And that has blurred the distinction between ‘political arguments which are about concrete political actions’ and ‘political arguments which are about what political opinions you have’.

GrimMoar's avatar

People really do employ strategies of razzing people for getting things wildly wrong. This is a survival-of-the-fittest strategy designed to get narcissists and other people who 'can't take being wrong and publicly seen to be stupid' to quit.

I am often wrong, and sometimes my data is even out of date. Happy to admit it, happier to learn from others.

Carlo's avatar

I have an incredible idea to address these issues, but this note is too small to contain it.

Michael Watts's avatar

No mention of Leibniz? This idea, and its obvious failure, isn't new at all.

https://publicdomainreview.org/essay/let-us-calculate-leibniz-llull-and-the-computational-imagination/

>> The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further ado, to see who is right.

Though note that this is the same premise that defined the Rationalist movement.

Michael Watts's avatar

There's a remark I found about arguing on the internet that has stuck with me: the author was complaining that he found arguing on the internet less fulfilling than it had been in the past, because no matter what argument he chose to advance, some idiot would have advanced the same argument elsewhere on the internet before him, and everyone would already be familiar with it.

Note that "some idiot" isn't a figure of speech here: the author's complaint was that a very stupid person would already have made the argument, somewhere. The idea was that having your argument anticipated by an idiot cheapens your argument.

As best I could tell, there were felt to be two problems with this state of affairs:

(1) the author has nothing new to contribute to the argument;

(2) the earlier guy, being an idiot, has served to immunize a portion of the opposition against the argument.

Personally, I don't find this to be as much of a problem as the essay author did. It means you're responsible for evaluating the quality of an argument independently from your respect for the person making it, which matches my personal view of the world. But apparently there's a group out there for whom this is a real problem.

GrimMoar's avatar

Yes. Narcissists. They feel "I should be treated as a respected elder," and when it doesn't happen, it's because "someone else, with poorer words, made the argument beforehand!" (because obviously it wasn't that my argument was wrong).

yrrosimyarin's avatar

It depends what your goal to "solve debate" is. I do think that when confronted with a like-minded opponent who also wants to "get closer to the truth" rather than "win," a strategy of mapping out the argument space and working toward a double crux can be very useful for the people having the debate *as well as* onlookers who want to learn about the topic.

Seeing the full argument space and determining which disagreements are factual vs values vs predictive can result in a genuinely better understanding of your opponent. It can also allow onlookers to more intelligently map out where they stand on the issue and why.

I could see an informational site that presented the resulting output of these kinds of debates as "the state of the debate" to be very valuable for certain types of public issues. Certainly better than "Candidate A says this and Candidate B says this" or "let's fact-check cherry picked pieces of Candidate B's statements."

But I don't really see an app for having those conversations being publicly popular... for one thing, doing it properly takes a *long* time and a lot of trust.

Tom DeMeo's avatar

I know this is a generic argument about debate, but the lockdown thing really scares me.

Any conclusions about a lockdown are a risk evaluation about infection spread. A relatively small difference in the rate of spread or the lethality of an infectious disease completely flips any script or lesson we may have learned from the last time, and decision makers are usually working without clear evidence.

God help us if a virus comes along and a lockdown is actually necessary.

GrimMoar's avatar

Hello, risk avoidant person! Perhaps I might interest you in a pile of MREs? It would appear that most people nowadays can work from home, and you can (mostly) afford to not appear in public for quite some time.

In other news, avoiding large public toilets is always a good plan.

Also, what makes you think we didn't plan for most contingencies last time round? 20 million dead Americans for a starting point, and that got very different plans than what we actually got, in terms of Dead Americans.

Bugmaster's avatar

I am a pretty pessimistic guy in general, but I used to be quite optimistic about the structures of the American government. I finally lost my faith in government institutions during the initial phases of COVID, when I saw an official government doctor appear on an official government program with official government instructions on making your own face masks out of bandanas and paper towels. This event represented such a complete failure at virtually every level that I don't see how we could ever recover. The fact that we got any kind of a working vaccine in such a short time is practically a miracle, especially since it most likely won't ever be repeated. So yeah, if a deadlier version of COVID does come along, we're all screwed.

GrimMoar's avatar

We had all the plans for a deadlier version of covid. Including many, many orphanages for all the kids who had killed their parents by bringing covid19 home with them. (Yes, this was a pretty grim scenario. That's why they hired a comedian. Such cheerful work).

Bugmaster's avatar

Well if that's true, then our plans were comically bad. If your plan includes face masks, then the correct statement to make on national TV is not "here's how to roll your own mask, sort of, uwu", but rather, "your free package of N99 masks is in the mail. Here's a video tutorial on how to wear them properly. Make sure you do so". Similarly, if you plan for respiratory illnesses, then maybe make sure hospitals have access to respirators. And if you failed to plan for respiratory illnesses, then why are you even here?

GrimMoar's avatar

I think you may be mistaking when plans were made for when they expired (say, GWB apparently took this seriously, and it was the ONE thing he took seriously, so... yeah. There was a national stockpile).

https://www.independent.co.uk/video/coronavirus-pandemic-george-w-bush-supplies-us-respirators-masks-a9450251.html

https://www.politico.com/news/2020/04/08/national-stockpile-coronavirus-crisis-175619

Blame both Obama and Trump for not keeping it up to date (it wasn't supposed to handle a 50 state draw, mind you, but it was supposed to be there, and ready, for -- say, another rogue federal employee throwing anthrax around).

AFTER we got wind of Covid19 (this is around when we had the idea that it was going to cause 20 million dead Americans), plans got made. Good plans, bad plans, plans that were "good for 20 million dead Americans" and very, very, tragically VERY bad for "2 million dead Americans."

I don't know about where you live, but we engineered a cross-country heist (okay, okay, they were donated masks, but getting them to Us, and not given away to someone else on the way, was the issue) of some N95 masks from Mexico, so that our hospitals could have some functional masks (as there were shortages -- among other problems) -- this is the sort of federalist solution that Trump was asking for (does this sound like "if you don't hire logistics guys, vote for someone smarter next time?" Yep. Guilty as charged -- if you aren't living in a backup to DC, you may want to consider moving as well, we have quite the medical talent here).

The OWS folks writing plans for 20million dead Americans were pulled from the whole project when BLM started winding up.

https://publichealth.jhu.edu/johns-hopkins-education-and-research-center-for-occupational-safety-and-health/news-updates/can-a-mask-protect-me-putting-homemade-masks-in-the-hierarchy-of-controls

(This is just a sample, but that puts it at somewhere between 2 months before Floyd and "at the time of Floyd")

Donnie Proles's avatar

A wise man once said you can't rationalize with a demoralized person. I see that every day.

tg56's avatar
Apr 29Edited

Successful argument is happening all the time in a professional context.

Also, isn't wikipedia a bit of a counter-example? Going after internet arguments is definitely hard mode, but wikipedia seems to have managed some framework that's reasonably successful.

tg56's avatar
Apr 29Edited

I've worked in Tech for many years, mostly startups, and arguments are frequent (and if you think they don't get heated, acrimonious, and push on people's identities, you're wrong), but a decision has to be made; there is usually at least some degree of feedback loop or grounding, and professionalism provides some structure and constraint (B-school is full of frameworks, scaffolds, processes, etc. for decision making and getting buy-in). People do move their positions in these debates. I'm sure similar dynamics apply in many other professions (certainly legal, medical, engineering).

The analogy that pops to mind is internet arguments are communism, political arguments are the bloated low-competition conglomerates, and the business world in general is capitalism.

GrimMoar's avatar

Nah, wikipedia ain't successful. Wikipedia is "founders stormed off in a huff" level broken. At some point, wikipedia was harnessing Autism For Good. Now? It is both no longer fun "decisive Tang strategic victory", and on a lot of subjects, hopelessly biased.

Wikipedia removed some of the background for why a priori we'd expect the covid19 vaccine not to work (similarities between coronaviruses and the dengue vaccine that was pulled, etc.). That's sanitizing the record, in order to "prevent people from coming to bad conclusions." Aka censorship.

Sean Waters's avatar

Realizing my work around creating affinity groups for interpersonal learning and wisdom cultures is doomed to the same squishy logic … even as our ability to successfully be autodidactic is very often ennobled through perspectival learning and practice with others … but selling that… creating a new kind of relational LMS… hmm

The Disagreement's avatar

I agree with many of your reservations. I think most of these efforts have failed because they don't invest heavily enough in upfront content knowledge. As you note, you can't have a healthy debate unless both parties are well-informed. Additionally, there needs to be a body of shared evidence to draw upon (which is a tall feat but surprisingly possible on many issues). We're launching a unique model at disagreement.org where we invest heavily upfront in creating shared knowledge on controversial topics from diverse viewpoints, and then enable a structured, moderated debate where the goal is not to win but to collaboratively work through a problem. We're also partnering with colleges/universities to create a credentialing system so there's a real incentive to participate. I'll keep you posted!

Bugmaster's avatar

I agree that debate cannot be solved by stating facts, because we live in an age where the average person can no longer find reliable sources for all his facts. For example, if you and I are debating climate change or vaccines or immigration, then I could point to government sources for all my facts -- and you will automatically discard them as being hopelessly biased. Meanwhile you are going to bring up some independent research, and I will dismiss it as the work of crackpots. And in this era of overabundance of information and lately AI-generated slop, validating these sources becomes the work of a lifetime, not an afternoon.

Flat Earthers take this to the extreme by distrusting any observations they have not personally made with their own eyes, and I'm afraid they're merely at the leading edge of our impending future...

Maarten Boudry's avatar

Strong agree with this piece, and I'd push it further: fallacies don't actually exist, and even our old friend the "post hoc fallacy" (which you give some credit) is often perfectly reasonable, depending on background probabilistic assumptions and context. ;-) I wrote a recent piece arguing for this thesis (based on an academic paper), after my own disillusionment with teaching logic and fallacy theory: https://maartenboudry.substack.com/p/why-fallacies-dont-exist?utm_source=activity_item

TL;DR: every "named fallacy" is really just a heuristic that's sometimes weak and sometimes load-bearing (your Alex Jones ad hominem is a perfect example). As you say, logic and argumentation schemes are overrated as a way of resolving disputes. That's almost never where the real action is. The dream of argument-mapping is partly downstream of the dream that bad arguments have a clean structural signature you can point at and yell "gotcha." They don't. And the fallacy labels actively encourage knee-jerk intellectual laziness.

One quibble though: I'd separate the critique of informal logic and argumentation schemes from the more general cynicism about whether people are interested in good-faith arguments at all. The first is a claim about the structure and complexity of reasoning; the second is a claim about human motivation. Both might be true, but they're independent. And I'm less cynical about the second point: people are willing to change their minds and regularly do so, even online, just very rarely in the heat of the moment.

Argentus's avatar

For whatever it's worth I've had *some* success in arguing with my sister by us sharing links to AI prompts we started that we let the other one take over and argue with the bot, share the bot response, and keep iterating. This hasn't actually changed either of our minds, but it has helped identify cases where we actually didn't disagree but there was just some dumb misunderstanding of language or something about how we said X was triggering but the bot could phrase it in a way that was palatable. Just the process of refining the prompts has kind of forced the interaction into something more collaborative than competitive. It's also very useful when you have something like domain expertise in an evidence base you want to use but you don't want to spend a bunch of time tutoring the other person in the evidence base - I can create some kind of "hey, Claude, explain X for dumb" prompt and then just post that instead of having to write up an entire elaborate explanation from scratch. This has also been useful and saved a lot of frustration.

This obviously would only work with someone you already sort of trust and can stand, but I haven't really seen anyone else using AI this way. I doubt it's scalable, but it's a tool people might try on an idiosyncratic level anyway.

gubbz's avatar

unfortunately, scott

your attempt to solve the debate about whether or not it is possible to solve debate will not work

M. Stephens Hall's avatar

Well, you can’t wireframe human nature. The engineer’s delusion is treating belief as a system to be debugged rather than a bunker to be defended.

I feel like most people argue to survive the terror of being wrong. You can't really solve a haunting with better geometry.

J M Hatch's avatar

If it is looked at as getting apple eaters and orange eaters to agree on the pricing differential (as if pricing is determinant on the existence of the "other"), then it is not only useless but meaningless as society has already tolerated the pricing mechanism that exist in reality, grumbles and all.

If the idea is to expand understanding and tolerance so society can reduce friction it could have value, but the last thing the existing elite want is a reduction in friction which gives them their value so they will do their best to wreck it.

Mark Neyer's avatar

I think for something like this to work, it can’t be built to solve contentious problems. You would need to build something that had the purpose of enabling groups of people to think, and reach conclusions, collaboratively. And the first users would have to be something like businesses, that just want to collaborate on software projects.

If something actually worked in that scenario, it would have to be far more complex than the argument map. More circles, probabilities, etc. And I think you would need to get people used to using it and seeing that it just worked in noncontentious places.

Or at least that’s what I tell myself when I work on this thing:

https://github.com/neyer/intent

Chris Lawnsby's avatar

So good. Just the best.

"In the few cases where there are specific false facts, most people, when forced, are happy to jettison the false fact and continue making the argument on other grounds."

Tattoo that one on my visual field so I don't forget it.

FWIW I have basically stopped arguing with my friends about AI risk even in cases where they are so obviously wrong that it feels unsporting to show them their mistake. The specific false claim isn't the point: they see the situation differently than I do, and rightfully so! Their argument does *not* hinge on the specific detail that they are ludicrously wrong about.

It is hard work for me to internalize that lesson and behave accordingly, but I believe it is essential and it's something I actively work on.

GrimMoar's avatar

You cannot rationally argue someone out of a position they have not arrived at rationally.

Alban's avatar

???? that's exactly how it works? In the cases where I have successfully convinced people that views they hold were irrational, they have by definition arrived at those views in a non-rational way. What does your statement even mean?

GrimMoar's avatar

It's me butchering a Swift quote, apparently:

“You cannot reason a person out of a position he did not reason himself into in the first place.”

― Jonathan Swift

People hold unreasoned views all the time, so it tends to be hard to argue them out of those. You can classify this as "some rational people hold irrational views and will retract those if they are brought to the fountain" -- however, there are a lot of people that are less than rational, and they are not going to make decisions based on a rational framework (in general, these are herdbeasts.)

An example of an entire groupthink around here, was the idea that Trump would fold before Iran does, in terms of "blockading the blockade". This was the result of people failing to be rational (and running the numbers), and falling for a FUD campaign, due to people's irrational belief that Trump's entire Administration is very, very dumb.

Patrick Laske's avatar

"Arguments rarely hinge on one person being simply wrong and stupid". I disagree.

1) winning an argument for false or stupid reasons is a form of power. Instead of saying "I want x because I'm a selfish bastard or jealous" you can just make stuff up. People rarely say "I should get x because I'm a (terrible person)", and will instead "lie" or make stuff up. Motte and Bailey can be a form of this.

2) when benefits are concentrated and costs dispersed we create incentives to lie with plausible sounding ideas, something the uninterested won't look too close at.

As such as a heuristic most debates probably are not as close as we think they should be. Of course that's a mean thing to say, neutrality and open mindedness are virtues.

It's really just a minority of interesting debates at the margin that actually are an interesting crisis that requires both normative and positive coordination. Either the stakes are too low, "should toilet paper be inside or outside" or there is near or total consensus, "peanut butter and chocolate tastes good."

Alex Weissenfels's avatar

The argument-mapping tools I’ve seen all make the same mistake: They optimize an argument that has already skipped some basic steps in the problem-solving process. Nobody collects the stakeholders’ priorities and compiles them into a problem definition, and nobody brainstorms possible approaches for addressing the stakeholders’ needs.

Instead, the argument starts with one person asserting a solution that they believe meets their own side’s needs. Then the other side asserts that that solution will obviously destroy society, and therefore their solution is the only one that will work. People fight over which incomplete solution is the lesser of two evils. We can do better than that.

Arguing about facts is an insidious trap. Facts do not tell us what we should do. They are tools that help us accomplish our goals. Facts describe the assumptions that we make about the future consequences of our decisions, based on what we have observed in the past. However, it’s always possible, however unlikely, that a factor we’re not aware of will cause a different outcome than the one we expect. Remember this: Facts are predictions, and predictions are risks.

To address arguments about facts, we first need to know what outcomes people want, and what risks they are and aren’t willing to take. What’s at stake for them if they rely on a fact that turns out to be wrong, or a 99% certainty that comes up on the 1% failure mode? Once we understand the stakeholder values, we can either find a test that will satisfy more people’s standards for certainty, or we can take steps to limit the worst-case scenario in the event that our predictions do turn out to be mistaken.

The conflict resolution process that I’ve been developing breaks the conflict down into its basic building blocks, starting with what people want. Why does the argument matter? What will anyone do differently based on the outcome? This understanding lets me dissolve people’s defensiveness and the zero-sum framing of the argument. It also points the way towards approaches we can collaborate on to change the situation and achieve mutually agreeable results. These approaches are constructive, not merely compromises between two extremes.

This system also quickly weeds out disingenuous participants. The point of the system isn’t to let you come to an agreement with literally everyone. Not everyone is reasonable. The point is that most people are reasonable, but the unreasonable people dominate because they can keep the reasonable people divided over issues that they should be capable of reconciling on.

People already resolve conflicts constructively in real life, in business and in communities. There just hasn’t been a systematic, scalable process for it until now.

Reliably and efficiently resolving political, ideological, and epistemological conflicts is not only possible, but well worth doing. I elaborated on this topic in the Effective Altruism forum.

https://forum.effectivealtruism.org/posts/pxangWuPHCCn76Qv6/political-conflict-resolution-we-can-and-we-should

Conflict and ignorance cause far too much harm. They also come with a huge opportunity cost. Think of all the societal problems that are stalled because people are divided on the solutions. Imagine what humans could accomplish if they could agree on policies to support, even if they’re not all willing to rely on the same assumptions. Conflict resolution is too important to give up on. We just have to start by understanding why the conflict matters.

looking in's avatar

I think the whole idea of trying to solve debate is ill-informed. Going back to the idea of how people don't usually hinge their arguments of false facts, I think it ultimately boils down simply to different people having different loss functions. This is the human condition, and attempting to 'solve' debate seems to me like enforcing a uniform loss function on everyone. I think this takes away from the beauty of being human, no matter the method taken to achieve it.

mako's avatar

If I could convince you that we had a credible shot at building the next discord/twitter/substack/bsky, and that our focus on extensibility/decentralisation would make it strangely good at evolving and distributing novel social technologies, and that many of the social technologies it ends up discovering would, if not solve debate, make debate much much more efficient, would you fund *that*?

Yug Gnirob's avatar

>make debate much much more efficient

What's an example?

mako's avatar
Apr 30Edited

Things that might happen: making it clear which arguments are taken seriously by which sides; how large those sides actually are; exposing reputation indicators; making it very clear when public figures are avoiding engaging with difficult questions; when refutation is possible, flagging a root comment with a marker that a refutation has happened; various ways of lubricating and informing dialog with llms.

I'm not going to be more specific than that because the point is that we have to run the experiment to find out what's actually going to stick. We have to open the cage to find out where the bear will run. I could give concrete examples of features/systems *I* will build first, or which I'm pretty sure I want for my neighborhood, but the question of which features are going to catch on with the public is very different.

It's possible you're asking for a historical example of a social process being made more evolvable and consequently improving. I'm not sure how useful such examples would be, I don't think we could reenact history if we wanted to, but I'd be interested in hearing such stories if you have any.

We know that humans will keep going online in search of information, and also in search of social harmony, and sometimes they'll seek both of these things at once, and they'll seek platform features that make that easier, and if you make it easier to build and trial features, then there will come to be more of them.

Yug Gnirob's avatar

I see problems with most of those. Which arguments are taken seriously, and by how many people, is a temporary state that would need constant updating; it's always subject to "your figures are outdated". Trying to prove public figures avoid questions is trying to prove a negative; it's subject to "they did answer it, you just ignored that one". Refutations are debatable in themselves, subject to "that's not a refutation at all". (See my previous two examples.) And the whole thing is subject to the ever-popular "tl:dr".

GrimMoar's avatar

How is it different from Mastodon?

mako's avatar
Apr 30Edited

In mastodon you can only have modifications (which couldn't be called extensions) that've been installed by your instance host, which couples admin and moderation to curation decisions, and further limits competition between curators due to scale effects.

Modifications will just be (ruby) source patches, it wasn't designed to be extensible. An activitypub project that's trying to care more about extensibility is Bonfire, but afaict if you want to take extensibility all the way you'll have to get into serverless hosting standardisation, or else there are limits on what extensions can do, or can be allowed to do, so we're looking at that. The host coupling issue could probably also be said of bonfire.

Error's avatar

> "You might think: “But don’t people like arguing on the Internet?” No. Look closer. People like taking drive-by potshots on the Internet - retweeting some link that makes them feel like they’ve successfully embarrassed their ideological enemies."

The fundamental problem with "solving" argument is that most arguers are arguing in bad faith, most justifications are fake, and most people are aware of that -- at least on the dragon-in-the-garage level. Techniques to improve debate won't work because nobody *wants* them to. People don't care about opposing arguments except insofar as they present an attack surface. People don't care about their own arguments except insofar as the payload lands. Acknowledging standards of accuracy and local validity in debate would make both of those harder, and so accuracy and validity are strictly a nerd concern.

So you mostly can't deshittify individual arguments via techniques like double cruxing, or betting as a tax on bullshit, or mutually honoring rules of validity or whatever; they are likely to be rejected by disputants to exactly the extent that they would actually work. You can *indirectly* benefit from them to the extent that norms of their use *filter disputants*, though. There exist people who would prefer to lose an argument than win it by invalid reasoning, in the same sense that there exist sportsmen who would prefer to lose a match than win by cheating. Willing use of argument-deshittifying-techniques may help them distinguish each other. Argument quality may not statistically improve much, but the experience of this subset of participants very well may.

(Even then, the difference may be hard to observe. Arguments where both parties care about accuracy and can obtain it tend not to *stay* arguments, because independent accurate maps of the same territory will converge. If we disagree about whether the store has milk in stock, we can go check and then stop arguing, and the arguments we didn't have don't get counted as deshittified)

{tangent: how the !@#$% do I do a proper blockquote in a substack comment?}

songxian's avatar

I think the problem often lies in how we frame the question before the debate even starts. If both sides are working from different definitions or implicit premises, it’s almost impossible to reach a productive resolution. Maybe the first step isn’t to argue, but to map out where we actually disagree.

gorst's avatar

I think we already have tools that help improve debate, for those who are interested in improving it.

* Ground News helps you get in contact with perspectives from other bubbles

* AI fact checking helps with ad hoc evaluation of arguments

* knowledge bases help with repeated arguments/facts

Also I think that if you want to improve discussions, you should scope what kind of discussions you are aiming at, e.g. random Twitter posts? family WhatsApp chats? presidential debates? podium discussions? Each of these starts on a different level, uses different tools already, and would benefit from different tools.

Itai Bar-Natan's avatar

"This hasn’t worked in two thousand years of arguing"

What about the scientific method? I don't think I am at all stretching in putting it in the same reference class you are discussing of social technologies for improving debates or disagreements. Recall the standard formulation of the scientific method: when you have a theory, formulate a testable hypothesis that follows from your theory, then perform an experiment to test your hypothesis. This is not necessarily something you're applying in a disagreement or debate, but it also clearly gives a prescription for what to do in a disagreement or debate. And it's a massive success: It has contributed to a consensus on the origin of the universe and of human beings, the principles for the transmutation of elements, and many more things.

I do worry the conventional story about the role of the scientific method in science is simplified, and there are other factors in how the great discoveries of science were made. But it really does look to me like the scientific method as conventionally described genuinely made a significant contribution to these advances.

Andreas Jessen's avatar

Ok, so what do we do then?

I mean people not being able to properly debate without trying to kill each other seems to be getting worse. At least that is my perception. And if things are getting worse, something must have changed. And if something changed it can change again, right? ... right?

Am I too optimistic in thinking that this is a problem where improvements can be made?

GrimMoar's avatar

No, you are not. The solution is simple: Drop Social Media. It's bad for your brain -- invest in "better for your brain" entertainment, ideally your own creativity (you'll get more out of music once you try making it yourself, I promise! Listen to "Big Bottom" again, and see the musical wit).

The "California Teacher of the Year" had tweets that look just like every other rabid blue-teamer's (I'm sure there are rabid people on the right, too, but apparently fewer uses of guns towards High Profile Political Figures).

Extend trust, and do so even when someone else is prickly, obstinate, or even rude. They may think you are a Nazi, so prove them wrong!

Andreas Jessen's avatar

I guess, I was thinking more about interventions on a societal level. Banning social media sounds like it might backfire. If the TikTok ban had gone through, that would have been an interesting data point. I was really curious how people were going to react to that.

GrimMoar's avatar

EU is pretty much banning social media for children, due to their inability to spread their propaganda efficiently.

Alban's avatar

No, that's not why they are doing it. They're doing it because it has been shown to have negative impacts on the health of children and young teenagers. It's like banning smoking, alcohol or gambling under 18.

GrimMoar's avatar

Up next: banning "Overly Processed Foods" for children. Oh, wait, obesity is a plus, not a minus, for the Powers That Be.

Alban's avatar

What you infer is wrong; they are actually doing that as well, since obesity is not actually a plus for the powers that be (no need for capitalization; there is no need to infer conspiracies).

As evidence you can see the MAHA movement, or for example the Dutch government taxing drinks with a high percentage of sugar.

Andreas Jessen's avatar

Let's see about that. If they ban it as effectively as Australia, they are not actually banning it.

Also, I think you are a little too cynical about the reason why they want to have an age restriction.

Limehouse Records's avatar

Hasn't this been solved to some extent, though? I can think of two cases where this works pretty well. The appellate court system and well-functioning academic disciplines. Just riffing, but I think the necessary conditions are consensus on argumentation norms, real reputational consequences for breaking those norms (skin in the game), and a set of judges/referees/senior practitioners who aspire to impartiality, even if it's imperfect.

You're not going to get this in an online environment, so I agree that creating an app from scratch is doomed. Because if you start losing the argument on the app, there's no real reputational consequence from violating norms (for example, by using ad hominem attacks or appealing directly to the public), which defeats the consensus on norms required to make the system work well.

drosophilist's avatar

>Sometimes people tell me that actually, my opinion is wrong. This frustrates me.

But why does it frustrate you? Unless you suspect people of arguing in bad faith, don't you want people to tell you when your opinion is wrong? Isn't that how you learn and grow?

Like, I'm an atheist, but if I found compelling evidence for the existence of God, I would believe in God!

Or, as the famous quote goes, "When the facts change, I change my mind. What do you do, sir?"

Simon Says's avatar

You do not wish to support the website 'Every Argument against Everything', whose sole purpose is to provide passive-aggressive URLs to be pasted whenever someone is starting some kind of debate?

Steven Postrel's avatar

I think the "keeping track of what's been said" in branching arguments could be useful, but Scott is overly hung up on logical deduction and fallacies as being part of that.

Stephen Toulmin, the smartest and one of the earliest "postmodernists," cracked this problem back in the 1960s. He was mostly focused on writers keeping track of their own arguments, but the application to social interaction seems obvious. The key is that a) it helps you keep track of your OWN arguments as well and b) it does not impose specific forms of deduction or inference at each link, just allows one to map the context for each assertion among the others.

https://odp.library.tamu.edu/informedarguments/chapter/toulmin-dissecting-the-everyday-argument/

Bob's avatar

The Argument is doing a great job at what can be done in this arena: create examples of people who disagree with one another talking to each other about ideas without it devolving into anger and hatred.

shoefish's avatar

I feel either attacked or vindicated.

I've been having thoughts on how to fix debates for more than a decade and whenever I went to the drawing board I bounced off of it.

My point of pride is either that I'm way past the basic objections raised in this post because "obviously" "solving" debates is more about finding cruxes of disagreement and things like what level of credence people assign to things, or that it always seemed like an incredibly hard problem to solve with no reasonably-low-hanging fruit to start from.

Giaime Ferrigno's avatar

The app idea is stupid, given how arguments work, but your theory that no one likes to argue for the sake of it, or in order to approximate the truth, is simply false. Source: here I am. Countless times I changed my opinion through arguing, and countless times I made my opponents change theirs. Maybe we are not so many, but I kinda disagree that there are just a few of us.

The Solar Princess's avatar

I have never heard of anyone trying to "fix debate", and I struggle immensely to understand how this could even be a specific grant-worthy proposition. I kinda... want to see an example? Not of a good attempt to "fix debate", but of _an_ attempt. What would such a thing look like? Maybe anyone who tried to apply for the grant could show me the proposal? I am so confused

Stephen Lukins's avatar

Apparently Derek Parfit could hold an enormous argument map in his head and work his way through it, claim by claim. For the rest of us, argument maps are useful. They won’t ‘solve debate’ but they do make it more productive, because they make unstated premises explicit, and unstated premises are the cause of a lot of wasted energy when people talk past each other.

It’s also valuable at an individual level- there’s good evidence that it improves critical thinking skills even when the maps are no longer ready to hand. People seem to internalise the logical structure of arguments. From a study in Nature: “We found that seminar students improved substantially more on LSAT Logical Reasoning test forms than did control students (d = 0.71, 95% CI: [0.37, 1.04], p < 0.001), suggesting that learning how to visualize arguments in the seminar led to large generalized improvements in students’ analytical-reasoning skills.” https://www.nature.com/articles/s41539-018-0038-5
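
The premise-surfacing that argument maps do can be sketched in a few lines (the claims and the `unstated` flag here are hypothetical illustrations of the idea, not any particular mapping tool's API):

```python
# Toy argument map: each claim node lists the premises supporting it.
# A premise marked unstated=True is part of the argument's structure
# but was never said out loud -- exactly what mapping makes visible.

class Claim:
    def __init__(self, text, premises=(), unstated=False):
        self.text = text
        self.premises = list(premises)
        self.unstated = unstated

    def unstated_premises(self):
        """Collect every unstated premise anywhere under this claim."""
        found = [p for p in self.premises if p.unstated]
        for p in self.premises:
            found.extend(p.unstated_premises())
        return found

conclusion = Claim(
    "We should ban social media for children",
    premises=[
        Claim("Social media harms children's mental health"),
        Claim("Things that harm children should be banned", unstated=True),
    ],
)

for p in conclusion.unstated_premises():
    print("Unstated premise:", p.text)  # surfaces the hidden normative step
```

Two people arguing the conclusion above may actually disagree only about the unstated normative premise, which never gets examined until the map forces it into the open.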

Ben's avatar

Seeking understanding and unity vs seeking dominance, &c.

May be much more important than any of the rest

Jesse Parent's avatar

Indeed; I’d suggest better articulation of what is going on. “Trying to win in the eyes of other humans” is one thing, “seeking to clearly delineate a position, to juxtapose to a second position” is another. “Seeking to explore a problem space” may be something else entirely.

Peter Gerdes's avatar

Isn't very friendly, kind and understanding AI kinda a way to fix debate? I don't necessarily mean persuade people but I think if most people were convinced to interact with an AI that tried to charitably explain why people who they felt were destroying the country felt the way they did it might do a great deal to increase understanding.

Importantly, the goal needs to not be to persuade people they are wrong, merely to make them appreciate how someone who is well-intentioned and compassionate could end up having those points of view.

Of course you need to motivate people to use an AI with the correct prompt but you could just outright pay them. I don't expect miracles but I do think Americans are similar enough for most of us to at least empathize with how most of the rest of us end up where they are. Just painting the picture of how people on the other side end up feeling pushed by angry accusations from our side could help a great deal.

Maybe not solving debate but reducing the hate.

Recovering Philosopher's avatar

There is a simpler construct to this: arguments are values-weighted and are not readily resolvable through discussion. They are inherently subjective on this basis. Trying to make this objective is a fool's errand. You can't shift subjective values from a pure logic perspective. You can try to have an argument about values prioritization (most likely it won't go anywhere), but most things in life are a manifestation of underlying tensions between non-aligned values.

Poor Imp's avatar

By solving debate, do you mean eradicating debate? In other words, is 'debate' the 'problem'?

Mapping seems rather to be trying to solve the inability to debate, but the inability to debate seems to be a problem rooted in the inability to enjoy gleeful disagreement and apply not only formal logic but formal and even familiar enjoyment of encounter. We debate someone whom we trust, to a certain degree, to also be seeking either a) ultimate good or b) the truth (if they're not the same thing). We flee and reject or reject and attack someone whom we do not trust.

A good argument (and someone who debates well) engages with and attacks ideas, but respects and encounters their fellow arguer. The mix up is the major muddle, isn't it?

Laurentiu Lupu MD's avatar

What seems most important here is that debate may fail online not because it is under-structured, but because it is over-public.

Once an audience is present, argument stops being only a search for truth and becomes a performance of self-protection. At that point, better maps, cleaner logic, and more explicit premises can still help at the margins, but they are trying to repair a medium that has already converted disagreement into display.

The deepest problem with internet debate may be not insufficient reasoning, but too much spectatorship.

Biggus Rickus's avatar

@Patrick since Substack blocked me from replying to your previous posts… This is not a safe space for bitter fake American leftists like yourself. Leftists are emotionally incontinent, intellectually and morally bankrupt!

Patrick's avatar

I’ve read ol’ Teddy, I’m very familiar with the term leftist. Doesn’t really apply to me.

I’m not bitter. Uneducated people voted for a fake billionaire pedophile. It was a rational outcome considering how stupid the majority of people are in the USA, unfortunately. Anyone with a pulse on society has seen the writing on the wall.

I’ve planned accordingly and can weather the shit storm that this administration will enact on poor and uneducated folks in America.

and even though you’re a low IQ slug who couldn’t define what Leftist actually means, it’s alright. I wish you well.

Cheers.

Biggus Rickus's avatar

Your future Komrade! BWAHAHAHA!

Patrick's avatar

Nah, my future is me and my family and friends and that’s it. I’ve curated a world that isn’t really determined by psychopaths on the TV.

Biggus Rickus's avatar

Who told you to make that claim that Trump is a pedophile? The Ladies of the View? Or the Chinese Communist Party?

Patrick's avatar

Dozens of photos with a convicted sex trafficker, millions of mentions in the philes….Look, I’ve done plenty of studying on this topic and I don’t have a horse in the race as I’m not allegiant to any man- I call a spade a spade.

Biggus Rickus's avatar

So anyone who had a photo taken with Epstein is automatically a pedophile? Sophistry is not a substitute for syllogism! You are just not good at this!

Patrick's avatar

If your brain can’t comprehend the historical facts of Donald Trump and his documented history of sexual misconduct and apply it to his relationship with Epstein, then my deductive reasoning also leads me to believe that you’re not as smart as you’re trying to pretend to be by using words you looked up in a thesaurus.

I’m going to state this again, I’m not Left or Right- just putting together a profile of an individual based on factual evidence.

Biggus Rickus's avatar

Except your “evidence” is not proof of your accusation. When are you silly leftists gonna stop repeating the same zombie lies?

Biggus Rickus's avatar

Your intellectual laziness is staggering !

A. Jacobs's avatar

From my perspective, arguments fail because they assume a shared representation of reality that doesn’t exist. People aren’t disagreeing inside the same map, they’re using different maps entirely, built through different ways of compressing and organizing what matters. Some lean structural, others narrative or social, and those don’t translate cleanly. Trying to formalize the argument without aligning those underlying representations just adds structure on top of misalignment. That’s why it feels like it should work but consistently doesn’t.

Anlam Kuyusu's avatar

I understand that "solving debate" or "solving disagreement" with an app-like tool that merely lists the alleged premises and the alleged conclusions may be unrealistic. Regardless, isn't it fruitful to think about why well-meaning, good-faith debaters disagree? Should that not make us doubt our own opinions as somewhat accidental?

For instance, I think it's fair to say that most disagreements in analytic philosophy can be traced to the intuitive responses participants give to certain thought experiments. And unlike in math, where counter-intuitive truths can be established via other methods, in analytic philosophy it's intuition all the way down.

Balaji's avatar

(1) The utility of this kind of tool is much higher for internal investment committee / corporate decisions than for public political debates.

Within a company or fund where one is deciding what the future is going to be and whether this particular allocation of capital will make money, everyone is on the same team and truth seeking is real.

(2) Also, it can be overkill at times, but an ACH-style tool (analysis of competing hypotheses) can also be useful for debugging complex experiments in biotech. Here you are pipetting clear liquid into clear liquid, and often don’t know exactly how your experiment went wrong without systematically enumerating many different hypotheses. The goal is to come up with debugging micro-experiments that you run to get a differential diagnosis.

(3) In general, this type of tool would be useful for high stakes corporate, financial, biotech, and medical decisions. Only when there is time and calm to make the decision, a lot of information, a lot riding on the outcome (both positive and negative), and everyone in the discussion is on the same team.
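
To make the ACH procedure in (2) concrete, here is a minimal sketch of the scoring step (the evidence items, hypotheses, and the ±1 scale are all hypothetical illustrations, not any specific tool):

```python
# Minimal sketch of Analysis of Competing Hypotheses (ACH) scoring:
# each piece of evidence is rated for consistency with each hypothesis,
# and hypotheses are ranked by how much evidence *contradicts* them.
# ACH favors the hypothesis with the least inconsistent evidence,
# not the one with the most consistent evidence.

# Scores: +1 consistent, 0 neutral, -1 inconsistent (toy scale).
evidence = {
    "no signal at 30 min":     {"bad reagent": -1, "wrong temp": +1, "pipetting error": +1},
    "positive control worked": {"bad reagent": -1, "wrong temp":  0, "pipetting error": +1},
    "repeat run also failed":  {"bad reagent": +1, "wrong temp": +1, "pipetting error": -1},
}

def inconsistency(hypothesis):
    """Count how many pieces of evidence contradict the hypothesis."""
    return sum(1 for scores in evidence.values() if scores[hypothesis] < 0)

hypotheses = ["bad reagent", "wrong temp", "pipetting error"]
ranked = sorted(hypotheses, key=inconsistency)
print(ranked[0])  # → wrong temp  (least contradicted hypothesis)
```

The surviving hypothesis then suggests which debugging micro-experiment to run next, giving the differential diagnosis the comment describes.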

Demarquis's avatar

This is like curing the common cold. There is no "debate" to solve. There are millions of different types of debates that have millions of different things wrong with them.

citrit's avatar

I think modern university parliamentary debating has half-solved debate. at the very least, there are consistent criteria for winning/losing, and the debates are consistently high quality (despite the prep time being only 15 minutes). here’s an example: https://youtu.be/RG53F3qSBW8?si=ErzYo-w9BPb5Hw3l&t=942 (I recommend listening to ryan lafferty’s speech first, the speaker at 15:45)

judging manual: https://drive.google.com/file/d/1pXoKtAdigoL5w94cKDcQ8nCQ5fN06pMp/view?fbclid=IwY2xjawNUrGlleHRuA2FlbQIxMABicmlkETFiYnRnMlJLaXFFSlZaU2h4AR4k006NeLE1FzAxvlj2AMLN1oFN7xgV7OFyalbQLfqSO-bd3AhKvUzYx1G5RQ_aem_ZQ3KbEvEj1dJT8-svcSK-w

andy culinan explains how it works: https://www.youtube.com/watch?v=askTnN0CNB0

“stock arguments": https://docs.google.com/document/d/1I4uTCoGVlEbCrt7dKC3Qr4GL2Dq5x3Idmvz251ZEJFo/edit?tab=t.0#heading=h.19q9ib19j54g