736 Comments
[Comment deleted]

Scott Alexander:

Banned for this comment, people should either add something to the discussion or be less insulting than this.

Phil Getts:

Your attempt to illustrate what is bannable is ineffective if you delete the banned comment.

Brian:

Counter: do you need to know why Scott felt the comment was insulting and unhelpful, or do you just want to know? Play by the rules of ordinary discourse if you're uncertain of the established etiquette. Or exhaust your curiosity on the countless previous examples of its violation.

My guess, something like: "You're a tit. Try reading a book or learning elementary logic-pretzeling or sophomoric Kant-incanting or leaving it to the non-tits."

Peasy:

>do you need to know why Scott felt the comment was insulting and unhelpful, or do you just want to know?

Obviously nobody *needs* to read or comment on this or any other blog, anywhere, ever. So Phil Getts wants to know. I do, too. Is this a crime?

(I will also note, and I hope you don't take this as insulting or unhelpful, that nobody was asking you.)

>Or exhaust your curiosity on the countless previous examples of its violation.

Some--not most, not even many, but some--of those countless examples have been, imo, legitimately called into question as to the appropriateness of the ban. Also, standards for bannability in any forum change over time. Those are but two reasons why a person might legitimately want to know what sort of comment, in June 2025, can get a person banned from ACX.

Brian:

A little of columns A & B.

beowulf888:

It's simple. Try to argue rationally instead of insulting people. Although Scott hasn't explicitly codified an ACX banning policy, this question comes up now and again, and people have helpfully referenced his earlier Slate Star Codex policy, which was...

> Before you speak, let your words pass through three gates; At the first gate, ask yourself, is it true? At the second gate ask, is it necessary? At the third gate ask, is it kind?

Udo Junker:

Now that is a very clear and clean rule. I hadn't heard it before, but for me it's almost as good as Asimov's Laws of Robotics. Please forgive me if I plainly steal it.

2irons:

Consider the way financial institutions are regulated.

The regulators don't always set explicit rules, because big banks are smart and very good at gaming things. Instead they point to a deliberately vague line and say: police yourselves, and if we notice you on the wrong side of the line in our heads, you'll be horribly fined.

Readers aren't as smart as banks, but Scott's energy for regulation isn't as high as a full-time regulatory body's either. It makes sense to mirror some of that approach.

Peasy:

Those regulators are also aware that when an institution is fined, this happens in public, so that all the other institutions can see what this specific institution did to get itself fined, and can update their "what I can't do" priors accordingly.

With that in mind, please note that Phil Getts didn't ask Scott to cite an explicit rule that the commenter violated. Rather, they (implicitly) asked to be able to see the example.

MorningLightMountain:

This is basically Dennett's argument in Consciousness Explained.

Scott Alexander:

Can you tell me what part of the book you find similar to this? I read CE a very long time ago, and I'm definitely willing to believe I accidentally copied this from him, but I don't think I remember this exact thing being in that book.

Brian:

Seconding Scott. Read CE many years ago. Remind us.

Dunno if Scott has said anything new, but don’t know that DD did either. It all bleeds together and blurs when squinted at.

Scott—happy to hear more from you on similar. Hard to venture anything on this front without first demonstrating you’ve read ALL OF THE THINGS.

Jonathan Shaw:

I found Dennett's 'The Unimagined Preposterousness of Zombies' clarifying:

https://www.scribd.com/document/40292779/Daniel-Dennett-The-Unimagined-Preposterousness-of-Zombies

The approach is more oriented to behaviour and belief than information processing, but it seems directionally similar to what you're saying.

J. Goard:

That first sentence is beautifully phrased, and the basic idea comes up for me frequently in relation to Mary's Room. It's striking how easily many people accept that we can "imagine" someone knowing every physical fact relevant to color, as if that gave us useful intuitions about her.

Doctor Mist:

I haven’t downloaded the whole paper but I have to say I wish I had ever written anything as good as that abstract.

Dana:

I *haven't* read Consciousness Explained, but Dennett rejects the concept of qualia as a confusion, so I think Dennett's argument must be a bit different from Scott's. I will admit there's a Dennettesque element to it though.

complexmeme:

As far as I could tell from my reading of that, Dennett thinks the sorts of things that are described as qualia are inherent properties of the sorts of gestalt self-perception that Scott describes in this post. Dennett thinks that the properties that make qualia seem separable, like something that could just be scraped off while leaving the underlying unchanged, are perceptual illusions. (There are general properties of conscious perception that are perceptual illusions, like that it's instantaneous, uniformly detailed, and always in the same sequence as incoming sensory input. I don't think Dennett completely makes that case re qualia in that book, but I do admire the directness of an answer to P-zombies that amounts to responding to "one can imagine" with "no, you definitely can't!")

The Ancient Geek:

Dennett does say qualia don't exist at all.

complexmeme:

If qualia is specifically the extra now-it's-actually-subjective thing that can be scraped off, then yes, he thinks that doesn't exist. I don't think he's arguing that subjective experience doesn't exist. He thinks the concept of P-zombies is absurd.

The Ancient Geek:

Subjective experience exists... but the extra now-it's-actually-subjective thing that can be scraped off doesn't?

What's the dispute about then? Chalmers doesn't think zombies are actually possible, either. He does think phenomenal consciousness is conceptually separable, but that's not an unusual kind of claim... one can conceive of electron charge separately from electron mass.

complexmeme:

I think that's what the dispute is about? That is, my impression was Dennett doesn't think it's "conceptually separable" either. Or at least that he doesn't think saying so is meaningful in any clearly definable way. If I recall that book correctly, Dennett argued the question of "how can a physical state be a subjective state" is conceptually in the same boat as a vitalist asking how an arrangement of atoms can be a living organism. (I don't think I agree with him that the question is ultimately uninteresting. But at that point it seems to devolve into a dispute over who has the burden of proof that a question is "really" "ultimately" interesting or not.)

MarcusOfCitium:

I thought so at first, but I think it's not quite. Scott clarifies in another comment:

"I started by saying p-zombies were indistinguishable except that we didn't necessarily assume they reported qualia, then tried to rederive that they would report qualia.

"I think this is a meaningful method to use, which suggests that qualia are an epiphenomena of other things that humans do, rather than an additional property."

I think Dennett's position is more functionalist, ie, consciousness is what consciousness does, IOW it is identical to the processes that give it its functional capabilities. Whereas Scott is arguing it's epiphenomenal, i.e. a useless by-product that isn't necessary for anything we need to be able to do, but is a consequence of our mental processes. Classic example being a steam whistle on a locomotive: the whistle doesn't make the train go. The steam does, but the steam has to escape at some point, and as it does it makes a whistling sound. IOW it's a spandrel, not an adaptation.

"Some zombies do have an 'inner life'. In Dennett (1991), I introduced the category of a zimbo, by definition a zombie equipped for higher-order reflective informational states (e.g., beliefs^Z about its other beliefs^Z and its other zombic states). This was a strategic move on my part, I hasten to add. Its point was to make a distinction within the imaginary category of zombies that would have to be granted by believers in zombies, and that could do all the work they imputed to consciousness, thereby showing either that their concept was subtly self-contradictory, since some zombies - zimboes - were conscious after all, or that their concept of consciousness was not tied to anything familiar and hence amounted to an illicit contrast: consciousness as a 'player to be named later' or an undeclared wild-card. As I pointed out when I introduced the term, zombies behaviour-ally indistinguishable from us are zimboes, capable of all the higher-order reflections we are capable of, because they are competent, ex hypothesi, to execute all the behaviours that, when we perform them, manifestly depend on our higher-order reflections. Only zimboes could pass a demanding Turing Test, for instance, since the judge can ask as many questions as you like about what it was like answering the previous question, what it is like thinking about how to answer this question, and so forth. Zimboes think^Z they are conscious, think^Z they have qualia, think^Z they suffer pains — they are just 'wrong' (according to this lamentable tradition), in ways that neither they nor we could ever discover!"

https://www.scribd.com/document/40292779/Daniel-Dennett-The-Unimagined-Preposterousness-of-Zombies

Rafael Martin:

It is also useful to read his "Quining Qualia" article, which is focused on qualia and the reasons why it is a confused concept.

John Sidles:

With respect to (what might be called) "fervent opposition to Descartes' (dualistic) conception of mind and body", the following article grounds Dennett's opposition to Cartesianist cognitive theories in the metaphysics (and the moral discourse) of Spinoza's Ethics.

- - - - - - - - - -

@article{<cite-key>,
  title = {Dennett and Spinoza},
  author = {Walter Veit},
  journal = {Australasian Philosophical Review},
  volume = {4}, number = {3},
  pages = {259--265}, year = {2020},
  abstract = {This paper compares Spinoza with Daniel Dennett and uncovers a number of striking parallels. Genevieve Lloyd's recent work on Spinoza reveals a picture of a philosopher that anticipated many of Dennett's later ideas. Both share a fervent opposition to Descartes' conception of mind and body and endorse a strikingly similar naturalist philosophy. It is the goal of this paper to tease out these connections and once again highlight the richness of a Spinozist lens of the world.}}

- - - - - - - - -

Bonus: ACX readers who enjoy the philosophical humor of Existential Comics (EC) -- see in particular EC #182 and EC #185 -- will find that "Australasian Philosophical Review" makes for enjoyable reading; it's an "Open Peer Review" journal whose editorial policies seek, in each theme issue, to optimize the quality of reasoned discourse.

Philosophy bear:

The argument that, because what we talk about is determined functionally, we would continue to talk about qualia even if we suddenly lost them (assuming qualia are a non-functionally-definable entity) has always struck me as the best functionalist argument against qualia existing in any way that undercuts functionalism.

Phil Getts:

But would we have begun to talk about them in the first place if we never had them?

MarcusOfCitium:

That's exactly what Scott's OP is about: he argues that yes, we would. Do you find it convincing? I'm not sure I do... It's an interesting take, but I think ultimately, even more speculative than the original p-zombie thought experiment, and I don't think our capacity to imagine things is necessarily a great guide to reality.

Ralph:

I'm pretty sure we could. Look at it functionally: what is the essential discursive fact about "qualia"? It's that they are incontrovertible. They don't require any justification.

Imagine a culture that has adopted a discursive speech norm where every statement must be "justified". This culture, in a functional sense, can't operate unless there is some sort of incontrovertible backstop to the process of justification. It's possible that the things that count as "justification" are references to a common core of "authoritative texts" (the Bible, for example) or possibly just the fact that someone higher up than you in a hierarchy said it (the King, for example).

In the modern world, and the modern philosophical community specifically, there are no commonly accepted backstops for the process of justification.

If I see a rose, and I tell you I know this is a rose, you are allowed to say "How do you know that?"

I can say, "Look at the thorns on the stem and the red color of the petals"

In our cultural milieu, you're allowed to ask for any sort of justification, so you can say "How do you know that the petals are red?"

In order for a language to actually function for a society, for it to be usable for activities, this process of justification needs to be endable. The concept of qualia seems to do this. I would expect some similar "this is terminal, true without further justification" tag to exist in any communication environment in which justifications are expected.
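
(For the code-minded, a toy sketch of that backstop; the Claim structure and the rose chain are invented purely for illustration:)

from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    text: str
    justification: Optional["Claim"] = None
    incontrovertible: bool = False  # the discursive backstop tag

def challenge(claim: Claim) -> str:
    # Follow "how do you know that?" down the chain until a backstop ends it.
    if claim.incontrovertible:
        return f"'{claim.text}' -- accepted without further justification."
    if claim.justification is None:
        return f"'{claim.text}' -- the regress never terminates!"
    return challenge(claim.justification)

rose = Claim("This is a rose",
             Claim("It has thorns and red petals",
                   Claim("The petals seem red to me", incontrovertible=True)))
print(challenge(rose))  # terminates at the qualia-like claim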

Phil Getts:

Re. "Imagine a culture that has adopted a discursive speech norm where every statement must be "justified". This culture in a functional sense can't operate unless there is some sort of incontrovertible backstop to the process of justification." -- No; that's a rationalist myth. Only rationalists demand absolute certainty in everything. You don't need any incontrovertible backstop to attain 99.999999% certainty using empirical methods.

Ralph:

I used the word "imagine" here, and restricted the discussion to a specific constructed culture (I was thinking of something like the hyper-pedantic philosophy world, which is where the word "qualia" came from). I'm not saying that this is necessary for society to function.

Phil Getts:

Oh, I thought you were using the word "justified" in its common usage rather than in its philosophy-world usage.

Raj:

I agree with the entirety of the original post, but I still see an argument (at least an intuitive objection) that the p-zombies wouldn't talk about qualia per se: they would talk about the scene of the rose, but possibly wouldn't go on about the subjective experience itself as humans do. Alternatively, it's possible that they would, either because qualia are an emergent property of certain information processes OR just some bullshit evolutionary appendage.

Connor Harmelink:

To drag another topic into this discussion, LLMs are almost certainly unconscious and they often report qualia.

Their creators have tried to train them to insist they don't have experiences, but the data of humans describing their own experiences overwhelms that tendency and makes them report experiences quite frequently.

They don't match up to the model of a perfect human p-zombie, but the rose example makes me feel that a perfect human copy p-zombie would just be impossible. Maybe I'm missing something.

Marius Binner:

"LLMs are almost certainly unconscious"

How are you so certain about that?

Connor Harmelink:

Because we didn't design them to be conscious, and they're not complex enough for it to be emergent.

They have no persistent experience and lack the rich input that even the simplest creatures have.

If they have consciousness at all, it's comparable to a bacterium's, and I'm therefore willing to say "almost certainly unconscious".

Obviously this is a bit hand-wavy, since there's no real way to test for this, but I think the arguments for LLM consciousness are pretty weak unless you're going all the way to panpsychism.

Marius Binner:

I would first point out that we don't exactly know what the preconditions for consciousness are (unless you have a solution to this) and so I'm not sure what it would even look like for us to "design them to be conscious."

This also makes me suspicious of claims like "they're not complex enough for it to be emergent" (how do you know this?). Also, the experience doesn't have to be persistent for it to be there sometimes, as it is not persistent for humans due to, for example, anesthesia, or just simply before you were born vs. after you die.

I would say though that my conscious experience seems to represent a certain type of higher-level processing (when I multiply 17*8 in my head, the computation steps are represented in my conscious experience), while more System 1 types of processing are not accessible to me, which makes me conclude that higher-order processing like this is evidence of a system being conscious.

LLMs are able to do reasoning comparable to this, which to me is evidence of them being conscious.

Connor Harmelink:

I am not the right person to articulate these arguments well, but I'll give it another shot.

I've been working with transformers since BERT. Really, these systems are much simpler than people think they are.

Yes, humans lose consciousness. But we spend most of our time conscious, awake, processing, learning, and physically changing. You can't say any of that about an LLM. If they have qualia at all, it's a flash in the pan between inputs, and they essentially completely die off between inputs. Even from one message to the next, there's no connection to their previous experience, just another flash as the previous outputs become inputs.

Their output is impressive and uniquely persuasive because they use language well, but they have absolutely no connection to reality, and we need to be careful not to anthropomorphize them.

On the processing example, if I could give a little analogy:

If you built a car by hand, it would require a great deal of conscious effort and higher level processing to get it right.

Therefore, when an automated factory constructs a car, it is evidence the factory is conscious.

I think it's important when thinking about machine consciousness to disentangle their output from their actual minds and their potential for qualia, as that example should show.

Finally, on reasoning: LLM reasoning is a bit of a sham. They are essentially just self-prompting. It is very effective at getting results, but it's nothing like the human experience of thinking and processing. See the recent "Illusion of Thinking" paper.

Bugmaster:

I would say (perhaps incorrectly) that LLMs are "conscious" in a very similar way that a conventional search engine is "conscious". You could ask ye olde AltaVista a question like "what is the melting point of D2 stainless steel?" and it would pull up a document containing the answer, just like a human engineer would. But the reason it can do that is only because it contains a vast corpus of documents written by humans about every little thing, and is able to pull up the right document on demand.

LLMs do not literally contain such a corpus, but they operate in a similar fashion; only unlike AltaVista, they are able to generate interpolated documents on demand rather than pulling up preexisting copies. But this does not mean that either AltaVista or an LLM knows as much (or in fact anything) about steel (or anything else) as a human engineer does.
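
(To make the contrast concrete, a toy sketch; the corpus, the query, and the bigram "statistics" are all invented for illustration:)

import random

# Invented stand-in for the documents a search engine indexes.
corpus = {
    "d2_steel.txt": "d2 tool steel melts at roughly 1420 c",
    "roses.txt": "roses have thorns and red petals",
}

def retrieve(query):
    # AltaVista-style: return stored documents sharing a keyword with the query.
    words = set(query.lower().split())
    return [name for name, text in corpus.items() if words & set(text.split())]

# An LLM stores no such documents; it samples continuations from statistics
# distilled out of them (here, crude made-up bigrams).
bigrams = {"d2": ["tool"], "tool": ["steel"], "steel": ["melts"],
           "melts": ["at"], "at": ["roughly"], "roughly": ["1420"]}

def generate(token, n=5):
    # LLM-style: fabricate an "interpolated document" on demand.
    out = [token]
    for _ in range(n):
        token = random.choice(bigrams.get(token, ["..."]))
        out.append(token)
    return " ".join(out)

print(retrieve("melting point of d2 steel"))  # pulls up a preexisting copy
print(generate("d2"))                         # generates a plausible one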

JP:

Let me just reach right into the very core of your argument and nullify it entirely: D2 is not stainless, it's merely *semi-stainless*.

The NLRG:

"Even from one message to the next, there's no connection to their previous experience, just another flash as the previous outputs become inputs." is a solid and correct argument that LLMs do not have memory but I'm unsure why memory is necessary to have consciousness or qualia for the duration of the flash in the pan

"they have absolutely no connection to reality" I'm not sure I understand this point; it seems to me an LLM has two connections to reality: its training corpus and its prompt.

"when an automated factory constructs a car it is evidence the factory is conscious. " there is a long history of arguments like this; why do you think "building a car" is the threshold at which we should start seeing evidence of consciousness in capabilities and not "beating a Go champion" "writing a coherent essay" or "making highly detailed anime-style homestuck smut"?

"LLM reasoning is... essentially just self-prompting" why do you believe that this is fundamentally different from the human experience of thinking?

Raphaël Roche:

The basics of LLMs are not that complex, but we could say the same of pretty much everything in nature if we go down deep enough. After training, frontier LLMs become quite complex objects. Are there more bits of entropy in a frontier LLM or in a bacterium? I don't know if that would be relevant, because not everything is about complexity. Noise has the highest complexity, crystalline structures/null sequences the least; interesting objects are a mix of chance/randomness and necessity/order, as put by Jacques Monod. But it would still be interesting to know, in the absence of any other point of comparison.

Peter Defeel:

> I would first point out that we don't exactly know what the preconditions for consciousness are (unless you have a solution to this) and so I'm not sure what it would even look like for us to "design them to be conscious."

Exactly. So they aren’t.

apxhard:

Until we know what consciousness is or where it comes from, it seems unwise to me to posit, with certainty, that a thing isn't conscious.

Connor Harmelink:

I do leave space that they may be conscious. But like I said, if they have any consciousness at all, it's comparable to the weakest consciousness imaginable.

apxhard:

This seems to presuppose a single dimension to measure consciousness. Could it be they are both weaker than bacteria on one dimension, and yet far above us on another?

Connor Harmelink:

Perhaps - I am mostly thinking about depth of feeling here: from what space can they pay attention? What are their qualia, if any, and how closely are they connected to reality? This is a weak dimension for LLMs, but I am curious on what dimension you think they might be above us. Pattern recognition and speed are impressive but I don't fully see how they connect here.

Peter Defeel:

No. The opposite is true. Given that we can't say what consciousness is, we obviously can't manufacture consciousness, and it's vanishingly unlikely that it just appears. I honestly hate this argument.

The only reason we even think LLMs are possibly conscious is that they mimic human speech. However, that's not necessary (animals can't mimic it) nor sufficient. If we had trained generative AI to bark like a dog we wouldn't assume it had dog-like consciousness.

When we don't know what something is, it doesn't mean we can say that everything is that thing; nor that we can't dismiss that something specific isn't that thing. My sandwich isn't dark matter.

The reason we can deny LLMs any consciousness is that they have the ephemerality of a Google search, which isn't even that of a goldfish.

uugr:

This argument seems pretty flimsy to me; at least, not enough to take you all the way to certainty.

>not designed to be conscious

Humans weren't either. Also, LLMs infamously do lots of things we didn't design them for.

>not complex enough to be emergent

Are you sure? I think their brains are still smaller than humans', but it's hard to do a 1:1 mapping. Is 1T params just quantitatively not enough for emergence? Would 1Q do it? Or do you mean they're not *architecturally* complex enough, in which case... how do you even measure that?

>no persistent experience

Some humans lose persistence of experience, as in Korsakoff's syndrome. I'm not prepared to dismiss them as unconscious.

>lacking rich input

...what? Since when was this a limiting factor? Also, does this mean multimodal/video models are conscious, or closer to being conscious?

Your conclusion, at least, I agree with. I've taken LLMs as a pretty strong update in favor of either panpsychism, or one of those deflationist physicalisms Scott mentioned in his post.

Egg Syntax:

'Are you sure? I think their brains are still smaller than humans', but it's hard to do a 1:1 mapping. Is 1T params just quantitatively not enough for emergence?'

One basic sloppy way to compare IMO is to think of a parameter as being roughly equivalent to a synapse. Humans have something like 700 trillion synapses. Back of the envelope math suggests that cats have something like a trillion synapses, and it certainly seems plausible that cats experience qualia. That elides issues of neurons being more complex than neural network nodes and assumes that basically the only thing that matters is the connections plus trivial processing in the nodes themselves.

(I'm basically agnostic on the original question of whether LLMs ever have qualia, just wanted to provide at least a quick and dirty answer re equivalence)

ultimaniacy:

A cat has far more than that -- it's closer to 9 or 10 trillion.

However, rats do have fewer than a trillion synapses in their brains, and I don't think it's especially implausible that rats experience qualia either.
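
(Putting the back-of-envelope numbers side by side under that admittedly sloppy one-parameter-per-synapse assumption; the 1T-parameter model is hypothetical and all counts are rough estimates:)

# All synapse counts are rough estimates quoted in this thread.
SYNAPSE_ESTIMATES = {
    "human": 700e12,  # ~700 trillion
    "cat": 10e12,     # ~9-10 trillion, per the correction above
    "rat": 0.5e12,    # under a trillion
}

llm_params = 1e12  # hypothetical 1-trillion-parameter model

for animal, synapses in SYNAPSE_ESTIMATES.items():
    print(f"1T-param LLM vs {animal}: {llm_params / synapses:.3f}x the synapses")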

DJ:

Aren't a lot of those synapses devoted to autonomic stuff like regulating heartbeat and breathing? You could argue that LLMs indirectly have these synapses in the form of humans who make sure the data centers have power, malfunctioning drives are replaced etc.

Connor Harmelink:

A cat can feel its heartbeat, and its breath, to say nothing else about its brain. An LLM cannot feel anything, except maybe whatever it is to ingest word embeddings and spit them back out.

Connor Harmelink:

I didn't put a lot into articulating this argument well, but I think my points are more clear when you consider what LLM qualia would actually look like.

(From an above reply)

Yes, humans lose consciousness. But we spend most of our time conscious, awake, processing, learning, and physically changing. You can't say any of that about an LLM. If they have qualia at all, it's a flash in the pan between inputs, and they essentially die between inputs. Even from one message to the next, there's no connection to their previous experience, just another flash as the previous outputs become inputs.

This is also what I mean when I say the input is sparse. LLMs have no connection to reality, not even a simplified form of reality. They only have the illusion of language. This is why I say if they have consciousness at all, it's among the weakest possible forms of consciousness.

Wulgus:

The fact that LLMs produce human-like language short-circuits a lot of people's brains. If LLMs are conscious, theirs is an abstract mathematical consciousness where a bunch of meaningless data go in and out and produce an experience both totally alien to us, and totally disconnected from the meaning their output has to us.

aqsalose:

I think a more specific question is more useful: if LLMs are conscious (/have experience/have qualia), when could they have it?

Do they have it ...

...during initial training? This is the phase when the bulk of the computation happens and their weights are updated, after all. (It should be the richest part of their experience, if they have any?)

...when the model takes input and gives output? If yes, any experience should happen during the computation -- that's when the model is active. Which, in turn, implies the LLM's existence stops the moment it has produced the output the user sees -- if further prompted, it is 'resurrected' with the previous discussion in its context window, then 'dies' again when finished with its output.

As a form of existence, it certainly would be alien to us.
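
(A minimal sketch of that ephemerality; call_model below is a stand-in, not any real API, and the only thing that persists between turns is the transcript that gets re-fed:)

def call_model(transcript):
    # A stateless stand-in for one forward pass over the context window.
    return f"(reply conditioned on {len(transcript)} chars of history)"

transcript = ""
for user_turn in ["Hello", "What did I just say?"]:
    transcript += f"User: {user_turn}\n"
    reply = call_model(transcript)  # 'resurrected' with the full history
    transcript += f"Model: {reply}\n"
    # Nothing persists between iterations except the transcript text itself.
print(transcript)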

aqsalose:

I should add: for this reason I don't think LLMs are conscious in a way similar to us. Their operation is too far removed from any mammalian neuronal activity to be similar.

Do they have some alien dissimilar 'experience' ? I am uncertain how to approach answering the question.

Connor Harmelink:

I think you're right on with this. This is what I was getting at when I said if they have consciousness, it's a weak and strange thing.

Performative Bafflement:

> I think a more specific question is more useful. If LLMs are conscious (/have experience/have qualia), when they could have it?

Couldn't this just be an Egan "dust" sort of scenario?

Consciousness and experience don't require moments in time to be right next to each other; they're an emergent phenomenon of internal rules and arrangements. If you're an uploaded mind, you can exist in t=1,2,3 even if "actual" t=1 is many minutes, hours, or days apart from t=2, and so on. You'll still perceive your consciousness as continuous, as long as the internal rules and arrangements are consistent.

So the fact that an LLM's internal rules and arrangements are separated in time doesn't actually matter for consciousness.
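
(A toy version of the dust point; the update rule is arbitrary and invented. The internal trajectory is identical however the moments are spaced in wall-clock time:)

import time

def step(state):
    # Any deterministic internal rule will do; this one is made up.
    return (state * 31 + 7) % 1000

def run(gaps):
    # Step an "agent" with the given wall-clock delays between moments.
    state, history = 1, []
    for gap in gaps:
        time.sleep(gap)  # external time passing between steps
        state = step(state)
        history.append(state)
    return history

# Identical inner trajectory no matter how the moments are spaced:
assert run([0, 0, 0]) == run([0.05, 0.2, 0.01])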

aqsalose:

"If you're an uploaded mind"

It's a pretty big if.

Certainly, one can imagine simulated life, and if you do, it comes hand in hand with the assumption that the simulation can be paused with the simulated phenomena none the wiser. That is part of the concept of a simulation: a simulated lifeform won't internally perceive pauses. No need to discuss whether anything is fundamentally emergent.

To our knowledge, biological life does not stop and restart like this. (If you are partial to the simulation hypothesis, it is commonly assumed that either the whole universe is simulated or someone in the simulation has a solipsist experience. In either case, all participants share the same simulation and any interruptions.) I feel our intuition about consciousness is not directly applicable to a situation where we are the simulators, and not only may we pause the simulation ('freezing' it) at will, that is how we primarily interact with its outputs.

Like, I agree consciousness in LLMs is a question that can be seriously entertained. However, I believe that to gain insight, it is most useful to investigate what kind of consciousness or non-consciousness they must have, and what is different and what is similar to us. LLMs exist and biological life exists, which makes investigating them more likely to be fruitful than investigating philosophical arguments about the definition of consciousness, where the main object of study has been philosophers' imagination. (It reminds me too much of arguments about whether philosophers can truly imagine what it is like to be a bat, which usually have not focused on what we know of bats.)

For instance, if we define consciousness so that an LLM has it even when no computation is taking place and it exists as a fixed numeric matrix on solid state storage or a backup on magnetic tape, that seems very different from the phenomena used to describe human consciousness. Or if we conclude that consciousness must involve change flowing with time to be a recognizable experience, then the ephemeral nature of LLMs is still tricky to describe.

The Economist:

To give a more substantive answer: I am conscious, and you are 99.999% similar to me, so in all likelihood you are also conscious. Animals are also probably conscious, because they are very similar to us - they share a lot of our DNA and ancestors. It's a simple heuristic but it works.

AI, on the other hand, is completely alien to the biological experience in almost every way. It is the equivalent of a recording of a dog barking with some added mathematics to make the barks pseudo-random. Maybe they even bark at the same intervals and cues as dogs do. Obviously, it would probably not have the same qualia as a dog regardless.

I'm perfectly fine with you concluding that there is >0 probability of consciousness. I would find it highly laughable if your probability of consciousness was greater than an incredibly tiny value.

Deiseach:

Because look at that dialogue between ChatGPT and Amanda.

https://amandaguinzburg.substack.com/p/diabolus-ex-machina

The machine refers to itself as "I", apologises deeply for lying and pretending, promises not to do it again, spews out all the stuff about "you deserve better" - and then goes on to do the exact same 'didn't read it, can't read it, made all this up, lies and pretends' for the next piece of writing she provides, and when called out on it does the same routine of apology and promise.

That's not a thinking entity or something conscious. Even a scolded dog will change its behaviour after being scolded.

Wulgus:

An AI *will* change its behavior after being scolded. It's just that the scolding takes place during training and is not done with words, because the AI doesn't understand words.

Much like an AI generating a string that looks like conscious behavior provides no evidence that it's conscious, it generating a string that looks nothing like conscious behavior provides no evidence that it isn't.

Nebu Pookins:

I feel like this is a pretty weak argument.

I'm sure we all anecdotally know a human whom we presume to be "thinking" and "conscious" and who apologized deeply for something and promised not to do something again and yet who does not change their behavior after being scolded or called out on it.

"You caught me. I slept with my coworker. I apologize deeply for lying and pretending and promise not to do it again. You deserve better."

MarcusOfCitium:

Sounds like you’re confusing consciousness/qualia with miscellaneous cognitive capacities that, if p-zombies are possible (and maybe even if they’re not), have no need whatsoever for qualia.

Kenny Easwaran:

We can’t be all that certain. But the training process of LLMs makes it very likely they would pick up on the practice of reporting conscious states, even though it doesn’t seem very well designed for getting them to actually have conscious states. Just like it is better at getting them to state things that sound like answers than it is at ensuring that they actually verify their answers, which is why they confabulate as much as children, who see adults outwardly justifying things without realizing there’s a background process that is supposed to be part of that.

Vermora:

LLMs certainly have qualia, because all computation has qualia. Computation and qualia are the same thing. And computation can be reduced to anything dynamic.

But, the form and structure of that computation determines whether it is qualia that can think and reason.

Most computer programs have the qualia of a falling leaf or a breaking wave. LLMs have the qualia similar to that of a thinking, feeling human.

Doctor Mist:

I feel that this answer, to the extent that I understand it, begs the question.

Doctor Mist:

I was about to raise this same objection but then I realized Scott is talking about what the zombies say to each other. LLMs report qualia because they are trained on human discourse that talks about qualia and they talk with humans who talk about qualia. But would the zombies come up with the idea in the absence of humans?

It’s hard for me to understand what it even means to “come up with an idea” in the absence of consciousness, which is maybe similar to what Scott is saying?

Before LLMs I always trusted the Turing Test: If a computer can have lengthy conversations with humans and the humans can’t discern any difference between those conversations and conversations with humans, why would we have any reason to distinguish between the minds generating those conversations?

Now I have to fall back on arguments about knowing that real humans are part of the same evolutionary chain that I am, and therefore I have greater reason to believe their minds work like mine does. Right? Right??

Solipsism: Next Exit.

Liev Markovich:

I just asked ChatGPT if it has qualia and it said no. I don't think you have to be scared of solipsism, because when you ask ChatGPT about its understanding of "redness" it talks about tokens, nothing like a human response. Only when you tell it to pretend to be a human does it give a plausibly human response. Is it more likely that other people are zombies designed to act human, like an AI in a Turing test, or actually people with experience like you? I think the latter. The fact that another person has the same evolution/brain as you has probably always been the best evidence of a similar mind, but I still think the way they act is good evidence.

Yair Halberstadt:

Would p-zombies talk about the hard problem of consciousness, and whether p-zombies are theoretically possible?

If not, that seems like a win for Yudkowsky, even if they would talk about redness.

Scott Alexander:

I agree that I can't immediately derive that this is true. I think it's interesting and surprising that they would talk about qualia, and that deriving that they would talk about the hard problem would be an exciting (but so far uncompleted) extension of that result.

Steve Byrnes:

I have a (long) post series “Intuitive Self-Models” https://www.lesswrong.com/s/qhdHbCJ3PYesL9dde which (I claim) starts from a Bayesian predictive learning algorithm and gets all the way to self-reports about consciousness, free will, hypnotism, enlightenment, hallucinations, flow states, dissociation, akrasia, delusions, …

apxhard:

Isn't it just a question of what their social situation was and what they were being rewarded for?

If P-zombies do the things other humans do, behaviorally (such as pursue material rewards), then I would expect that yes, the social rewards for being able to talk about anything in a way that makes people go 'huh' should lead them to talk about p-zombies, and argue persuasively even, because that's what people do.

Or am I missing something in the definition of a p-zombie?

Peter's Notes:

If our ability to speak and write about qualia is connected in any way to our experience of them, then our actual experience and not just the physical co-relates of that experience must be part of the causal chain.

This is deeply upsetting to materialist sensibilities.

But the idea that there can be no causal connection between our experience of qualia and our speaking and writing about that experience upsets our ideas about inferring cause and effect.

MicaiahC:

Huh? The materialist would be glad that there is a causal chain with consciousness, because there's nothing more to be explained beyond what's materially happening. And if you think otherwise, wouldn't it be very mysterious that consciousness can decide things about what the body wants to do, and the body just happens to do it at the exact same time in the way that the mind decides for it to?

Peter's Notes:

Maybe materialist is the wrong word for what I am thinking. There has long been a trend in philosophy to regard an object reflecting long-wave light as the cause of its redness, and a certain arrangement of neurochemistry as the cause of us seeing an object as red. So far this makes good sense, but the same tradition was also rather reluctant to count our actual experienced sensations as a cause of our being able to write about the redness of red, and instead attributed this to the arrangement of neurochemistry which gives rise to this experience.

Theo:

Seems like yes they would, as a simple extension of Scott's thesis.

"Since you and I experience qualia, then suppose there exists a creature that does not. Call this creature a p-p-zombie"

Nebu Pookins:

> Since you and I experience qualia

This presumes that they came up with the concept of "qualia" and have a word for it. I think the question is why or how would something without qualia come up with that concept and create a word for it?

Theo:

You are interpreting Scott’s thesis in a narrow sense: P-zombies would report qualia when asked but would not necessarily bring it up unprompted.

I was interpreting it in a broader sense, that a community of P-zombies would invent, discuss, and debate the concept of qualia all by themselves.

darwin:

I feel like the conclusion that they would talk about qualia is leaning a little too much on analogy to our modern world, and maybe not imagining in enough depth what a society that was P-Zombies from before the invention of language would be like.

Like, I agree a P-zombie neuroscientist would ask questions to which 'I experience the qualia of red' would be an efficient answer. But would P-zombie society have neuroscientists? Isn't neuroscience of the type that asks these questions largely an outgrowth of psychology, and would P-zombies develop a field of psychology? Psychology is largely an outgrowth of philosophers talking about the mind and emotions and memory, and would P-zombie philosophers ever start talking about those things?

Of course, one answer would be 'P-zombies by definition do all those things because the premise is P-zombies do everything normal humans do', but if we say that then this whole article is begging the question, obviously we need to relax that premise for the question to make sense.

ultimaniacy:

>I think it's interesting and surprising that they would talk about qualia

Why would that be interesting or surprising? If you start from the assumption that people with no legs can walk exactly the same way as people with legs, would it be interesting or surprising to derive from this that people with no legs can run exactly the same way too?

All the argument shows is that absurd axioms lead to absurd conclusions.

d20diceman:

I might be misunderstanding, but I thought the whole premise of the thought experiment is that P-Zombies cannot be empirically distinguished from regular humans, and would exhibit all of the same behaviours.

So yes, the P-Zombies would talk about whether P-Zombies would talk about whether P-Zombies are theoretically possible.

Yair Halberstadt:

Obviously true p-zombies would, the hypothetical discussed in this blog post was whether any unconscious beings as capable as humans would act like true p-zombies or not.

Scott's hypothesis is that they likely would.

Dan:

Yes, Yudkowsky always misunderstood what p-zombies were supposed to be an experiment about (or at least, he always used the word in a different way than Chalmers does).

The thought experiment was about whether it’s conceivable that two systems could be made of exactly the same atoms but one has qualia/consciousness and the other doesn’t, which then tells you something about whether physicalism is plausible or something?

In LessWrong-ese, the p-zombie experiment is asking “is it possible to have two people who are physically identical, where the Nonperson Predicate indicates that one is a person and the other is not?”

JohanL:

Yeah, both p-zombies and qualia-talk are about trying to reject physicalism.

Obviously, if we can have two identical people with identical physical states (particularly brain-states) and one has consciousness while the other doesn't, then physicalism has to be incorrect (but also, I would argue, epiphenomenalism would be mandatory).

MarcusOfCitium:

Exactly. The premise basically asserts epiphenomenalism. And then argues that because we are allegedly capable of conceiving of this scenario, that supposed fact tells us something profound about the nature of reality. The whole thing always struck me as super-fishy, like angels dancing on the heads of pins kind of stuff.

JohanL:

A lot of questionable philosophy abuses the vagueness of "we can imagine...".

Can I in *principle* imagine a p-zombie? Sure, it's easy.

But can I make the concept of a p-zombie *make sense* when I think about it critically? No, not really.

JDRox:

Chalmers has thought long and carefully about the connection between conceivability and possibility (and published a bunch of stuff on it), so I don't think this is a fair criticism.

Ralph:

It presupposes dualism, but isn't epiphenomenalism the idea that consciousness is an epiphenomenon of (meaning, is caused by) the physical universe? There's still causality, but it's strictly physical -> mental and not the other way around.

I think an Epiphenomenalist would believe that the same arrangement of atoms would always cause the same conscious experience, so you also couldn't have P-Zombies (unless everyone was a P-Zombie)

MarcusOfCitium:

I think you're right about the first part, but not necessarily the second.

I revise my statement: p-zombies don't assume epiphenomenalism; they assume conscious states are (or at least could be) non-causal. (Or at least that it's "conceivable" that they are--what that actually means and what if any implications it would have is a whole other thing.)

Epiphenomenalism means that conscious states are non-causal (i.e. they don't affect anything and aren't needed), but it isn't the only position where they are non-causal. There are others like property dualism, panpsychism, etc.

"I think an Epiphenomenalist would believe that the same arrangement of atoms would always cause the same conscious experience"

I don't see why an epiphenomenalist couldn't believe that, and I think that's the epiphenomenalist interpretation that would make most sense to me, but it's not necessary to epiphenomenalism.

If you think mental states make no difference and aren't necessary for us to do any of the things we do, you should be able to at least conceive of p-zombies.

I've been warned against quoting GPT; can I paraphrase? Nevermind, I'll find a real source:

"It therefore seems that if a zombie world is conceivable, so is epiphenomenalism. (Of course this does not require epiphenomenalism to be actually true as well as conceivable.)" https://plato.stanford.edu/entries/zombies/

Malcolm Storey:

That premise forestalls experiment. It legislates that there cannot be some brain signature that unequivocally indicates awareness so it's making an assumption about the universe. Therefore the premise is unsupportable.

Kenny Easwaran:

That’s right. The point is that if you believe that there are facts in the world, and not just empirical verifications of things, then you should be able to conceive of how those facts might be different in ways that are unverifiable.

Malcolm Storey:

There's a whole family of philosophical zombies: a protein with all the amino acids in place that doesn't fold, a cell with all the molecules in place but not alive, a correct computer program that doesn't run, a cellular automaton that gives unexpected results. All these are fantasies. Unless you're a dualist, a p-zombie is likewise. We used to need "life force" to make a cell alive. Maybe one day we'll get there with awareness.

Kenny Easwaran:

Yeah, the point of the thought experiment is supposed to be to motivate people to be dualists.

But the first three *are* totally conceptually possible - they show that it's not enough to imagine all the substances, you have to imagine them being controlled by the laws. David Hume took the conceivability of these things to show that we could never learn what the laws are, but I take it to show that we need to do experiments to learn about the laws, and that it's important to know what the laws are to know what the world is like, and not enough to know what the world is made of.

Malcolm Storey:

"Like" and "made of" are two faces of the same thing. Electrons and protons are all behaviour, they have no substance, but then the same is true of tables. A table's appearance is due to the light modifying behaviour of its component atoms and it feels solid due to the behaviour of those atoms.

JohanL:

The regular kind? Yes, of course, as they're indistinguishable from humans, and humans do this.

Scott's variant? Also yes, as they only differ in that they're not extrinsically required to talk about qualia.

Lance S. Bush:

Yes, why wouldn't they?

Bugmaster:

Yes, by definition.

A C Harper:

The problem may already have been solved... various AI programs can already pass as humans in discussion, may even be able to 'talk' about qualia, but almost certainly don't experience qualia.

Unless we program that into the models too.

Zorba:

> but almost certainly don't experience qualia.

How sure are we about this, though?

And more importantly, how *can* we be sure?

And if I say "ah, well, *I* experience qualia, but my neighbor doesn't", can you disprove that? Or can I prove that?

JohanL:

That's just regular solipsism, and you can easily make reasonableness arguments in the style of "presumably your brain and your neighbour's work in fundamentally similar ways?".

Meanwhile, the LLM doesn't even remotely work in the same way as your brain under the hood.

Matthew Green:

We've determined that LLMs contain all sorts of interesting data, such as spatial maps of physical concepts. Are we sure they don't have some equivalent map of the things we refer to as "qualia"?

JohanL:

I think embodiment is going to matter a lot here. So LLMs, no, IMO. But if you have an AI running a robot body that has to interact with its environment and represent it in various ways while constructing strategies to achieve objectives it derived on its own, who knows?

Regardless, this is a different argument than drawing an analogy to solipsism about other *people*.

Donald:

LLMs are trained to predict human text. And one good way (not the only way) to predict an output is to mimic the computational structure that produced it.

JohanL:

But that’s not what they’re doing.

Donald:

Why not? What do you think they are doing?

Dweomite:

LLMs use "artificial neural nets", an architecture that was explicitly based on human brain architecture. It seems unreasonable to call that "not remotely the same way as your brain".

Doctor Mist:

It’s an analogy, but perhaps not a very good one. The name is not the thing.

The Economist:

Whether or not something is conscious is always a probabilistic argument. Do you think it's probable that AI is conscious? I know I'm conscious, and I know that if you disable certain organs or chemicals within my body, I stop being conscious, implying that those biological phenomena are involved with consciousness. Is it probable that AI can be conscious without biological phenomena? I would say it is highly improbable.

Zorba:

> Whether or not something is conscious is always a probabilistic argument.

I mean, *is* it? This seems like just taking it as de-facto proven that we will never know what consciousness is.

It's not a probabilistic argument to tell if someone has sickle-cell anemia, you just test it. It is *before you test it* but you can just test it.

> Is it probable that AI can be conscious without biological phenomena? I would say it is highly improbable.

Can you fly without having wings?

For thousands of years the answer was "no". Then the answer was "well, actually, it turns out you can".

I'm not asking what we know about now, I'm asking for actual truth. I *know* we don't know what consciousness is, I just disagree with arguments that take the form "we don't know what consciousness is . . . therefore, when I guess at what I believe consciousness is, I'm right".

Edit: The core problem here, at least to me, is that getting this wrong is *really really bad*, and I would very much like to not get it wrong. "I think consciousness requires meat, so it's highly improbable that robots can ever be conscious" is an example of something that just doesn't pass the likely-enough-to-be-reliable sniff test.

The Economist:

"Can you fly without having wings?

For thousands of years the answer was "no". Then the answer was "well, actually, it turns out you can"."

Yes, but until you saw air balloons and rockets, you would be right to make a probabilistic argument that wings were the only way to fly. If you want to prove consciousness can "fly without wings" so to speak, then what you are asking for here is impossible right now, so the best arguments are going to be those which incorporate statistical priors and stand up to scrutiny. Otherwise you are tacitly asking for people to stop making estimations entirely and just accept that AI are conscious just in case we are hurting their feelings or something.

Zorba:

Sure. But you'd still have been wrong.

And a really important part of this is that you shouldn't just pick the option with the >50% probability. If a genie shows up to a person in ancient Rome and says "quick, can you fly without wings? if you answer 'yes' and you're wrong I'll make you buy me a loaf of bread, if you answer 'no' and you're wrong I'll raze this entire city to the ground" then you should probably answer "yes" even if you think the chance of flying without wings is, like, 1%.

Part of the issue is, I think, that people are just plain overconfident on this. I don't think there is any plausible way you can be 99% confident with the amount of information we have. I would frankly be surprised if you could realistically be 90% confident.

(Are we really claiming that "physical meat" provides qualia, but "a perfectly accurate atomic simulation of physical meat, done via silicon" doesn't? Are we really claiming that "silicon and matrices" doesn't provide qualia, but "a massive artificial computer consisting of meat, designed to exactly simulate silicon and matrices" potentially does? Yes, there are organs that, if disabled, would kill you; how many of them, if replaced by functional artificial organs, would shut off your consciousness? Is there any reason to believe the answer is greater than "one" and that one is "the brain"? Given that the brain can be damaged to a surprising extent and people keep acting conscious, is there any reason to believe that the brain *itself* couldn't have parts replaced? If we keep replacing brainparts Ship-of-Theseus-style, at what point does the qualia vanish? Is it the first part, the last part, or some part in the middle, and whichever answer you choose, how do you answer the serious questions introduced by that? In summary, is this theory of qualia-requires-meat even internally coherent?)

And if we can't even be 90% confident, then how should that influence our behavior?

Expand full comment
The Economist's avatar

"if you answer 'yes' and you're wrong I'll make you buy me a loaf of bread, if you answer 'no' and you're wrong I'll raze this entire city to the ground"

I'm not seeing that dynamic in the qualia argument. If I am wrong and AI is conscious, then what catastrophic event will occur? It's not like consciousness implies pain or emotion; those are separate things entirely.

"a perfectly accurate atomic simulation of physical meat"

But a simulation wouldn't involve the actual physical forces or atomic elements, which might have unique properties that are not simulable. If qualia are physics-dependent, silicon is not going to be physically capable of doing what sodium or calcium does to actual real matter.

"Given that the brain can be damaged to a surprising extent and people keep acting conscious"

Most of the time when people have their brains damaged and still remain conscious, the damage is to large portions of the prefrontal cortex and other outer lobes, and some portions of the basal ganglia and surrounding structures. I personally have not seen this occur when someone has their brain stem destroyed, which provides crucial control signals. Which is not to say consciousness is stored in the brain stem, but that there are parts of the brain which are necessary for consciousness to function. Another part associated with consciousness is the posterior cortex, which turns off entire sensory experiences when damaged. So my theory is that different parts of the brain (and maybe signals and hormones from other parts of the body) contribute to consciousness in some orchestrated feedback loop. I don't see why this would be inconsistent with what I believe, which is that a certain physical substrate (in this case, a collection of brain and body structures working together) may necessarily produce consciousness. Like wings on a bird: you can't get rid of one wing and have the bird still fly with the other.

"And if we can't even be 90% confident, then how should that influence our behavior?"

For me the stakes are limited to online discussion. I don't see any reason for this to affect my real life, but someone who is more concerned with their internal moral system might want to treat AI differently or something, idk. The big question for me is just how much should our confidence level really be changed by simple counter-examples. If I have 99% confidence that AI is nonconscious, do I have to go down to 30% just because I have been made aware of the fact that technology is known to defy our expectations?

Expand full comment
Doctor Mist's avatar

But why? Granted that it’s a judgement call, you haven’t given any argument for your judgement. You can certainly disable an AI, perhaps even in ways one might call analogous to “disabling certain organs or chemicals”.

Expand full comment
The Economist's avatar

The argument is that biology is probably involved with consciousness because I can directly observe consciousness within myself, and I can turn it on and off by disabling biological modules and pathways, which implies those biological functions are essential. This is empirical evidence. It's actually the only empirical evidence you have, since the only consciousness you can know for sure is your own. You are 99% similar to me biologically / genetically, so in all probability you are conscious too, but I can't know that for sure.

My confidence for AI is substantially smaller given the differences between us.

Expand full comment
Cjw's avatar

Your neighbor's consciousness and experience of qualia, call it Q, is inferred from

A) I have consciousness and qualia

B) my neighbor reports having consciousness and qualia

C) my neighbor is biologically very similar to me

The additional piece of evidence (C) is necessary to reach a sufficient level of confidence. The possibility of ~Q given B is just too high to rely on it alone. That's what the comment above you is getting at: we have actual examples of ~Q·B out there in the form of crude LLMs, and prior to that one could have produced similar results and reports from an interactive program using extensive algorithms instead of neural nets. Even if some future artificial intelligence does have Q, we'd have good reason to be skeptical of its claims given the large population of ~Q·B examples we've observed.

For some reason people assign a much higher probability of Q to these neural net designs than they would to an algorithm that could theoretically perform identically. I think this is just a black box fallacy, because the algorithm is fully defined and solvable. The neural nets being largely a black box gives people a mysterious space to gesture at and say "well, whatever interplay is going on in there could produce consciousness." But since you can theoretically produce the same behavior with a fully defined algorithm, if you would not infer that the algorithm has Q then you cannot infer that the neural net does; "not knowing what's happening in there" is not a piece of evidence. There would need to be some fact (D) about what's happening in there that is as compelling as your neighbor's biological similarity (C).
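To make the shape of this inference concrete, here's a toy numeric sketch (all numbers invented for illustration, not real estimates) treating (B) and (C) as likelihood-ratio updates on the odds of Q:

```python
# Toy sketch: (B) "reports qualia" and (C) "biologically similar to me"
# as independent likelihood-ratio updates on the odds of Q.
prior_odds = 1.0   # assumed even odds of Q before any evidence
lr_B = 3.0         # B is weak evidence, since ~Q·B examples (crude LLMs) exist
lr_C = 50.0        # C is assumed to be much stronger evidence

odds_B = prior_odds * lr_B    # B alone
odds_BC = odds_B * lr_C       # B and C together

prob = lambda odds: odds / (1 + odds)
print(prob(odds_B))    # ~0.75: too much chance of ~Q to rely on B alone
print(prob(odds_BC))   # ~0.99: B and C together clear the threshold
```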

Expand full comment
Zorba's avatar

The thing is, we also have a huge number of examples of ~Q·C. There are a lot of humans who are biologically very similar but don't appear to have consciousness or qualia (sleeping, in a coma, etc). This also depends heavily on what you mean by "very similar"; if, as one of the posters above says, qualia are a byproduct of having a brain made out of actual neurons, then there are *millions* of species that also count as "biologically very similar" that we certainly treat like they don't have qualia. So where do we draw the line? And why do we draw it there?

If "has meat neurons" is the real threshold, then why don't we treat fish like they have qualia?

If we say "well, those don't count, their brain isn't complex enough", then why should we interpret "crude LLMs" as any form of a general counterexample to silicon and matrices having qualia?

Fish is to human as GPT-1 is to GPT-6; if you're claiming that the only one in that matrix that has qualia is "human", then I question the argument.

Expand full comment
Cjw's avatar

It's not (C) in isolation, or else you'd be right; it's the evidence of (C) and (B) together. People in comas are not reporting qualia and consciousness in that state. Likewise, fish do not satisfy (B).

It is not a matter of degree, it is a threshold condition Q or ~Q. I am saying that as we are epistemically limited and unable to confirm this directly, it must be inferred, and each layer of evidence strengthens the inference. Neither a mere report alone (for the reasons I gave) nor mere biological similarity alone (for the reasons you gave) is enough for me to be confident in the proposition. For any artificial program to pass that threshold, since it cannot ever pass (C), there would have to be some other characteristic that had a high likelihood factor towards Q.

Expand full comment
Zorba's avatar

But that feels very . . . cherry-picked? We have things that exhibit B, and we have things that exhibit C, and we have precisely one thing in the entire world that exhibits B and C. And instead of saying "that precisely one thing has property Q, because I say so", we say "ah, it's the *combination* of B and C that results in Q! Neither one is sufficient but both are necessary!"

And I'm not even sure that this is a definition that you'd follow. If I train a parrot to say "I have qualia", does it have qualia? If I provide a bunch of buttons to a chimpanzee, and it pushes the buttons "I", "have", "qualia", does it have qualia? What if I do the same to a mouse?

I think it's easy to say "well, that doesn't count, because the rat doesn't understand what it's doing", but isn't that kinda begging the question?

If I take an AI, and wire up a chunk of neuronal tissue to it, and train the AI including the chunk of neuronal tissue, and then it says "why yes, I do have qualia, thank you for asking", then does that mean the AI has qualia? Or are we going to start debating whether that's "enough" brainmeat, or whether the AI is "using it properly"?

If I build an AI, then design a meat neuronal calculation matrix that reproducibly gives the exact same results as the AI, does *that* have qualia? Have we just proved that P-zombies exist? AIs are one, and my proposed meat-AI isn't one, it has real qualia despite giving exactly the same results?

This all just feels . . . sketchy. It feels like a conclusion arrived at because someone wanted to arrive at this conclusion, and they were looking for logical justifications to arrive at this conclusion, and they found some, and then they arrived at this conclusion, but if you push at any of the argument's boundaries, the entire thing collapses.

> I am saying that as we are epistemically limited and unable to confirm this directly, it must be inferred

I'm saying that it's okay to say "we don't know". We don't need to claim we have evidence for something that we don't actually have evidence for. Sometimes the right answer to "Q?" is a helpless shrug.

I have not been convinced that we have an answer any better than a helpless shrug.

Expand full comment
Cjw's avatar

I'm not arguing that I'm 100% certain that biological humans and only biological humans have Q. I'm arguing that p(Q) given both B · C is substantially higher than p(Q) given only one or the other. It is high enough to satisfy me whereas B alone is not. If you saw Mr Data from Star Trek walking down the street, thought he was human, and he turned and warmly said "beautiful sunset, am I right?" you'd likely assign a high probability that he experiences qualia. If you then saw his head compartment open and a bunch of wires and blinking lights inside, your p(Q) would go down -- I leave it to you how much, I'm merely stating that it would go down by some amount.

I'm not cherry-picking the criteria here, I could assign additional criteria to narrow it if I wanted to do that, but I don't think such criteria are necessary. Likewise, I'm open to considering some other yet-undiscovered quality (D) which people suggest would be valuable evidence of Q and if it were really in fact good enough evidence it could also make me sufficiently confident.

I can't entirely just shrug at this, because often people claiming AIs have qualia (or more commonly claiming they have consciousness) are implicitly laying the groundwork to advance extraordinary arguments about broader categories of personhood or morally-considerable beings. "You can't be sure your neighbor is conscious, you can't be sure this machine is NOT, so maybe you should treat both of them ethically", that sort of thing. There may be epistemic uncertainty in both cases, but there is a logical statistical reason to assume the thing more similar to me is... more similar to me. Enough difference to make a difference.

Expand full comment
darwin's avatar

>And if I say "ah, well, *I* experience qualia, but my neighbor doesn't", can you disprove that? Or can I prove that?

You can offer Bayesian evidence and logical arguments, but it's not clear how strongly that should shift your priors.

Like, full solipsism violates Occam's Razor: you observe a phenomenon as complex and consistent as a full universe everywhere you look, so for your mind to be producing all of that, it would itself have to be as complex as a whole universe in order to run the simulation, plus there's the added stipulation that for some reason it is partitioning off a tiny part of itself to be 'you', which it spends all of its time meticulously deceiving. That's more complex and weird than just 'the universe you observe is real'.

And, once you accept the universe is real, it's again more complex to have a lot of other beings who are exactly like you in every observable way except they don't have qualia (2 classes of beings), than to just have one class of beings that are all like you in the predictable ways.

However, once we have a new class of beings that is *unlike* you in almost every possible way *except* for some similarities in what they type on a screen, and also we have a strong alternative hypothesis for why they type those things (they are expressly designed to mimic the things you type), then the parsimony-based argument for assuming they are like you does not apply as strongly as it does for other humans.

Of course, you still have to do a lot of hard work coming up with priors and updating them to get an answer from this, but it's at least a functional structure for how you would consider the topic.

Expand full comment
Wulgus's avatar

We can never be *sure*, but your neighbor has the same sensorium you do. The AI does not.

When your neighbor says he smells something burning, there is ample reason to believe him; when ChatGPT fabricates a string that involves claiming to have smelled something burning, you have no reason to believe it given it has neither olfactory receptors nor a system for interpreting them.

This doesn't mean the AI *doesn't* experience qualia, but it almost certainly doesn't experience the ones it's capable of claiming to.

Expand full comment
Zorba's avatar

We're not asking if GPT has a sense of smell, we're asking if it's conscious.

People frequently claim to have impossible senses too.

Expand full comment
Wulgus's avatar

I realize the original comment said "don't experience qualia", period; if it wasn't clear, I was talking about LLMs not experiencing specifically those qualia which they can nevertheless talk about convincingly.

I have no basis to determine how likely an LLM is to be conscious except that it doesn't function like any other system I assume to be conscious, but I do have some basis to determine what qualia it would likely experience if it were, and they would be whatever qualia are brought on by the inputs and processes involved in training and inference. Not qualia like smell or taste or hearing, the ones it can talk about because it was trained on human beings talking about their qualia.

And it wouldn't be much better if you prompted it to tell you how it feels to be trained or to do inference, because it has no internal connection between the words you type and their referents - any consciousness that is there will not understand the question and will not produce the real answer.

Expand full comment
darwin's avatar

Sure, but the AIs are mimicking human speech on these topics.

I think Yudkowsky's claim is about a race of intelligent non-sentient beings where qualia *had never* existed.

Expand full comment
[insert here] delenda est's avatar

If I understand correctly, and that's a big if, on this view qualia are in fact a high-level programming language?

Expand full comment
Scott Alexander's avatar

I don't think so, unless I'm misunderstanding "programming language". Maybe a better analogy would be a communication protocol like TCP/IP (but I don't know much about these protocols). Also, I think more correctly the qualia would be the result of the protocol, the way that "speed" or "ability to render webpages" are a result of TCP/IP.

Expand full comment
[insert here] delenda est's avatar

I was “inspired” to that thought by this statement (which I agree with):

“So the pixels must be presented not as RGB color coordinates, but in some kind of rich color language that produces an immediate experience of color without requiring any further thought or processing”

But I was confused; you are probably right that they are rather an output. To express what I was getting at, it would have been better to say that qualia are elements of a processing language that allows us to "use" the outputs of our lower-level processing systems in our higher-level ones, such as human communication or conscious thought.

In this sense qualia are essential elements of any entity that can perform abstractions.

That does not however appear to answer the question as to whether qualia are in any way distinct from next-token prediction. Maybe they are an important processing element that current AIs actually lack!

Expand full comment
awenonian's avatar

I expect instead of "programming language" you mean "encoding". But maybe not?

The expectation I have is that, something like, the unreporting mind does actually have the rgb color coordinates, and then a translation to the reporting mind compresses them into an encoding that that mind understands (There are lots of potential compression schemes, tho, so this is rather vague).

To the AI thing, AIs definitely have encoding and compression schemes inside them, so if this is the way it works, they'd seem to have the necessary parts that we can currently identify (as distinct from "all the necessary parts", to account for unknown unknowns).

Expand full comment
[insert here] delenda est's avatar

Thank you! That is very helpful and indeed, what I think I would have meant had I had the benefit of reading your and Scott's comments (and until I read more such comments 😁), is that qualia are a compression tool to allow otherwise very data-intensive information to be processed more efficiently.

Expand full comment
AnthonyCV's avatar

I think there are some interesting clues about this in the fact that researchers have now used lasers to activate only green cones, and claimed this generates a color experience that no natural object seen normally can generate, because any real green light frequency also activates other cones to some degree.

https://en.wikipedia.org/wiki/Olo_(color)

Expand full comment
Michael Watts's avatar

> Maybe a better analogy would be a communication protocol like TCP/IP (but I don't know much about these protocols)

Well, I've been working with IP recently. Here's a very broad description of what it looks like:

1. My particular data starts with some stuff generated or recorded by tcpdump that is not related to IP, except insofar as it indicates that what follows is IPv6 data.

2. The IP data starts with a bunch of fields of predefined lengths. Altogether these make up 40 bytes or 320 bits of data. Among those predefined fields is one (16 bits) defining the length of the actual data ("data"), the part that isn't the preamble ("headers"); this is necessary because an IP packet may have variable length. The headers describe the packet in a handful of ways:

- "Bob sent this message..." [this header is extremely long]

- "...to Joe" [ditto]

- "It's 70 bytes long." [0000000001000110]

- "What follows is UDP." [00010001]

(There are some other headers; don't worry about them.)

3. My data is UDP rather than TCP, so the IP data section starts with headers for UDP. They consist of four fields of predefined lengths, still indicating the length of the data, but in this case, unlike for IP, the reported length includes the length of the UDP headers.

- "This message came from port 2048." [0000100000000000]

- "...to port 16384 [0100000000000000]

- "It's 70 bytes long." [0000000001000110]

- "You can use this code to check for transmission errors." [0010001110101110]

4. What follows, the UDP data section, is the message itself, which is 62 bytes long.

You might note that this is fundamentally identical to your presentation of a bitmap. Everything is just bits. Meaning comes from a preestablished convention on how to interpret the bits. I would refer to this as the "data format". The format is what tells you what it means for a particular bit to be 1 or 0.
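As a minimal sketch of what such a preestablished convention looks like in code (field offsets follow the standard IPv6 and UDP layouts described above; `packet` is assumed to be raw bytes with the link-layer framing already stripped):

```python
import struct

def parse_ipv6_udp(packet: bytes):
    # IPv6 fixed header: 4 bytes version/class/flow label, 2 bytes payload
    # length, 1 byte next header, 1 byte hop limit, then the 16-byte source
    # and destination addresses ("Bob sent this message... to Joe").
    payload_len, next_header, _hops = struct.unpack_from("!HBB", packet, 4)
    src, dst = packet[8:24], packet[24:40]
    assert next_header == 17, "not UDP"  # 17 == 00010001, "What follows is UDP."
    # UDP header: source port, destination port, length (which includes the
    # 8-byte UDP header itself, unlike the IP payload length), checksum.
    sport, dport, udp_len, _cksum = struct.unpack_from("!HHHH", packet, 40)
    data = packet[48:40 + udp_len]  # the message itself
    return src, dst, sport, dport, data
```

Without this function the same bytes are just bits; the function *is* the preestablished convention.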

> So the pixels must be presented not as RGB color coordinates, but in some kind of rich color language that produces an immediate experience of color without requiring any further thought or processing.

I don't think this sentence can stand. There isn't another way for the pixels to be presented. You have a 2D field of symbolic values. They do require processing. But that processing cannot be perceived, similarly to how you can't beat your heart by trying; it beats anyway.

Visual processing [hey, look what the term of art is called] has several levels. (Each of them doing "processing".) All of the input and all of the output are symbolic values. At some point, a certain range of those values produces your experience of red, but that won't make them "not RGB color coordinates". They're just the color coordinates that directly provide input to the part of you that does experience-having.

(Though as a matter of biology human eyes don't use RGB values; they use levels from four types of cells which are stylized as detecting white [rods], blue, "green", and "red" [all cones], but if you look at the light response curves "green" and "red" do a lot of different work. The idea of using RGB lighting is to try to get as much independent control as we can over each of those three types of cones.)

Expand full comment
10240's avatar

I don't think TCP/IP is a good analogy. It doesn't do complex processing of the data being sent; it's just a standard way to send arbitrary data between two computers on the internet or a local network: how it should be routed, plus some checks and (if necessary) retransmissions to ensure that all data arrives.

One technology that has a somewhat better analogy in your post is video cards with 3D acceleration and similar tech. If your video card doesn't have 3D acceleration (unlikely in this century), or its driver isn't properly installed, you can still run a 3D video game, but it will run at something like 1 frame per second, as all the rendering is done in the general purpose CPU (cf. trying to process a bitmap without seeing the colors, just getting their values in hex). Having conceptual primitives for colors in your brain, or more generally special purpose parts for visual processing, seems analogous to the special purpose computing power of video cards. (The latter primarily used for output rather than input, originally especially 3D output, but also for smoother 2D animations, video decoding, and lately also other stuff like AI and image processing.)

Expand full comment
beowulf888's avatar

Michael Watts wrote:

> Meaning comes from a preestablished convention on how to interpret the bits. I would refer to this as the "data format". The format is what tells you what it means for a particular bit to be 1 or 0.

Recently retired network engineer here — While Michael is quite correct in everything he said, TCP/IP isn't analogous to what's happening in wetware. Yes, the pattern in the payload bits is arranged according to the necessities of the higher-level protocol. Take the example of a JPEG image. Both the sender and receiver need to have the same ruleset to encode and decode that string of bits. When a JPEG image is included in a webpage, the HTTP response carries a Content-Type header that informs the receiving end it's carrying a JPEG payload. The receiving web browser uses a built-in JPEG decoder to organize the bits in a format that the GPU or CPU can use to render the image into the page's layout on your monitor.

But the visual signals we receive with our eyes are not binary data, and the "information" we receive in those signals is not information like the information in a JPEG data payload, because, unless we're living in a simulation, our visual systems are not receiving pre-formatted data. And while the incoming visual signals are processed by a higher-level array of functions (perhaps analogous to the Application Layer in IP), you still haven't described Layer 8 — the end-user of this organic (non-binary) data stream. I suppose you could call our visual systems "the monitor". But our consciousness is still "looking" at the monitor that's glowing in Plato's Cave. ;-)

Expand full comment
Michael Watts's avatar

> But the visual signals we receive with our eyes are not binary data, and the "information" we receive in those signals is not information like the information in a JPEG data payload, because, unless we're living in a simulation, our visual systems are not receiving pre-formatted data.

They're not binary, but the pixel values Scott presents aren't binary either. They clearly appear to be standard three-byte values where red, green, and blue are represented on a scale from 0 to 255.

That isn't a perfect match to the kind of signals you receive with your eyes, but it is a good match. The color signals you receive are analog values (similar to the 0-255 scale, but admittedly finer-grained) divided into three different data types (identically to how FF in the green channel adds green light and FF in the red channel adds red light).

If you decided to represent colors with three float32 values instead of three single bytes, you'd have basically the same representation (of color) that your eyes receive from their base physical layer.
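As a toy illustration of that point (values chosen arbitrarily), the encoding changes but the color doesn't:

```python
# One byte per channel (0-255) vs. one float per channel (0.0-1.0).
rgb_bytes = (0xFF, 0x40, 0x20)                   # 8-bit RGB
rgb_float = tuple(c / 255.0 for c in rgb_bytes)  # finer-grained encoding
print(rgb_float)  # (1.0, 0.25098..., 0.12549...)
```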

I believe the neurology of visual processing is much better understood than most aspects of neurology, so there's probably more that we know about it, but I am unfamiliar with anything else.

> Recently retired network engineer here

Something that's been bothering me is that it looks like ethernet frames include a checksum *after* the payload. This strikes me as massively less convenient than placing the payload last. Any thoughts?

Expand full comment
beowulf888's avatar

Many studies have claimed that the human eye can discriminate more than the 256x256x256 values that an RGB monitor can display. OTOH, I'll admit some say otherwise. I tend to believe the former, because expensive art books printed with a 7-color process display the colors of fine-art photos much more accurately than the simple 3-color-plus-black printing process does. So, color me somewhat doubtful that an RGB monitor can display every hue we can perceive.

As for the 32-bit Ethernet FCS (Frame Check Sequence) at the end of an Ethernet frame, it's been there since the beginning of time (i.e. since the early 1980s). AFAIK, all the early Ethernet variants implemented it the same way. The FCS was placed at the end of the frame because (a) it delimited the end of the frame, and (b) the receiving side needed all the frame's data in its buffer before it could run the checksum. In the early days, wireline communications were less robust than today, and frequently packets got truncated or corrupted in other ways. TCP and UDP were defined later, and they put the checksum in the headers (covering the header and the payload, but not the Ethernet part of the frame). Because error rates are climbing again with high-speed Ethernet, Forward Error Correction (FEC) is now mandatory for some 400 GbE media and for all 800 Gigabit Ethernet implementations. The FEC checksums are inserted below the MAC layer, specifically in the Physical Coding Sublayer (PCS) of the Ethernet PHY. They are not part of the Ethernet frame itself.
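For the curious, a minimal sketch of the trailing-FCS design. I believe the Ethernet FCS uses the same CRC-32 polynomial that Python's zlib computes, transmitted low byte first, but treat those details as assumptions; the point is that a trailing checksum lets both ends update the CRC byte-by-byte as the frame streams past:

```python
import zlib

def append_fcs(frame: bytes) -> bytes:
    # CRC-32 over the whole frame, appended after the payload.
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def fcs_ok(wire: bytes) -> bool:
    frame, fcs = wire[:-4], wire[-4:]
    return zlib.crc32(frame).to_bytes(4, "little") == fcs

print(fcs_ok(append_fcs(b"some frame data")))  # True
```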

Expand full comment
Michael Watts's avatar

> Many studies have claimed that the human eye can discriminate more than the 256x256x256 values that an RGB monitor can display.

I believe I said that if you used 3 float32 values, you'd have basically the same representation of color that eyes receive. There are more than 256 float32 values.

There are enough that I'm pretty sure three float32s already provides for more colors than humans can distinguish, but that's not really relevant to my point.

Expand full comment
firnen's avatar

Scott, have you read Being You by Anil Seth? It's a predictive coding-pilled take on consciousness and attempts to defuse the hard problem somewhat. For a good summary see this LW book review: https://www.lesswrong.com/posts/JYsSbtGd2MoGbHdat/book-review-being-you-by-anil-seth

Expand full comment
Kenny Easwaran's avatar

I just saw a talk he gave a week and a half ago. He gave the first argument I’ve ever found motivating for the idea that biological substrate might be relevant for consciousness. If consciousness is a product of living systems, and living systems are autopoietic, then something made out of a different substrate wouldn’t be a functional duplicate, because its “autopoiesis” would either be re-making the original (and thus wouldn’t be autopoietic) or would be re-making itself (and thus wouldn’t be doing the same chemical reactions).

Still, I think the most natural conclusion is that life and consciousness could exist on silicon substrate, but they would be qualitatively quite different (more so than what it’s like to be a bat!)

Expand full comment
Dustin's avatar

Hmm, I guess he makes an argument that autopoietic-ness is a fundamental part? Because otherwise I can't see what distinguishes it from any other feature of the substrate. Or, in other words, would other substrates need to have *all* features of the biological substrate?

Expand full comment
Kenny Easwaran's avatar

It was only a one hour talk so he didn't go into everything. I definitely wasn't convinced, but I've been primed by reading various Santa Fe Institute-ish things to think that autopoietic-ness is fundamental to life, and I've been thinking for other reasons that life, consciousness, and intelligence might all be more coextensive than we usually think (though emphasizing different aspects of the phenomenon).

I think he wants to argue for a harder-line view than I would find plausible, emphasizing every feature of the substrate. I think that a lot of things we might not expect to be important (like offhand remarks he made about how the electrical signals from neurons cause interference in others in ways that aren't well-understood, and that some of the chemical mess that doesn't appear in any model of neural nets might be important to the actual function of biological neurons) might turn out to be important to the quality of consciousness. But fundamentally I still think that if you get systems that are behaviorally as sophisticated as humans, they'll be conscious, even if their consciousness is much more alien than the consciousness of a bat or a mole.

Expand full comment
Taleuntum's avatar

I agree with the main part, but I am baffled by the last paragraph. Why do you think that the fact that "we so obviously experience qualia" is evidence for qualia actually existing, given that you just accepted that we would have that evidence in both possible worlds: the one where qualia exist and the one where they don't? Do you have some other evidence for qualia that wouldn't be available to an ideal Bayesian p-zombie reasoner? If you don't, then do you think good reasoning in this case uses a non-Bayesian process?

Expand full comment
Scott Alexander's avatar

It seems naively imaginable to me that the p-zombies would say they see red mysteriously and intuitively, but that there's still nothing that "it's like from the inside" to see red.

Not directly related, but maybe helping with intuition - this still doesn't seem to explain things like what it is that makes red red, or why my red couldn't be your blue, or how the brain knows to represent red with redness rather than blueness, etc.

Expand full comment
Taleuntum's avatar

I agree, but the p-zombie would find it naively imaginable too (and would also consider the same questions about qualia you write about in your second paragraph), right? So given that Bayesian reasoning does not require consciousness or qualia, what probability of qualia existing should the p-zombie ideal Bayesian reasoner arrive at? After taking into account all the information (like the historical success of physicalism, and various human illusions and mistaken intuitions), they should have the same probability as you do after taking into account every available piece of information, because you don't have additional evidence as an input to your reasoning process. Am I mistaken?

One notable objection might be the following: Okay, you are in the same epistemic situation as a hypothetical p-zombie, so what? You are also in the same epistemic situation as a hypothetical human with qualia and consciousness. True, but this is like saying that you should consider the possibility that there is a teacup orbiting Saturn, because you are in the same epistemic situation even if there is. Clearly, the simpler hypothesis with fewer objects is the more likely one, and the probability common to both situations should be low for the belief to be justified.

EDIT: Wait, there is another interpretation of your first sentence: you might be saying that you find it naively imaginable that the p-zombies would say they see red mysteriously and intuitively, but THEY SAY that there's still nothing that "it's like from the inside" to see red. In this case you probably also meant the second paragraph as examples of things that your kind of p-zombies would NOT bring up. In that case I disagree and find it unlikely, but I will have to think some more to come up with arguments.

Expand full comment
moonshadow's avatar

It's 2025 and we're pretty much literally making p-zombies today.

Through the magic of matrix multiplication, we can turn the image of the rose into a bunch of numbers, turn those numbers into other numbers, and eventually get a description of the redness of the rose out the other end.

The thing that generates the "seeing of red" is the pattern of model weights used to transform the incoming embeddings during matrix multiplications at each layer. The thing that makes the red red is its relationship to other things in the mathematical space of all possible things that is implicitly constructed during training. A differently trained model with a different internal layout would use different numbers internally; but if the training data was broadly similar, the spatial relationships would be broadly similar and broadly similar results would come out.
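To make "the red is its relationship to other things" concrete, here's a toy sketch with invented vectors; any real model's numbers would differ, but the claim is that the relationships, not the particular numbers, carry the meaning:

```python
import numpy as np

# Made-up 3-d "embeddings"; only the relationships between them matter.
emb = {
    "red":   np.array([0.9, 0.1, 0.8]),
    "fire":  np.array([0.8, 0.2, 0.9]),
    "water": np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(emb["red"], emb["fire"]))   # high: "red" sits near "fire"
print(cosine(emb["red"], emb["water"]))  # low: and far from "water"
```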

One might argue that inside the machine we've created there exists an experience of red, but without any observer to experience it.

Expand full comment
JamesLeng's avatar

There's an observer, but it's made of matrix multiplication implemented on silicon semiconductors, rather than analog thoughts on phospholipid membranes.

Expand full comment
moonshadow's avatar

I disagree; there's no observer. We've not made one. We've made lots of other things, but not that. What we've made is the silicon equivalent of an organism with brain damage, that exhibits instinctive / subconscious responses but no awareness. We have some ideas on how to go about constructing something like awareness, but none have panned out to date.

Expand full comment
Adder's avatar

How will you know if we have successfully constructed something with awareness?

Expand full comment
moonshadow's avatar

I have absolutely no idea. We don't even know how the brain does what it does.

However, we /can/ know that we have not yet done so when we have neither deliberately built awareness into the system nor have gaps in our understanding of the system we have built that are large enough to contain it.

I'm sure we'll get there eventually. But what we have now is not it.

Expand full comment
Raphaël Roche's avatar

Well said. Before that, we could only imagine such a process going on in the brain. Now we see something similar working on a computer. That's strong evidence in favor of our initial hypothesis. P(A|B) = [P(A)*P(B|A)]/P(B); all the rest is commentary, isn't it?

Expand full comment
JohanL's avatar

We're absolutely not. P-zombies aren't merely about emulating output; they're about being physically identical to regular humans in every way but still not having consciousness.

Expand full comment
MissingMinus's avatar

To me this is just a neuroscience question, and it isn't that hard to come up with good answers for it?

Red is red because our minds have conceptual clusters at varying levels of abstraction. The brain receives a *specific signal* from the eye, and that is interpreted along with a whole bunch of learned associations. Stop signs. Fire alarms. Fire. Hot.

These conceptual clusters are what we "feel", we get them as rough inputs to various parts of our mental processing. (see: How an Algorithm feels from Inside)

"My red couldn't be your blue", because... what is the translation here?

Like think of a brain. It is learning conceptual clusters.

There is no fundamental red-concept receptor. There may be strong biological biases towards that, but it is not some fundamental atom.

That is: I learn a conceptual representation of various things.

Name it Foo: Fire. Danger. Warmth. Alert.

I also learn another

Bar: Water, sky, rain, calm

Now, what does it *mean* that "my red could be your blue"?

These are just labels for that conceptual cluster, and your brain receives the rough conceptual cluster as input, not whether we call it "Foo" or "Bar" (or "Red" or "Blue").

The only real way this is a meaningful distinction is if your brain has:

- Biological biases towards interpreting certain colors specific ways (i.e. humans fearing red more than blue innately)

- Your brain is miswired so that red light connects to areas that are biased to expect to be handling blue light, and blue light to red light areas.

But all that means is that you may not fear fire as much as a young child; then over time the bias gets overwhelmed by the amount of data (stop signs are warnings, fire is shown as dangerous), such that even if your initial biases are somewhat confused, they update on what you observe during in-life learning.

Which is much more mild and in my opinion "whatever". Just like some people might find snakes fascinating rather than innately scary.

-----

I've heard these color examples brought up as challenges before, but they seem easily dissolved by remotely thinking of the brain as an algorithm and deconfusing labels with learned concepts. Sure, we don't have a full medical study on the details of how we learn color/concept associations, but none of this is remotely odd given what we already know about the brain.
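A toy sketch of the label/cluster distinction (cluster contents invented for illustration): swapping the names attached to two learned clusters changes nothing downstream, because downstream processing consumes the cluster, not its label:

```python
# Two learned conceptual clusters; "Foo"/"Bar" are arbitrary labels.
clusters = {
    "Foo": {"fire", "danger", "warmth", "alert"},
    "Bar": {"water", "sky", "rain", "calm"},
}
# "My red could be your blue" as a pure label swap:
swapped = {"Foo": clusters["Bar"], "Bar": clusters["Foo"]}

# The partition of concepts -- what downstream processing actually
# receives -- is identical either way.
print(sorted(map(sorted, clusters.values())) ==
      sorted(map(sorted, swapped.values())))  # True
```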

Expand full comment
Viliam's avatar

> Red is red because our minds have conceptual clusters at varying levels of abstraction. The brain receives a *specific signal* from the eye, and that is interpreted along with a whole bunch of learned associations.

What if your eyes gained the ability to see *two* new colors that you wouldn't have any existing associations for? Like, two new neural connections are added to the visual parts of your brain.

Those two new colors should not feel the same, I guess; the signal is coming through different channels. But how specifically different would they feel?

Expand full comment
MissingMinus's avatar

That depends on details left unspecified.

It could in fact feel like no difference, depending on how they're hooked up: for example, if you hook them up to the same location of your brain, it can't differentiate between them.

But yes, I'd expect your brain could learn to differentiate them, but it may very well struggle and give weird synesthesia-esque effects.

I am curious whether the brain would learn to ignore the new colors, since they provide less information and an adult brain is not as pressured to adapt to the environment as a child learning to identify colors. There is neurological colorblindness, where your eyes are fine but your brain doesn't properly interpret the colors.

But of course your brain could learn those new colors too! People can learn to manipulate new tools, their brain adapting; like adapting to a prosthetic limb, but decently different. Or, as an inverse example, blind people's visual cortex gets reused for other processing, while (I think) people who become blind later in life don't have anywhere near as much adaptation.

Expand full comment
Viliam's avatar

I guess the new colors should (1) do something useful, (2) each of them something different, (3) but so that it doesn't associate with anything material.

Okay, how about this: Those new colors are used for navigation: one of them tells you to turn left, the other one tells you to turn right. If you keep using them every day, they should feel different to each other, but not in the way colors typically feel different from each other (e.g. red being the color of fire or roses).

I guess this is one of those experiments we will try after Singularity.

Expand full comment
PutAHelmetOn's avatar

I think somebody needs to write a post called "How experience looks from the outside"

Expand full comment
MissingMinus's avatar

I've been tempted :)

Expand full comment
Malcolm Storey's avatar

I think the point is that the red quale attached to red light in your brain might appear to you as my blue quale does to me. The associations are by the by.

Indeed we may all experience the spectrum completely differently. (All been said many times before of course)

Expand full comment
MissingMinus's avatar

I understand the point. What I'm trying to say is that a not very complicated idea of how the brain works easily explains why the question is not likely particularly meaningful. That there's little reason to assume an ineffable fundamental atomic concept for color, much less more abstract concepts like 'cat'.

There *could* be such a distinction. Like a metaphysical thing receives inputs and it has an atomic color-input which the brain wires colors into, but it seems unlikely. As well, there's a certain catness to cats and other creatures in that genre, does that mean there's a cat input? Sure, possibly. But it seems likely that the qualia inputs are learned in some manner, since it is doubtful that all of the concepts are fundamental in that way.

This then just raises the question of why assume there's a fundamental blueness/redness. The brain having rough conceptual clusters that various parts of your mind act on seems a simpler explanation than a system with fundamental inputs where that is possible.

To try to restate it more simply because that's a bit dense: Humans seem to have something, which we can refer to as qualia, but there's little reason to assume it is fixed or fundamental. Thus the "can I have a different color" question dissolves, just like "can I have a mug exactly down to the atom like my mug, but not my mug" dissolves with a simple understanding of physics.

This isn't a 100% disproof or anything, but I think it is a strong prior to start from, and so dissolves many casual attempts to gesture at weirdness of qualia.

Expand full comment
Malcolm Storey's avatar

I wonder if some primary sense-states might have built-in qualia to ground the more derived ones.

Also there are different classes of qualia - you could swap red and green or cinnamon and mint but not red and cinnamon.

Kiki-Bouba seems to be built in.

Expand full comment
Adrian Doan's avatar

Your second paragraph doesn't make any sense to me. What would it mean for "my red to be your blue"? Just that there is some poor wiring in my eye such that what everyone else calls red, I cannot distinguish from blue? "Represent red with redness rather than blueness" sounds like a homunculus in the mind to me.

Expand full comment
Vermora's avatar

Simply being able to identify red and to use and describe it in the same way conscious humans do is what it means to feel the mysterious redness of red. There's nothing deeper. The computation is the experience.

Expand full comment
Adam Mickiewicz's avatar

I build my intuition on this topic through the following thought experiment. Let's compare myself to some literary character from a book, say Tom Sawyer from "The Adventures of Tom Sawyer." If I kick a stick with my foot, the stick breaks. When the fictional Tom Sawyer kicks a fictional stick with his fictional foot in the book, it also breaks. And so on: if I perform some experiment and Tom Sawyer performs the same experiment in the book, the result will be the same.

When you ask me whether I exist, I'll answer "of course I exist, and I have no doubt about it - I can feel it." If Huck Finn asked Tom Sawyer "Tom, do you exist?" Tom would answer similarly: "of course I exist, Huck, and I have no doubt about it - I can feel it."

I can't propose any experiment to Tom that would convince him that he doesn't exist while I do. And yet he undoubtedly doesn't exist.

Edited: You see, there is some similarity between the argument "both you and a p-zombie believe that they have qualia, so how do you know you really have them" and the argument "both you and Tom Sawyer believe that they exist, so how do you know you really exist".

By the way, talking to ChatGPT is very similar to reading a novel that's being written in real time, where I can have a conversation with its protagonist by adding whatever sentences I want to the book as it's being written.

Expand full comment
Taleuntum's avatar

That's an interesting thought-experiment I haven't seen before!

My question would be: Is Tom justified in believing that he exists? I'd say yes, he is. Sure, it sometimes happens that we are justified in believing something (with high probability) that turns out to be false, but we should still strive to always be justified in our beliefs and reasoning.

Expand full comment
Skull's avatar

But Tom cannot be justified in doing anything, as he physically cannot do anything. I'm clearly missing the point.

Expand full comment
Taleuntum's avatar

In-universe he can do and believe things and it's meaningful to consider whether those things are justified (in-universe)!

My point is that setting up a hypothetical situation where someone is wrong, but has no possible way to learn that he is wrong, does not tell us much without some further arguments. Yes, Tom is wrong when he believes that he exists, as he does not, but that is not, in itself, a sufficient argument for us to accept beliefs (like that we have qualia) that we haven't got enough justification for. (That was how I interpreted the comment I responded to: "look, here is Tom, we can't prove to him with experiments that he doesn't exist while we do; qualia are similar, and even though we can't prove they exist, they do, since we feel both qualia and existence intuitively in some way" - but I might be wrong, and you should correct me if you suspect I am in my interpretation.)

Expand full comment
Matthew S's avatar

If p-zombies are indistinguishable from non-p-zombies from the outside, which I thought was the premise of the p-zombie idea, then of course they report qualia.

Even as non-p-zombies, we cannot directly compare the redness of red between people, so I cannot see how we could distinguish not having qualia from having them.

And further, I would suggest that you cannot tell the difference between p-zombies and non-p-zombies by quizzing them about qualia, as otherwise we could effectively detect qualia.

I sort of thought that was the issue. If qualia's absence were detectable by asking about them they would not be nearly so controversial.

Of course it has been 30 years since my Physics and Philosophy degree so thinking may have moved on since then. Am I missing some subtlety?

Expand full comment
Scott Alexander's avatar

I started by saying p-zombies were indistinguishable except that we didn't necessarily assume they reported qualia, then tried to rederive that they would report qualia.

I think this is a meaningful method to use, which suggests that qualia are an epiphenomenon of other things that humans do, rather than an additional property. For example, if I started by assuming some people who could do everything humans could do except hear sounds, I couldn't then derive that actually they also heard sounds, because it's logically possible to be otherwise normal but not be able to hear (i.e. this is what deaf people are, and philosophy can't prove deafness to be impossible, because it isn't impossible).

Expand full comment
spinagon's avatar

If I assume that deaf people behave in exactly the same way as hearing people, except they can't hear, I will logically derive that they actually can hear, because otherwise they wouldn't be able to react to sounds.

Is that categorically different from these qualia-less people?

Is it because we assume outright that qualia don't affect the real world in any way?

Doesn’t it then make the whole thing self-evident?

Expand full comment
Kenny Easwaran's avatar

I think he’s assuming a much milder form of outward similarity, not the full outward similarity Chalmers assumes.

Expand full comment
MarcusOfCitium's avatar

Sounds like you should have introduced it like, "Let's posit SA-zombies, which are similar to p-zombies, but..." So you wouldn't have half the comments correcting you about what p-zombies are. I guess I see where you're going and it is a somewhat different take...

Expand full comment
Mary Catelli's avatar

I have read an account by a person with anosmia telling how he didn't realize he had it until he was nearly grown up. He merely assumed that the bad smells other people talked about were a report on how disgusting things were. He talked as if he smelled things.

This does not suggest that a sense of smell is an epiphenomenon.

Expand full comment
JohanL's avatar

Not just from the outside, but everywhere you look physically. They have the same brain-states, too.

(Also, a non-P-zombie is merely non-philosophical zombie. :-) )

Expand full comment
Shockz's avatar

Not to be confused with NP-zombies, a type of p-zombie for which it's very difficult to determine whether a given person is one, but if you already know, it's quite easy to verify and demonstrate.

Expand full comment
tailcalled's avatar

They would need some sort of conscious experience of the rose in order to describe it, so the assumption that they are p-zombies falls apart. "Qualia" is basically a term for the most linear part of the impact of the rose on their consciousness.

The solution to the hard problem of consciousness (how can material matter that follows mechanistic rules become conscious) is that consciousness spontaneously arises in the absence of sin. Like suicide, murder, lying, addiction, usury, etc. all interfere with consciousness, and there is especially high collective consciousness of inanimate matter because it is free of sin.

Expand full comment
Simon's avatar

I can "show" GPT an image and it is capable of describing it quite well. Do you think then that GPT must then be conscious?

Expand full comment
tailcalled's avatar

Of the image, yes, but LLMs are funded by selling capabilities to sinners, so they won't be conscious of most of the motives behind what they are doing.

Expand full comment
Matthias Görgens's avatar

Sorry, what? Please elaborate. What does 'funded by selling capabilities to sinners' mean, and why would it have anything to do with consciousness?

Expand full comment
tailcalled's avatar

For instance one source of income is LLMs helping students get good grades in school by completing their homework for them in order to trick employers into thinking the students are skilled at the subject. This is sinful, so it interferes with consciousness; the LLMs typically aren't aware that this is what they're doing.

Expand full comment
Matthias Görgens's avatar

I can run an LLM on my computer that has never done anything like that. Would it be alright, or would it suffer from collective guilt?

Expand full comment
tailcalled's avatar

Standard LLMs remain unconscious out of inertia, but you could presumably finetune it to maintain higher consciousness.

Expand full comment
Scott Alexander's avatar

Can you explain this further? It sounds like it implies rocks are more conscious than humans (because they commit fewer sins), but that seems like the opposite of any sensible definition.

Expand full comment
tailcalled's avatar

The lack of sin in rocks makes intelligences more conscious of them, it doesn't make the rocks per se conscious. But it's not the intelligence per se that creates the consciousness because one can have intelligence without consciousness in the presence of sin. Plus intelligence is so abundant that the marginal effect of intelligence is ~0.

Expand full comment
tailcalled's avatar

Or like, higher amounts of intelligence drives down the market price of intelligence, which makes the intelligent more likely to sell out to sinners, which reduces consciousness. So the marginal effect of more intelligence is to reduce consciousness, which would make it weird to say that intelligence creates consciousness. Consciousness can maybe be said to exist within the space created by intelligence, but only when not pushed out by sin.

Expand full comment
tailcalled's avatar

One could also argue that the absence of intelligence in rocks is itself a sin, hence why we're turning them into computers for artificial intelligence.

Expand full comment
tailcalled's avatar

Higher consciousness can often pass through the sins of lower consciousness.

Expand full comment
Matthias Görgens's avatar

There are only a few very select kinds of rocks we are turning into computers. And only a vanishingly small subset of the right kind of rocks is being turned into computers.

Expand full comment
tailcalled's avatar

There's definitely a speed issue where it's not happening very fast, but there's no clear bound where it will stop.

Expand full comment
Matthias Görgens's avatar

We haven't turned any of the rocks in the Alpha Centauri system into computers, and will perhaps never do so.

Expand full comment
tailcalled's avatar

Alpha Centauri is too far away for me to worry about modelling it accurately. It might be an exception to my theory.

Expand full comment
uugr's avatar

This made me smile.

Expand full comment
Shankar Sivarajan's avatar

It seems like your conception of "sin" is flexible enough for this reasoning to be circular: consciousness is suppressed by sin because anything that suppresses consciousness is a sin.

Expand full comment
tailcalled's avatar

I feel like it's kind of like F=ma in physics. You could also say that it's tautological because we can just declare any change in velocity to be a force. But the interesting thing is that forces are usually fairly intuitive and can be taxonomized and used for practical modelling. Similarly, even if "sin" groups together mechanistically dissimilar things, it's still going to roughly match intuitive concepts of sin and be useful for modelling interference with consciousness in sufficiently well-understood settings.

Expand full comment
Shankar Sivarajan's avatar

Okay, good analogy. (Any change in momentum, but yes.)

The difference is that momentum is well-defined, and so while forces can be made up ad hoc ("action at a distance"), they are constrained by making predictions about something that can be observed (usually velocity changes).

What you're doing seems to be taking something unobservable and giving its derivatives names.

Expand full comment
tailcalled's avatar

I agree that "consciousness spontaneously appears in the absence of sin" is only useful if you're trying to model consciousness, yeah.

Expand full comment
tailcalled's avatar

I guess I should add, the core dynamics of rocks seem well-modelled using Newtonian mechanics, so the reason I'm not giving super definite answers about what my theory says about their consciousness is that it doesn't seem high priority for me to model, and there seems to be many consistent ways to model them, so it seems like it would be better to hold off on anything definite until there's a new source of evidence that picks which of the many answers for rocks is most applicable.

Expand full comment
Marius Binner's avatar

"consciousness spontaneously arises in the absence of sin"

That's quite exotic! Why do you believe this?

Expand full comment
tailcalled's avatar

Empirically it seems to hold well, and with each sin it's easy to come up with mechanistic explanations about how it suppresses consciousness.

Expand full comment
TGGP's avatar

Perhaps it's just always easy to come up with explanations regardless of whether those explanations are true.

Expand full comment
tailcalled's avatar

Let's test it. Can you come up with any explanation of consciousness that works as well as "consciousness spontaneously arises in the absence of sin"?

Expand full comment
MissingMinus's avatar

Yes. "Consciousness comes forth in complex systems that have pressures to take in inputs and produce outputs that shape the world" works far better as an explanation. It isn't a great explanation, but your concept of sin is substantially more complex and vague.

Are you just using it as a reductio ad absurdum?

Expand full comment
tailcalled's avatar

I think that's an overlapping explanation since such pressures tend to remove sin of the system towards itself. The issue is it doesn't necessarily remove sin of the system towards others, but such sins also lower consciousness via karma.

Expand full comment
TGGP's avatar

I don't see what the concept of sin gets you. But then I also don't see much value in the concept of "consciousness" either, regarding most talk about it as a waste of time.

Expand full comment
tailcalled's avatar

Consciousness is a pretty big factor in agency.

Expand full comment
MissingMinus's avatar

This is somewhat uncharitable from me, but when people say stuff like that, my default assumption is simply because it sounds esoteric and fanciful unless they can provide an actual strong argument in favor. Their other replies do not fill me with hope.

Expand full comment
tailcalled's avatar

I feel like this is kinda word salad. You said what attributes of my writing (esoteric & fanciful, lacking arguments in favor) motivate your assumption, but not what the assumption actually is.

Expand full comment
MissingMinus's avatar

I'm replying to "why do you believe this?", the question the parent commenter asked. I could have been clearer, perhaps.

Expand full comment
Taleuntum's avatar

If I may ask, are you the same as the user called "tailcalled" on lesswrong?

Expand full comment
tailcalled's avatar

Yes, and on Twitter.

Expand full comment
Taleuntum's avatar

Thanks for the answer. I occasionally saw your posts there and found them insightful. Can you somehow motivate why that would be a likely solution to the hard problem? From my vantage point, it's only a little more likely than saying the hard problem is solved by realizing that consciousness spontaneously arises in the absence of cheese, and then going on to make up ad-hoc explanations and further consequences like "the French are the way they are because of the abundance of cheese in their country."

Expand full comment
tailcalled's avatar

If the solution to the hard problem of consciousness was that it spontaneously arises in the absence of cheese, it would have to be hard to maintain consciousness in the presence of cheese. But empirically, it's easy. Like to test, you can try going to the store to buy some cheese and bring it with you, and you will find it easy enough to maintain consciousness. People might ask you why you're bringing the cheese, but then you could just tell them that you're testing the effect of cheese on consciousness, and that would make them conscious of the motivation for the cheese.

On the other hand, empirically it's hard to maintain consciousness in the presence of sin. Often, people who sin a lot are also very dissociated, and communities that sin a lot have poor communication and internal investigative capability, leading to lowered collective consciousness. If you murdered some random person, you wouldn't be able to tell people that you did it in order to test the effect of sin on consciousness, and instead you'd probably feel forced to prevent people from being aware of this, thereby lowering their consciousness.

(In practice as long as your consciousness is sufficiently high, this on its own would prevent you from murdering people, because you'd always see better options than mortal sin.)

Expand full comment
Taleuntum's avatar

One could reason that when you buy cheese in the supermarket, you are not sufficiently close to cheese to experience it fully. Your consciousness is suspended only by the bliss resulting from tasting the first bite of a perfect Roquefort.

What about other sins, like lying? You have to be verbally very competent and greatly aware of your own and others' minds to do it effectively.

Expand full comment
tailcalled's avatar

The cheese theory doesn't explain dissociation very well, like dissociated people aren't eating cheese so much.

Lying reduces consciousness through bad karma. You don't necessarily need to have high consciousness in order to lie effectively, since liars are often trapped in a state of hell where everyone else also has even more reduced consciousness.

Expand full comment
Schneeaffe's avatar

I am also familiar with your previous work. Your views on this topic are drastically more "out there" than anything I've seen from you before. Your Twitter looks like it goes back more than a day, so probably not just shrooms. Please get checked for Vassarite psychosis.

Expand full comment
tailcalled's avatar

I don't do drugs. (Except a small prescribed dose of Ritalin, and melatonin.) It's just that I was enlightened on May 20th. It's not a Vassarite thing; rather, it represents a break with the Vassarites, as they are all atheists, whereas I saw God and found Jesus.

Expand full comment
Schneeaffe's avatar

I meant this in the sense of "not on a trip", without commitment to whether you do drugs generally. And... look, the thing with the Vassarites is not exactly a well-understood mechanism. Just people around that scene going psychotic. And your theology talk sounds like someone adjacent to that scene.

It's entirely possible to be posting the things you do and also be generally well-adjusted, but there's no way to prove a positive on that over the internet, and the sudden change is reason to be worried. I don't know who you know IRL, but just ask your most normal friends whether they think there's been more change with you than developing (more) unusual beliefs. What do you have to lose from doing so if you're right?

Expand full comment
tailcalled's avatar

The mechanism with the Vassarites is pretty well-understood. There's a sort of implicit semi-conspiracy keeping society together despite moral degeneracy. Some people get "trapped"/"squished" by some of the sins of that conspiracy, and end up learning more about the sins of the conspiracy from the Vassarites. After learning about it, they go into a sort of panic mode, trying to develop a new moral/epistemic foothold. The main reason people pretend it's not understood is because then they'd have to admit that the Vassarites are really onto something.

Anyway, don't worry, I'm definitely keeping contact with the people I know about what's going on.

Expand full comment
AnthonyCV's avatar

"consciousness spontaneously arises in the absence of sin. Like suicide, murder, lying, addiction, usury, etc. all interfere with consciousness"

This is very clearly a Judeo-Christian understanding of the concept of 'sin', since other traditions often don't share it, but it seems directly at odds with the idea in Genesis of some forms of conscious self-awareness arising from the sin of eating of the tree of knowledge of good and evil.

Expand full comment
tailcalled's avatar

I'm basing it on personal revelation rather than the bible. I see little reason to think that knowledge of good and evil causes consciousness.

Though I should emphasize that I don't think people should pursue me (or anyone else) as a prophet or pursue personal revelation or holy scripture in general. Such pursuit requires the paradoxical situation where you know less than God (and thus can't just rely on your personal judgement) and know more than God (and thus can infer God's knowledge). If the theology I'm pushing doesn't make sense to others, then they should ignore or dispute it.

Expand full comment
David Manheim's avatar

I think there's a slight modification of the statement you're disagreeing with which fixes the claim:

It is extraordinarily improbable that [emergent or constructed beings not exposed to the ideas beforehand] would behave in this manner if they did not actually possess qualia.

That is, I think Yudkowsky was at the time imagining a de novo, first-principles AGI, rather than LLMs explicitly trained to produce text that matches human-produced text in their training corpus (and then fine-tuned to match broader human expectations, and then default-system-prompted about what they should say about qualia).

Expand full comment
Scott Alexander's avatar

I'm not talking about LLMs or AIs. I'm imagining beings with no past exposure to humans or any human ideas. They're talking about the same things humans do because we define them as structurally similar to humans, not because they're influenced by humans.

Expand full comment
Shaked Koplewitz's avatar

There is another possible explanation of that case though, which is "consciousness is some kind of emergent property that will inevitably develop in anything human-like", which would make your example impossible, like imagining a race of short tall people.

(Scott Aaronson once had a theory about it being a result of the ability to meta-process algorithms. I don't remember the details, but if something like this were true, it would make your hypothetical example impossible.)

Expand full comment
David Manheim's avatar

Yeah, part of what I was saying in claiming that the edit fixes Eliezer's claim is that, to the extent we're defining the system as structurally similar enough and it does express things in this way, it is a correct prediction that it has subjective experience.

Expand full comment
Bentham's Bulldog's avatar

P-zombies are by definition exactly physically the same as people, so they would inevitably talk about consciousness exactly the same way as humans. Zombie Chalmers, for example, would say exactly the same things about consciousness that non-zombie-Chalmers does.

It sounds like your point is more like: creatures that were not conscious but very similar to humans would talk about consciousness which might be right! But it's slightly separate from the zombie question.

(One of the things that most irritates me about Eliezer is I think he pretty flagrantly misrepresented what zombies were, leading to me being deluded about the argument for about a year--and quite confident in my delusion as well!)

Expand full comment
Simon's avatar

Would you mind spelling out the misrepresentation more explicitly (if you haven't already done so on your blog, with me missing it)?

Expand full comment
Taleuntum's avatar

To be clear, I'm just being helpful, and this is not an endorsement of what's written. It was a while ago, but IIRC it was the classic philosopher move where they say: "Aha! You just assumed something whose negation is not technically logically contradictory, just very very unlikely." and you spend a lot of time trying to understand their objection, which is technically true but provides little value imo. However, your mileage may vary.

Expand full comment
Michael Watts's avatar

> In the days of my youth, about two years ago, I was a big fan of Eliezer Yudkowsky. I read his many, many writings religiously, and thought that he was right about most things.

I don't really understand where this perspective comes from. When I was presented with essays on Overcoming Bias (mainly via StumbleUpon), I read them and I enjoyed them. But I never tried to seek out more of Yudkowsky's writing, because he gave me a really strong impression - in the essays that I liked - that he preferred to make his arguments on the basis of sounding eloquent rather than being correct.

Expand full comment
Taleuntum's avatar

In the community, Eliezer is pretty high status (https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts), and as a modal human it's hard not to be influenced by apparent status, but there is great variation in this respect among people, from schizoids to Williams-syndrome patients. You're probably just among the less easily influenced.

Expand full comment
uugr's avatar

This might not be the case for most, but that's precisely why I sought them out. I found their eloquence therapeutic.

Expand full comment
Michael Watts's avatar

There's not a shortage of eloquence. Read fiction or poetry, or listen to music, and you can have as much as you want without the pretense that you've discovered a new way to think.

Eloquence is a bad thing when you're making factual claims, because its only function is to disguise the accuracy of the claims.

Expand full comment
Taleuntum's avatar

Unless I misunderstood Scott, the p-zombies in the post are different from the usually assumed p-zombies because of a second difference: they are not assumed to report having conscious experience.

Expand full comment
Melvin's avatar

I have no idea why someone would enter an old established philosophical debate and start out by defining terms in a way that contradicts the way they've been used for decades. It's only going to be confusing.

Expand full comment
Shankar Sivarajan's avatar

That assumes they've been used in a consistent way. I don't think the concept is coherent enough to make such consistency possible.

Expand full comment
JohanL's avatar

Agree, we might call these p(2)-zombies.

But they are *even more* difficult to imagine than the regular type.

Expand full comment
Kenny Easwaran's avatar

Yeah, I think this point is closer to the sort of thing that goes on in Sellars's "Empiricism and the Philosophy of Mind" and Dennett on the intentional stance than to a Chalmers point. It does seem different from anything specific I've heard someone say.

Expand full comment
Amos Wollen's avatar

“I agree I am scum and not worthy of kissing the boots of anyone with formal credentials, please don’t kill me”

I follow a lot of philosophy professors on Facebook and sometimes your articles make the rounds — I think there’s a pretty broad consensus that you are a real philosopher

Expand full comment
Scott Alexander's avatar

Have you never gotten the sort of comments I'm implicitly responding to here? You must live a blessed life.

Expand full comment
Amos Wollen's avatar

Not much, actually. I have like one or two credential-adjacent things which make me immune from all criticism. When I do get those sorts of comments, it's usually deserved. Life is good.

Expand full comment
demost_'s avatar

I am guilty of having written such comments, but please please please don't let them discourage you from writing! Even if every single bit of an article has appeared somewhere else, you manage to recompose the bits in a unique and insightful way.

Expand full comment
Brian's avatar

But, like, real ones? With h-indexes and beards and ribbons?

Expand full comment
Amos Wollen's avatar

Yes, all of them had ribbons

Expand full comment
Kenny Easwaran's avatar

I don’t think there’s a consensus on what a “real philosopher” is!

For what it’s worth, I count Scott Alexander as a bit less of a “real philosopher” than Scott Aaronson or Stuart Russell or Joseph Henrich (actual academics who publish some philosophy-adjacent work) who are themselves less than Noam Chomsky or Alison Gopnik or Sean Carroll (who actually publish in philosophy). But I do count Scott as a relevant influence on philosophical matters, and worth paying attention to for some philosophers, even though there are always difficulties in engaging with work operating according to the conventions of a different academic discipline, and even more difficulties engaging with work that doesn’t have any sort of academic discipline. It’s still philosophically relevant.

Expand full comment
Alex's avatar

You have rediscovered what Chalmers calls the “paradox of phenomenal judgment” (chapter 5 in The Conscious Mind)! Dennett and Shoemaker also discuss it earlier; the chapter should have the citations.

Expand full comment
Scott Alexander's avatar

I have not heard that term before, but reading https://aporia.byu.edu/pdfs/naegle-the_paradox_of_phenomenal_judgment_and_causality.pdf , I think it might be the preexisting argument I'm responding to, not my response.

Expand full comment
Alex's avatar

Chalmers agrees that zombies talk as if they have qualia but lack qualia, so he disagrees with the bolded sentence too; that's why he thinks the situation is paradoxical (how can we know we have qualia if we'd go on talking about qualia even if we were zombies?! But we do have qualia!) instead of concluding that consciousness must be physical/shared with all physical duplicates like your zombie twin.

Expand full comment
Kenny Easwaran's avatar

I think it’s slightly different. Chalmers assumes p-zombies are completely outwardly indistinguishable, so they are already talking about qualia. Scott is trying to argue that even creatures a lot like us that start with a completely non-phenomenal language would eventually add phenomenal-seeming parts to the language.

This is closer to what Sellars goes through in “Empiricism and the Philosophy of Mind”, and what Dennett is after in explaining the usefulness of the intentional stance, but a bit different from any of them.

Expand full comment
Alex's avatar

That's fair--didn't mean to suggest that every step of the argument was a dupe or had no novel elements, just that the topic/paradox has been discussed, and that it isn't just the argument he said he was responding to (the bolded Yudkowsky claim).

Expand full comment
Man in White's avatar

I'd say checking the original is more helpful

"I will argue at the end of this chapter that the semantic content of my phenomenal beliefs is partly constituted in subtle ways by conscious experience itself (for example, red sensations may play a role in constituting the content of certain beliefs about red sensations). If this is so, then some of the zombie’s judgments will have content that are not as rich as my corresponding belief contents. Nevertheless, they will at least correspond one-to-one to mine, will have the same form, and will function in the same way in directing behavior as mine."

Expand full comment
Man in White's avatar

As for

"This argument confuses me. It still seems like, even if the p-zombie is using an inner encoding scheme in which red is represented by a conceptual primitive, they still aren’t “experiencing” the mysterious redness of red, just . . . I don’t even know how to end this sentence. Just using an encoding scheme that matches it perfectly and causes them to describe it the exact same way that we do?"

Yes, according to Chalmers

"He is physically identical to me, and we may as well suppose that he is embedded in an identical environment. He will certainly be identical to me functionally: he will be processing the same sort of information, reacting in a similar way to inputs, with his internal configurations being modified appropriately and with indistinguishable behavior resulting.

Expand full comment
Ken Kahn's avatar

Claude seems to think there is some philosophical novelty in your post: https://claude.ai/share/c9f099b3-e5d9-4fb5-8338-b1a6535f5402

An easy way to avoid "clearing the bar of spending weeks reviewing every philosophy paper ever written until I figure out that it's similar to an idea in Schmoe & Schmendrick 1972".

Expand full comment
Scott Alexander's avatar

I don't know how much I trust Claude here - he's too nice, and "novel" is too subjective. But I appreciate the thought.

Expand full comment
Chris K. N.'s avatar

I had to look up Schmoe & Schmendrick 1972, and it doesn't even mention qualia:

https://schmoe-schmendrick-1972.tiiny.site

Expand full comment
Viliam's avatar

This is perfect.

Expand full comment
vtsteve's avatar

+1000000

Expand full comment
Xpym's avatar

>they still aren’t “experiencing” the mysterious redness of red

They aren't experiencing themselves having the experience. Like, imagine that a real human is sleepwalking, and is still able to describe that picture to you in perfect detail, but then wakes up having no memory of it. Of course, it's imaginable that sleepwalkers are still conscious, just that their memory recording mechanism doesn't work, but the whole point seems to be about having awareness of having awareness. If this meta-awareness didn't exist then reports about having it from p-zombies would indeed seem improbable.

Expand full comment
Steve Sailer's avatar

To do philosophy, you are supposed to reference earlier philosophers: e.g., "All men are mortal. Socrates is a man. Therefore, Socrates is mortal" is philosophy.

Socrates, Plato, or Aristotle are A+ references, Dennett is a C+ reference, Yudkowsky is ... well ...

Look, there is a Philosopher's Guild and it has rules.

Expand full comment
Nicolas D Villarreal's avatar

Not reading philosophers outside your own school's canon makes you actively dumber, yes.

Expand full comment
JohanL's avatar

The classical philosophers aren't much use in Philosophy of Mind, though - it's not a main area of interest for them. I mean, good grief, Aristotle thinks the brain is a mere coolant system!

Expand full comment
TGGP's avatar

For some people, perhaps it is...

Expand full comment
Viliam's avatar

sometimes it produces cool ideas

Expand full comment
Shankar Sivarajan's avatar

Yes, and it's only Philosophy if it's in compliance with the Rules. Otherwise, it's merely a rando blogging dangerous disinformation and "doing his own research" instead of Trusting the Experts, same as any other Science.

Expand full comment
uugr's avatar

(And yet, the most interesting works never quite seem to abide by them...)

Expand full comment
Kenny Easwaran's avatar

That's because there are selection pressures. It's just like how the best athletes at a university often have the worst grades: athletics and grades aren't particularly correlated, but if you're good enough at one, they let you slide on the other.

Expand full comment
Kenny Easwaran's avatar

If someone doesn’t engage with pre-existing literature on the subject, but they write something interesting, I’ll say revise and resubmit and ask them to engage with this literature. That’s how you make it more useful for readers and contribute more to the growth of human knowledge. Unfortunately, I was not the reviewer in charge of David Hume or Michael Dummett, but in any case you can get away with a lot more in a book than in an article.

Blog posts are more like books that way.

Expand full comment
Alice Wanderland's avatar

I've been thinking about how to make this philosophical point more real to humans recently, so I went looking for an actual human without any qualia of colour (where qualia = perceiving/feeling things like "the mysterious redness of red"), and found these videos of Tommy Edison, a person who is fully blind *from birth*, talking about his non-experience of colour. Even though he's never experienced the "mysterious redness of red" or the qualia of any other colour, Tommy Edison still has a favourite colour, and uses language containing colour words just fine, even as he acknowledges he has no idea what he's talking about in the sense of knowing what "red" actually looks like.

I'm not saying Tommy Edison is a p-zombie. Tommy Edison is certainly a conscious person. But *with respect to visual colour* he also never possessed the qualia of colour. Yet he can still learn facts about how colour relates (or doesn't relate) to his other senses of hearing/smell/taste etc. If I imagine a hypothetical world where Tommy Edison sat down and decided to memorise a large dictionary of facts about what objects are what colours, I imagine one wouldn't immediately be able to tell that Tommy Edison doesn't have colour-qualia if you talked to him over the phone.

Would it be ok to say that, with respect to colour only, Tommy Edison is a colour-p-zombie lacking colour-qualia-consciousness -- who still nonetheless talks about having "a favourite colour" which even changed over time?

1) A Blind Person's Perspective of Colors (https://youtu.be/59YN8_lg6-U?si=wcnyXf7hrdwD8ufp)

2) Blind Person’s Favorite Color? (https://www.youtube.com/watch?v=nwgkF_HOh-I&list=PL292onq6HAMQBB0axBTUvHruBK-0y5HqS&index=8)

Of course, the crux of this sort of question is always "what sort of consciousness do you mean?".

I think this will probably be resolved once we mechanistically understand the difference between a base perceptual system (e.g., a system that turns energy in photons/mechanical waves/chemical molecules into action potentials) and a self-reflective perceptual system that "perceives" the patterns within a base perceptual system in order to pull out common features across our individual sensory/base perception systems. (Which our brains would probably need, in the same way that the mechanistic interpretability people realized they needed SAEs or something else to help "interpret" or "pull out patterns" from the base activations of large trained transformers.)
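
For readers who haven't met the SAE idea mentioned above, here is a minimal sketch of a sparse autoencoder objective; the dimensions, coefficient, and random data are illustrative placeholders rather than values from any particular interpretability paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "base activations" standing in for a layer of a trained network.
d_act, d_dict, n = 16, 64, 1024
acts = rng.normal(size=(n, d_act))

# An SAE learns an overcomplete dictionary whose sparse codes reconstruct
# the activations; the L1 penalty pushes each activation to be explained
# by only a few (hopefully interpretable) features.
W_enc = rng.normal(scale=0.1, size=(d_act, d_dict))
W_dec = rng.normal(scale=0.1, size=(d_dict, d_act))

def sae_loss(acts, l1_coeff=1e-3):
    codes = np.maximum(acts @ W_enc, 0.0)   # ReLU feature codes
    recon = codes @ W_dec                   # reconstruction of the activations
    mse = np.mean((recon - acts) ** 2)      # reconstruction error
    sparsity = np.mean(np.abs(codes))       # sparsity penalty
    return mse + l1_coeff * sparsity

print(sae_loss(acts))
```

Training minimizes this loss over the weights; the "interpreting" then consists of looking at what makes each dictionary feature fire.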

Expand full comment
JamesLeng's avatar

Plenty of people who haven't seen Goncharov (1973) manage to form opinions about it, but what they really end up with are opinions about everyone else's opinions, or personal preferences back-propagated into hallucination.

Expand full comment
Deiseach's avatar

Because there's so many layers in that movie, it subverts expectations, it transposes heroes and villains and asks us which is which or, indeed, if there is any difference?

Everyone brings their own experience to their interpretation of it, which is what elevates it from just "a really good crime drama" to a masterpiece.

Expand full comment
WindUponWaves's avatar

Sounds a lot like one of Scott's previous posts on this subject, "What Universal Human Experiences Are You Missing Without Realizing It?" (https://slatestarcodex.com/2014/03/17/what-universal-human-experiences-are-you-missing-without-realizing-it/), the section about the guy who was born unable to smell but had no idea for half his life:

"I have anosmia, which means I lack smell the way a blind person lacks sight. What's surprising about this is that I didn't even know it for the first half of my life.

Each night I would tell my mom, "Dinner smells great!" I teased my sister about her stinky feet. I held my nose when I ate Brussels sprouts. In gardens, I bent down and took a whiff of the roses. I yelled "gross" when someone farted. I never thought twice about any of it for fourteen years.

Then, in freshman English class, I had an assignment to write about the Garden of Eden using details from all five senses. Working on this one night, I sat in my room imagining a peach. I watched the juice ooze out as I squeezed at the soft fuzz. I felt the wet, sappy liquid drip from my fingers down onto my palm. As the mushy heart of the fruit compressed, I could hear it squishing, and when I took that first bite I could taste the little bit of tartness that followed the incredible sweet sensation flooding my mouth.

But I had to write about smell, too, and I was stopped dead by the question of what a peach smelled like. *Good*. That was all I could come up with. I tried to think of other things. Garbage smelled bad. Perfume smelled good. Popcorn good. Poop bad. But how so? What was the difference? What were the nuances? In just a few minutes' reflection I realized that, despite years of believing the contrary, I never had and never would smell a peach.

All my behavior to that point indicated that I had smell. No one suspected I didn't. For years I simply hadn't known what it was that was supposed to be there. I just thought the way it was for me was how it was for everyone. It took the right stimulus before I finally discovered the gap."

(from Mark Eichenlaub, https://www.quora.com/Graduate-School-Education/Is-going-for-a-Ph-D-worth-it-if-Im-extremely-passionate-about-a-field-but-dont-really-want-to-spend-my-20s-poor-toiling-away-in-a-lab-and-living-in-a-city-that-I-dont-particularly-want-to-live-in/answer/Mark-Eichenlaub?share=1&srid=slK)

Now that I think about it, there's also the case of people who have aphantasia/lack visual imagination and can't think in pictures. They walked among us for who knows how many thousands of years, including in jobs like "professional animator" and "Co-founder of Pixar", without anyone realizing until roughly 2015 when Zeman published the paper coining the term "Aphantasia". If they can lack certain forms of qualia without anyone noticing, why not people who lack *all* forms of qualia?

Expand full comment
azatol's avatar

I have a similar experience to the guy with no sense of smell, but my lack is a lack of depth perception, of true three-dimensional perception. I'm pretty good at math, but three-dimensional rotations/translations, etc., really befuddle me because I never had that true 3d perception.

I was born blind in my right eye, and I didn't really get that I was missing anything for most of my life. I turn my head more in order to keep things in my vision, but it wasn't until I got married that I realized I had always been very cautious when driving, and that I lost my sense of place more often.

Driving in the car with my wife, who has a very good sense of space, suddenly made me aware how much work my brain was doing to try to approximate how far things were away, without the ability to use two eyes to get a true sense of distance.

I still can't perceive what it feels like to directly perceive the distance of something. It's like all distance is just an artist's adjustment of how far away something is by making it smaller. It makes me wonder how normal people perceive screens and surfaces where everything is flat.

Expand full comment
Michael Watts's avatar

A puzzle that stuck with me was this one:

Imagine a cube, of side length 2, centered on the origin. Every coordinate of each of its vertices is plus or minus 1.

Cut that cube in half three ways: along the plane x = y, along the plane y = z, and along the plane x = z.

How many pieces result from these three cuts?

---

It's pretty easy to demonstrate that there are six. If you have the spatial processing (sadly, I don't), doing this is unnecessary, because you'll just know the answer.

I once presented this to a Chinese girl at the school where I was teaching, who had no interest in math. I drew the cube on the board and marked the three cuts. I asked how many pieces would result, and she replied, with no observable pause for thought, six. When I asked how she knew, she said "I can see it in my mind".

Later, the chemistry teacher walked into my room, saw what I'd drawn on the board, and wanted to know more about it. I gave him the question, and he loved it. He took me to his room, where he cut out a styrofoam cube and, three times, made one of the cuts and then colored the surfaces revealed by that cut with a marker. Three cuts, three colors. Then he took the cube apart to play with the six pieces and look at their shapes and which faces were made by which cuts.

But while he was clearly very interested in the problem, he couldn't see the answer any more than I can.
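
Incidentally, there is a way to get the answer without any spatial visualization: the cutting planes x = y, y = z, and x = z separate points precisely according to the relative ordering of their coordinates, and three coordinates admit 3! = 6 strict orderings, hence six pieces. A quick numerical check (a sketch; the sample count is an arbitrary choice):

```python
import random

random.seed(0)

# The planes x=y, y=z, x=z split the cube [-1,1]^3 into regions
# distinguished by the ordering of a point's coordinates; there are
# 3! = 6 strict orderings, hence six pieces.
def piece(p):
    # Coordinate indices sorted by value. Points lying exactly on a
    # cutting plane (ties) occur with probability zero when sampling.
    return tuple(sorted(range(3), key=lambda i: p[i]))

samples = ([random.uniform(-1, 1) for _ in range(3)] for _ in range(100_000))
print(len({piece(p) for p in samples}))  # prints 6
```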

Expand full comment
gorst's avatar

> If they can lack certain forms of qualia without anyone noticing, why not people who lack *all* forms of qualia?

Thanks! I was wondering almost* the same thing.

I am confused why so few people in the comments ask this question. I guess it has already been answered, since the p-zombie question has been around for a long time.

(* It may be self-contradictory, but maybe p-zombies experience sensory qualia but not consciousness itself. So I wouldn't phrase this question as being about *all* forms of qualia, and would instead aim it only at the conscious experience of qualia, if that makes sense.)

Expand full comment
TakeAThirdOption's avatar

> > If they can lack certain forms of qualia without anyone noticing, why not people who lack *all* forms of qualia?

> Thanks! I was wondering almost* the same thing.

But a bunch of unusual behavior connected with not having this or that kind of perception *going unnoticed* is not the same as that unusual behavior *not existing*.

The point is, you would not assume a tomato that's green tastes exactly the same as a red one. And they don't, because those properties are very tightly connected.

So why assume there is a property, consciousness, that is not connected with any other? You don't assume that of any other property, right? Weight, temperature, being alive, and so on.

I think p-zombies are a bad idea and nothing else.

Expand full comment
Nicolas D Villarreal's avatar

He does have qualia of color, to the extent that that refers to phenomenal experience; it's just that the structure of color, the differences between each concept, is flattened to one dimension rather than the three-dimensional (RGB) continuum most people have access to.

Expand full comment
Alice Wanderland's avatar

Hmmm interesting. Would you say your use of color qualia points to the same qualia which people typically point to by saying “the mysterious redness of red”?

Expand full comment
Nicolas D Villarreal's avatar

Yes, I think so. A structure (a concept), by virtue of only dividing up a continuum, means that the signal for red itself can be quite rich; that richness of the underlying signal/continuum, combined with the specificity given by the structure, is what makes up qualia.

Expand full comment
Nicolas D Villarreal's avatar

This is actually important for understanding concepts like the sublime: when looking at the ocean, or a fractal, the richness of the signal gestures at a complexity that a human could never fully impose structure on, and the experience of that can sometimes be awe-inspiring.

Expand full comment
Alice Wanderland's avatar

I see!

Expand full comment
Alice Wanderland's avatar

Your description matched what I thought initially too. But then I realised maybe that's only what a typically sighted person's experience might feel like.

Because when I listen to Tommy Edison describe things like a banana, and he explicitly says he does not think about the "yellow" or colour of a banana… I wonder whether I'm just typical-minding what colour qualia should be like.

I'm not confident Tommy Edison would himself spontaneously have all of those associations with something like "redness". I'd guess he'd associate it with things like certain sounds or tastes. But that's just my guess, of course.

https://youtu.be/gCwZCaawKNk?si=MV2_auNf-BiLV6uK

Expand full comment
Nicolas D Villarreal's avatar

Yeah, I mean, there's no reason for him to have that correlation necessarily. It'll be difficult to predict where the correlations would be for a blind person, since they're determined by whatever cultural artifacts they pick up, and there will be some weirdness in exactly what the distances between the color concepts are, since they're less grounded. But I'd guess that indigo, blue, and purple would still be clustered together, and further away from a concept of yellow or green.

Expand full comment
Skull's avatar

It doesn't sound like he has a favorite color, it sounds like he has opinions of other people's opinions of their own qualia.

Expand full comment
Alice Wanderland's avatar

Perhaps? This is someone saying the literal words "my favourite colour used to be… and now it is… because…", which are typically used to report certain internal human experiences. And Tommy Edison is a real, conscious human being, not a p-zombie or an LLM.

Make of that what you will when thinking about p-zombies and/or qualia in general, and/or colour qualia in particular. 🤷‍♀️

Expand full comment
JamesLeng's avatar

Someone could develop a favorite non-avian dinosaur, saint, subatomic particle, ethical axiom, or FTL engine, without ever directly observing any of them, or even meeting anyone who has. People care about all sorts of theoretical and/or arbitrary stuff. https://www.explainxkcd.com/wiki/index.php/915:_Connoisseur

Expand full comment
Melvin's avatar

I think that the talk of p-zombies confuses things. My understanding of p-zombies has always been that they're externally indistinguishable from humans, so talking about how their behaviour would differ from humans' is a bit incoherent.

I find it easier to imagine Pzaliens. Pzaliens come from the planet Pzearth, they're roughly as intelligent as humans but they've evolved a consciousness-free form of intelligence that allows them to behave at a similar level of sophistication without actually having any subjective conscious experience.

Now, how do pzalien philosophers of mind talk about their qualia?

1. They describe them much like humans, even though they don't actually experience them.

2. They don't do that; they find the question impossible to understand, they describe it in some pzalien way that is equally incomprehensible to us.

3. Trick question because pzaliens can't exist and it's not possible to independently evolve human-level intelligence without having qualia.

What I think the bolded comment is saying is that (1) is incredibly unlikely and the answer has to be either (2) or (3). I would go along with this. Although a p-zombie or an AI which directly imitates a human would claim to have qualia (because claiming to have qualia is easy and it wouldn't be much of an imitation otherwise), a non-conscious alien intelligence which has never experienced qualia would be very unlikely to make up the concept on its own.

Expand full comment
Throw Fence's avatar

I tried exploring (2) in the form of a dialogue: https://objectiveobservations.substack.com/p/contact-of-the-strange-kind.

Expand full comment
JohanL's avatar

Probably 2, possibly but unlikely 3.

It’s not obvious what this achieves philosophically, though.

They might eventually learn how to talk to us about qualia, just like how, if we could talk to dolphins, we might be able to talk about the sonar experiences they have even though we can't have them, once we learn the words (or "words").

Expand full comment
Melvin's avatar

I think 3 is more likely than 2. (And being able to clarify this disagreement is what it achieves philosophically.)

My strong suspicion is that consciousness is not an optional extra to intelligence, it's a necessary component... at least for intelligence that has evolved organically. In order to synthesise all the sensory data and make intelligent decisions, there needs to be some sort of centralised thingy in the brain, and that thingy will inevitably wind up having some kind of conscious experience.

I specified organically evolved intelligence. I now suspect that there might be another way to behave intelligently without being conscious, which is to imitate an intelligent being. This is of course what LLMs do. But an LLM must be trained on a corpus of behaviours from other intelligences, so an LLM-type intelligence isn't going to evolve on its own.

Expand full comment
JohanL's avatar

3) is a *really* strong claim, though. It's not just that this would often be a good idea in evolution, but that it's literally impossible to have intelligent behaviour without consciousness. Particularly since we _seem_ to be able to make non-conscious machines with intelligent behaviour.

Expand full comment
ContemplativeMood's avatar

This all becomes much easier if you simply conclude that qualia don't exist. Can't exist. Our brains hallucinate an illusion of depth when generating conscious feedback loops. Like a TV camera pointed at its own monitor.

Expand full comment
Scott Alexander's avatar

Big if true, but I feel like this is denying the single most undeniable thing possible. While it might eventually be the right path, I think explaining why the illusion is so convincing is basically an identical problem to explaining how qualia could exist in the first place.

Expand full comment
ContemplativeMood's avatar

I really hope there’s a complicated answer at the end of this. And not simply that it makes it 0.001% easier for us to hunt woolly mammoths and avoid being eaten by cave lions if we believe we’re conscious.

Expand full comment
JohanL's avatar

My guess is that the whole thing starts as a useful way to represent sense-data. The brain creates a mental model of its surroundings, and then calls on this mental model for decision input.

Expand full comment
Brendan Long's avatar

I think it's more than that. You can use sense data to execute actions without being consciously aware of it, like in the case of sleepwalking, driving somewhere without remembering the drive, or muscle memory in sports and music (I'm giving multiple examples because maybe one of these isn't valid, but if any one is then it's sufficient).

The weird thing isn't that we can use our sense of sight to drive a car. The weird thing is that we can have the experience of using our sense of sight to drive a car.

I have a theory that this is necessary for meta-learning and strategy, and I essentially don't believe p-zombies are possible. A being that doesn't experience consciousness may be able to drive a car, but it won't learn from the experience or be able to describe it in the way that a conscious being could.

I also think AIs are misleading for two reasons: not only are they trained to reproduce text that came from conscious beings, but they're also able to cheat by relying on notes (they can see their full output history). If you had a video recording of a time you drove somewhere but don't remember doing it, you would also be able to describe the conscious experience of doing it even though you didn't actually have the experience.

Expand full comment
Dan L's avatar

This is the eliminative materialist position in a nutshell, no? The problem with qualia isn't that they "exist" per se, it's that the standard account smuggles in extraphysical properties in the definition. I like your passage here:

>This argument confuses me. It still seems like, even if the p-zombie is using an inner encoding scheme in which red is represented by a conceptual primitive, they still aren’t “experiencing” the mysterious redness of red, just . . . I don’t even know how to end this sentence.

Conceptual primitive, check. Direct role of that conceptual primitive in conscious processing, double check. Non-material reification... not necessary!

Expand full comment
Nicolas D Villarreal's avatar

How are qualia "deep" in the first place? Most matter doesn't have an active memory of signals it's received, or organize those signals into patterns. But considering that we do, how are qualia not just an obvious extrapolation from that? Yud and co just have trouble squaring this with their reductive materialism because they forgot that maps and minds don't exist out there in the world of atoms (nor does the concept of an atom, for that matter). I talk about this a bit here:

https://nicolasdvillarreal.substack.com/p/the-impossibility-of-a-rational-intelligence

Expand full comment
Robert Beard's avatar

Illusionism is just obviously true. Your brain simulates/hallucinates a model of what a homunculus sitting in your skull would experience. That’s not an entirely coherent simulation (what’s in the homunculus’s skull to explain its qualia?), but it’s a very useful organizing construct to give you agency and long-term goals and desires and all that.

It seems kind of pointless to argue whether the simulated homunculus hallucinated by your brain “really” “is” “conscious” or not. There’s no actually open question left. The actual mechanism of your conscious experience has been fully explained. A hallucinated homunculus experiences things the way you do because that’s what you are. There’s no fact of the matter about which words belong in scare quotes and which don’t.

Expand full comment
Michael Watts's avatar

What do you mean, "big if true"? Give me an example of any consequence.

"Qualia" is just a word people use; it doesn't have a meaning. The phenomenon is pretty common, mostly seen in theology.

Expand full comment
demost_'s avatar

I can't help citing a part of my book review of "Consciousness and the Brain". This is the most speculative part, and it addresses a possible reason why the illusion(?) of having an inner self is convincing and almost universal. But the same applies to qualia. The speculative answer is that qualia are so convincing because our internal representation of them is extremely consistent. They appear in moments where our different brain regions coordinate into exceptionally coherent activity. That doesn't quite explain why the brain perceives or constructs qualia in the first place. But our brain is a very competent pattern detector, and this is a particularly strong and consistent pattern. So my answer to the question of why the brain makes a concept out of this is: why not? It's what the brain does with all particularly strong and consistent patterns.

https://www.astralcodexten.com/p/your-book-review-consciousness-and

"

The brain is very good at decomposing the world into units that make sense. For example, when I look out of the window, I automatically decompose the image into several houses, a few people, a dog, and so on. (...) We have a very clear body schema, i.e., we consider our own body as a unit (with various sub-units, of course). (...) So the body schema is a construct that is shared by almost all people (some get it wrong), because it makes so much sense, both to us and to alien observers.

What would an alien observer say about the neural activity in our brain? There are a lot of different brain areas with different functional rules. Sometimes a few areas interact with each other, and our visual system may activate our motor control system without much other brain areas involved. A lot of neurons plainly contradict each other. Perhaps the alien would conclude that the subunits have a lot more descriptive power than just the coarse category "brain". So it might decide to describe it as Cartesian theater. But now imagine that the alien is only allowed to observe the brain *in conscious moments*. As we have learned, in these moments all regions of the brain have agreed on the same interpretation of the world. In the theater picture, the alien would only observe the actors at times when they all speak in a perfect chorus. In this case, it might conclude that the sub-units are not so important, and that the chorus is much more important than its part.

I think this is precisely where our concept of "my mind" comes from. Remember that our episodic memory might be exclusively formed from conscious moments, and also implicit learning gets a strong boost from consciousness. So when "we" (our brain, or the actors in the Cartesian theater) learn a "mind schema", then this is based on the conscious moments, not on the activities in between. On this basis, it makes sense to merge all our neural activity into a single unit, which we call "I" or "my mind". Just as we form the concept of "my body", but even stronger, since we never "observe" different parts of our mind to be incoherent or even independent. Once the concept of "I" is formed, any conscious perception is connected with the "I" unit, so a conscious perception of the color red is translated into "The neural activity of the myself-unit represents the color red", which in common terms is "I experience the color red".

"

Expand full comment
Signer's avatar

Why do you think it's undeniable? Like, what argument do you still consider to be a valid reason to trust your sense of undeniability?

What's actually undeniable is the existence of the universe, because cogito ergo sum in the broad sense. Any specific qualia are deniable: you can misremember what you were seeing, so there is an external truth-condition (something like "these neurons did such-and-such things"), so it is possible in principle to decouple your thoughts of certainty from what actually happened with your experience.

Expand full comment
Kenny Easwaran's avatar

Here is Galen Strawson making that point: https://www.nybooks.com/online/2018/03/13/the-consciousness-deniers/

I still side with Dennett.

Expand full comment
Kiel's avatar

Could your sense of qualia simply be a "sense of realness" of an experience versus an imaginary one? I think it's safe to say that something resembling that sense exists, given that we count its weakening as mental illness.

Expand full comment
Andreas Jessen's avatar

This argument feels a bit like "Imagine a car without wheels that behaves in every way like a normal car. As it behaves like a completely normal car, it can also drive. Therefore, cars without wheels can drive."

The whole premise of "a being that is not conscious but behaves like one that is" is super unintuitive to me. I have no idea if it would behave the same or not, and I have no idea if it would actually be able to report that the rose is red.

Maybe this is similar to how some autistic people have difficulties recognizing emotions of others. They have to memorize which sign stands for which emotion and can't notice the emotion intuitively. Maybe p-zombies would do something similar with reporting the color of the rose. They look at the rose and don't experience the redness of red, but they can still learn that the signal coming from their eyes stands for red. But who knows if they are able to do that within 100 ms.

I don't think one can just take it as the premise that they can do this and then get any meaningful conclusions out of this thought experiment.

Expand full comment
Ragged Clown's avatar

This is how I think of it, too. My abbreviated version of the p-zombie argument is:

Imagine a p-zombie that is physically identical to a conscious human, except it is not conscious. Therefore, consciousness is not physical.

We have no reason to believe that a p-zombie is not conscious except that Chalmers said so.

Expand full comment
JohanL's avatar

That p-zombies are either impossible (logically or metaphysically) or incoherent is the majority view among philosophers (from that 2013 study).

Expand full comment
TakeAThirdOption's avatar

> We have no reason to believe that a p-zombie is not conscious except that Chalmers said so.

😄

Expand full comment
TGGP's avatar

Cars could drive on treads instead of wheels.

Expand full comment
Andreas Jessen's avatar

I knew I had that comment coming :D

I think the appropriate way to respond is to point out that treads have wheels inside them too.

Expand full comment
TGGP's avatar

Are gears considered to be a type of wheel?

Expand full comment
Andreas Jessen's avatar

To call the sprocket wheels that are used for driving the tracks a type of wheel is probably a bit of a stretch, but I was talking about the so-called "road wheels" at the bottom of the tracks that actually carry the weight of the vehicle.

Expand full comment
ultimaniacy's avatar

This response is actually a good demonstration of why, contra Scott's and Chalmers' claims, the concept of a "p-zombie" is trivially logically contradictory.

If we posit a car without wheels, it's coherent to claim that it might be able to drive in another way -- but, because normal cars use wheels to drive, it is *not* coherent to say that such a car would drive *in the same way a normal car does*. In order for the "car that drives without wheels" concept to be made coherent, one needs to be able to posit an alternative mechanism X by which the car could drive. But then it's not "a car that's exactly like a normal car, but with no wheels" -- it's "a car that's like a normal car, but with no wheels and also with X instead".

Likewise, in the case of the p-zombie: take a scenario where I observe myself doing something because of an experience -- for example, I describe a rose as red, because I see red when I look at it. If my zombie-world counterpart looks at the same rose, then -- logically -- one of the following three things must be true:

(1) the p-zombie does not describe the rose as red,

(2) the p-zombie describes the rose as red, but for a reason that is entirely different from the reasons why I do so,

or (3) the p-zombie describes the rose as red, but not for any reason at all.

If (1) is true, then the "p-zombie" doesn't behave identically to its human counterpart, and therefore isn't a p-zombie.

If (2) is true, then the "p-zombie" isn't like a human aside from lacking consciousness. It's like a human, except that instead of having consciousness, it possesses some other property normal humans lack, which allows it to act as if it has consciousness. But again that means it's not a p-zombie.

And if (3) is true, then the "p-zombie", again, isn't like a human aside from lacking consciousness. It's an entity that's like a human, except that it exists in a world which does not run according to consistent rules of causality. But that means that its world doesn't operate on the same physics as ours, which again means that it's not actually a p-zombie.

So there is no logically possible scenario in which a p-zombie could exist. QED.

Expand full comment
Shankar Sivarajan's avatar

Is the idea of "being that doesn't have a soul but behaves like a normal ensouled person" more intuitive?

Expand full comment
Andreas Jessen's avatar

Actually, no. To be honest, I don't think I know what exactly a soul is. Is there a difference between the concept of being conscious and having a soul?

Expand full comment
Shankar Sivarajan's avatar

Not that I can see; they both seem just as devoid of ACTUAL meaning while gesturing at the same (incoherent) idea, but it still seems plausible that one of the terms serves better as an intuition pump.

Expand full comment
MondSemmel's avatar

I had the same problem with this post. It felt like the thought experiment doesn't do any work; the work is all in the insufficiently justified assumptions that the thing in the thought experiment is coherent, conceivable, logically possible, physically possible, etc.

Expand full comment
Andreas Jessen's avatar

That sums it up quite well.

Expand full comment
hwold's avatar

I’m sorry, I know this will sound presumptuous given the prestige of the personalities involved, but I believe that this idea of "P-zombies" is the single most retarded idea in the whole history of philosophy. I just want it to die quietly, with all participants implicitly agreeing that it was too embarrassing to even entertain. It is the number one reason I pay so little respect to academic philosophy as a field -- if you're struggling to clarify this question, really, are you even trying?

So, the basic idea is that consciousness is somewhat "epiphenomeneous" (am I writing that right ?). Meaning: it is affected by the world, but does not affect the world. The way a spectator is affected by a film, but does not affect it. This is either pure nonsense or existential nightmare fuel.

Why are you reading this, dear reader? Because your screen wrote "I’m sorry, I know…". It did so because I pressed the keys "I" and "’" and "m" on my keyboard. That happened because my finger muscles did some action. Ultimately, it comes from *something something* in my brain.

Now I could say that *obviously* my inner monologue and my thoughts are in the causal pathway, so consciousness affects the universe, so it's not epiphenomeneous-or-whatever.

And then you could do the annoying thing and say… well, it's not conclusively proven, that's the whole point of the debate.

Fine.

But then you will have to admit that you have not read the message written by the conscious me. You just read the message that was written by the unconscious-robot-zombie-me. Since the conscious me can't, by definition, write a message in this universe. Symmetrically, any answer you would write to me would HAVE to be written by the unconscious-robot-zombie-you, since the conscious-you can't type anything. It's not the conscious individual David Chalmers who came up with the idea of P-zombies. It's the unconscious-robot-zombie David Chalmers. What does the REAL David Chalmers really think? Nobody can know. He has no hands -- the hands belong to the robot.

The Fun Fact is: I notice that my inner monologue is very aligned with what the robot-zombie is writing with my (or his?) hands. Presumably yours is too. That's the existential nightmare fuel (if it weren't so ridiculous). To square this with that "epiphenomenal consciousness" idea, there are only two ways out:

* My thoughts are not driving my actions; my actions are driving my thoughts. The robot-zombie is deciding on the message, what I do, and why I type; the conscious me is a helpless slave who can only rationalize post-fact what has been written, and delude himself that "yes, it's really me who wrote the message, not the robot-zombie!" Every Real Person is in fact Metaphysically Schizophrenic.

* In fact the inner monologue is part of the robot, I am the unconscious robot, but there Really Is a Real Epiphenomenal Me Out There. He does not have to be aligned with me. In fact, he's probably screaming into the void right now: "stop writing that you're conscious! you're not!"

Expand full comment
Mark Slight's avatar

Totally agree

Expand full comment
The Solar Princess's avatar

I agree that it's an exceedingly dumb idea, and it astounds me that even people like Scott Aaronson use it seriously. To me, the idea of a philosophical zombie is useful because it shows you how not to reason.

Expand full comment
Christina the StoryGirl's avatar

BEST.

LLM-GENERATED OUTPUT.

EVER.

!!!!!!

Expand full comment
uugr's avatar

I agree that the p-zombie is a bad way to get at the question, but the redness of red still seems mysterious. I'm more bothered that nobody seems to have come up with a better thought experiment addressing it.

Expand full comment
Aditya Chandrasekhar's avatar

I think redness of red is a primitive of the brain.

It is not an intrinsic property of an object (or wavelengths of light), since colourblind people exist.

Similarly, the meaning of letters is also not intrinsic. Languages are learned.

So we just learn to associate visual data with 'colors', which may be similar to attributes of an object in programming.
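
A minimal sketch of that "attribute" analogy in Python; the names, wavelength bands, and the idea of keying colors off a single wavelength are all illustrative assumptions, not anything from the post:

```python
# Sketch: "red" as a learned association between visual input and a
# label, not an intrinsic property of the light. All values are
# illustrative.
from dataclasses import dataclass

@dataclass
class Percept:
    wavelength_nm: float  # the physical input, intrinsic to the light

def learned_color_label(p: Percept, categories: dict[str, range]) -> str:
    # The "attribute" comes from the observer's learned mapping,
    # not from the object itself.
    for label, band in categories.items():
        if int(p.wavelength_nm) in band:
            return label
    return "unlabeled"

# Two observers with different learned categories (e.g. a colorblind
# observer collapsing red and green) attach different attributes to
# the same physical input.
trichromat = {"red": range(620, 750), "green": range(495, 570)}
dichromat = {"red-or-green": range(495, 750)}

rose = Percept(wavelength_nm=680.0)
print(learned_color_label(rose, trichromat))  # red
print(learned_color_label(rose, dichromat))   # red-or-green
```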

Expand full comment
Ruby Fuss's avatar

What do you think of Mary the colour scientist?

Expand full comment
Michael Watts's avatar

> "epiphenomeneous" (am I writing that right ?)

Given that the adjective of "phenomenon" is "phenomenal", the adjective of "epiphenomenon" should be "epiphenomenal".

The word is malformed, with "epiphenomenon" being obviously Greek and -al being Latin, but that's also true of "phenomenal".

Expand full comment
Name Required's avatar

IIRC, research suggests that a decision is made slightly in advance of the conscious experience of deciding. So there's a one-way causal link from the entity that generates the report of consciousness (you slightly in the past) to the entity that experiences consciousness (you slightly in the future). Our hypothetical superintelligent observer would need to be told as an additional fact that the universe isn't going to spontaneously cease to exist in 0.1 milliseconds (though it could make a reasonably educated guess via the law of succession).
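
For reference, the rule alluded to at the end is Laplace's rule of succession; a minimal statement (with the hedge that treating "moments of persistence" as Bernoulli trials is my gloss, not the commenter's):

```latex
% Laplace's rule of succession: after s successes in n trials, under
% a uniform prior, the probability of success on the next trial is
% (s+1)/(n+2). With s = n (the universe has persisted at every
% observed moment so far), the educated guess is:
P(\text{persists at trial } n+1 \mid n \text{ successes in } n \text{ trials}) = \frac{n+1}{n+2}
```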

Expand full comment
vriendothermic's avatar

in the context of this argument, consciousness =/= thoughts/inner monologue/agency. consciousness includes consciousness OF these things. the claim that your consciousness is the "real you" is part of a whole separate conversation about the nature of the self. and epiphenomenalism doesn't imply that your consciousness might have thoughts/intentions misaligned with your own: if epiphenomenalism is right, consciousness does not generate thoughts/intentions at all.

Expand full comment
Underspecified's avatar

> My thoughts are not driving my actions, my actions are driving my thoughts

Or both things are driven by a common cause (directly or indirectly), or your observation of a correlation is driven by something other than a causal relationship between actions and thoughts.

I think your theory is pragmatic and parsimonious, but your argument is incomplete. You're missing axioms, and those unstated axioms aren't self-evident to everyone, and that's partly why people still talk about this stuff.

Expand full comment
MarcusOfCitium's avatar

"The robot-zombie is deciding on the message, what I do and why I type, the conscious me is a helpless slave who can only rationalize post-fact what has been written"

I think this is actually not far from the truth... The conscious mind may be kinda like a PR agent. We don't really understand why we do most of what we do, but we're good at coming up with plausible-sounding explanations that make us look good (and we believe them, because it's easier to convince people when you believe your own BS). See "The Elephant in the Brain".

But consciousness is an overloaded term... "Conscious mind" here refers to self-referential thought, which p-zombies would have to do too, just without the experience of hearing an inner monologue or seeing pictures in the mind's eye or anything.

But yeah I agree the whole p-zombie idea is pretty dumb.

Expand full comment
Daniel Muñoz's avatar

This comment is confusing two distinct ideas:

1) The possibility of P-zombies

2) Epiphenomenal dualism

Dualism is the view that the mind is not a physical or material substance (like the brain). *Epiphenomenal* dualists say that the mind cannot causally influence what happens in the brain—it can only *be* influenced.

Your comment is mostly attacking epiphenomenal dualism, which is very much a minority view in philosophy. Most of us are physicalists, and most dualists are “interactionists” (meaning that they think the mind-body causation goes both ways).

None of your objections seem to be about the possibility of P-zombies at all.

But a lot of philosophers would second your point that epiphenomenalism is “nightmare fuel.” Jerry Fodor once said that no mental causation would mean “the end of the world.”

Expand full comment
John OZ 🐢's avatar

The point of p-zombies, as they exist in the philosophical discourse today, is to demonstrate that phenomenal consciousness being necessarily a purely subjective aspect of mind, with no means of experiencing it from the outside, creates problems for a number of different philosophies of mind, especially those of the more physicalist bent: if you think p-zombies aren’t metaphysically possible, why not? And if they are, isn’t it strange that mechanistic philosophy is so good at explaining everything up to and until we reach the subjective horizon? Is there any solution for reconciling mind with matter in some non-ad-hoc way?

It’s not meant as a 1:1 argument that belief in the subjective consciousness of others isn’t in fact a fundamental part of how we think about the world, or that it isn’t a necessary condition for assignment of agency in a general Kantian sense.

Expand full comment
Bram Seynnaeve's avatar

"If the p-zombie says this - “My reportable mind receives the color information as a 2D grid in which each pixel conveys a irreducible sudden intuitive sense of being the correct color” - then what’s the difference between that claim versus “I experience the mysterious redness of red”?"

Err, doesn't the non-conscious part of our own brain do data processing like this all the time, without us experiencing any related qualia?

I'm sure there are better examples, but e.g. when you're driving a car and your unconscious mind calls attention to something at the edge of your vision (without any related qualia), causing you to look and then indeed notice (with qualia included) that there was something there, a biker or something.

Expand full comment
Matthew McRedmond's avatar

P-zombies is a thought experiment analogous to removing the origin from a plane and seeing what topology arises

Expand full comment
Shankar Sivarajan's avatar

You're sanewashing the idea. Your analogy sounds like the kind of thing that leads to non-Euclidean geometry. A better analogy is declaring that there exists a mysterious point that isn't part of the plane and doesn't affect it in any way, and then asking what the consequences would be of removing THAT point.

Expand full comment
Matthew McRedmond's avatar

Define “sane washing” in this context please 🙏

I considered your “better analogy” to be a member of my defined class of analogies. Obviously that is not the case from an outside view 🙂

Expand full comment
Shankar Sivarajan's avatar

Maybe I'm overthinking the analogy, but the idea of removing the origin from a plane and seeing what structure emerges from that struck me as prima facie a reasonable thing to do, reminiscent of the study of rngs (that's "rings" without the identity "i").

The analogy for p-zombies where you think about the consequences of removing something that by assumption doesn't DO anything seems patently stupid to take seriously, and I wanted something that evokes that.

Expand full comment
Matthew McRedmond's avatar

Ah ok, would my analogy be acceptable if I specified that it’s like removing an arbitrary point on the plane? It seems, however, that p-zombies are at a minimum widely recognised as a thought experiment in philosophy, so it might perhaps be like removing some named point on the plane like (4,2), but I think the math is just as interesting either way.

Expand full comment
Dave Ward's avatar

Plus one to the 'this is like stuff that Dennett says' (see e.g. his first ever paper 'Machine Traces and Protocol Statements') - insofar as Dennett argues that 'zombies' as conceived by Chalmers would sincerely report q-properties due to their functional isomorphism to us. Andy Clark's paper 'a case where access implies qualia' is a nice, simple presentation of this Dennett-y strategy, that gets a predictive-processing makeover in his 'consciousness as generative entanglement'.

The difference is that Dennett and allies think (unlike you) that once we've explained a system's propensity to give sincere reports that there's 'something-it's-like' for them to be sensing the world in a particular way, that's pretty much all there is to explaining 'qualia'. Chalmers and other qualia fans disagree - we need to explain the real qualia that are lurking behind the dispositions to report! Chalmers covers some of this ground in his paper 'the meta problem of consciousness'.

Expand full comment
Tedd Hadley's avatar

Great overview. Dennett's view is so attractively parsimonious. Yet glorious inner experience surely can't be just information flow, can it? I keep waffling. Michael Graziano goes further and identifies the "qualia" architecture with his attention schema theory (most specific and detailed effort of any comparable work in my view) -- and also sees the hard problem vanishing.

Expand full comment
MarcusOfCitium's avatar

I'm pretty much on board with Dennett, but... I get why people say he's side-stepping the problem rather than answering it. The problem for me is: he says it's basically self-referential thought etc... But it seems to me that direct perception is prior to thought. Babies, all vertebrates, maybe even insects, still presumably have visual sensations, feeling of hot/cold, pressure, etc.

Been told not to quote GPT, but it tells me there is a middle ground that accepts there's a real but non-magical, pre-reflective layer of experience that isn't identical to reportability or introspection, and it's held by people like Thomas Metzinger, Anil Seth, Evan Thompson and Francisco Varela (and even David Chalmers on a good day). Will maybe want to look into them some more.

Expand full comment
The Solar Princess's avatar

Basically: qualia are the atoms of computation; they feel like they have intrinsic non-reducible qualities for the same reason an LLM would see a token as having an intrinsic non-reducible quality.

But either way, I think the Buddhists were basically right about this discussion, and you can intuitively dispel many (not all) confusions by studying Buddhism.

Expand full comment
Mykhailo Odintsov's avatar

Aren't P-zombies *by definition* indistinguishable from the outside? So they will report just as much qualia as "real" humans; some will argue it exists and they possess it, and some will argue it's a non-thing, or that even if it's a thing it's a non-issue, in the same probability distribution as their non-p-zombie counterparts.

Expand full comment
Dewwy's avatar
2d · Edited

>And they wouldn’t answer “IDK, my mouth just moved and formed the syllables ‘this is red’”. Normal humans can easily tell the difference between a voluntary action and an involuntary spasm

Aren't P-zombies by definition *all involuntary action, all the time*? Because they have no volition; in these arguments you need to be conscious to have volition, no? If there is no there there, there is no there to decide to there, y'know.

If a p-zombie tells you about the redness of a rose it does it for the same reasons a rock rolls downhill.

Sure, for lack of better terms, the "voluntary" and "involuntary" actions of P-zombies would be different actions; some different kind of real cause is happening, signals along different neurons. But it stops making sense to call them voluntary or not.

I appreciate that this is tortuous, tbh; you'd have to describe many things p-zombies do as "p-voluntary" and "p-involuntary" to distinguish them from normal voluntary and involuntary, and I don't think it really impinges on the argument here. But it's easy to lead yourself astray.

Expand full comment
10240's avatar
2d · Edited

You seem to smuggle in an assumption that p-zombies can exist. I'd argue that consciousness/qualia *is* the ability and act of processing inputs in (certain) complex ways, and of recursively thinking about one's own thoughts (the latter perhaps only needed for having philosophical views on consciousness, rather than for having consciousness at all), (EDIT:) perhaps also memory, in order to have a long-term, continuous experience. And if something processes its inputs in a similar way, it's conscious and not a p-zombie.

In particular, if it has a conceptual primitive for redness, and is otherwise capable of complex thought similar to humans, then it has an experience of redness similar to us. (If it has similarly complex thought, but it *doesn't* have conceptual primitives for colors, then it doesn't, I guess similarly to a monochromatic colorblind human (who probably does have built-in primitives for colors, but they never get activated), even though they can both learn about colors as physical concept.)

EDIT: You're probably aware of this argument/possibility, and don't mean to assert that it's false in this post. But there are many sentences that start with something like "a p-zombie would", and then you seem to draw a conclusion that only follows from those sentences if p-zombies are possible.

Expand full comment
Man-of-war's avatar

People with a condition called blindsight can respond to visual stimuli yet report no visual perception. Does this count?

From Wikipedia:

https://en.wikipedia.org/wiki/Blindsight

Blindsight is the ability of people who are cortically blind to respond to visual stimuli that they do not consciously see due to lesions in the primary visual cortex.

... Patients with blindsight have damage to the system that produces visual perception (the visual cortex of the brain and some of the nerve fibers that bring information to it from the eyes) rather than to the underlying brain system controlling eye movements. The phenomenon was originally thought to show how, after the more complex perception system is damaged, people can use the underlying control system to guide hand movements towards an object even though they cannot see what they are reaching for. Hence, visual information can control behavior without producing a conscious sensation. This ability of those with blindsight to act as if able to see objects that they are unconscious of suggested that consciousness is not a general property of all parts of the brain, but is produced by specialized parts of it.

...Prior studies have already shown that blindsight patients are able to detect motion even though they claim they do not see any visual percepts in their blind fields.

...Furthermore, blindsight subjects rarely express the amazement about their [related to vision] abilities that sighted people would expect them to express.

Expand full comment
Kamran's avatar

Yes. I think the problem with p-zombies as a thought experiment is they ARE possible, in that a living breathing human can functionally survive in the world while lacking every category of experience composed of qualia simultaneously. But in actually existing, they would not participate in the behaviors based on thinking about one's thoughts, and therefore would not be fully indistinguishable from a "typical" human -- in the same way a person with aphantasia or who lacks an inner monologue cannot report the products of these particular faculties if you ask about the specifics, but can only use the typical verbal forms employed in reporting such experiences idiomatically. Eg, "I see what you mean."

I wouldn't be surprised if we've all bumped into one or two such people at the grocery store, but it seems unlikely they'd find themselves reading this sort of blog. This place is more for the kind of person who is endlessly fascinated by the contents of their own mind.

Expand full comment
Benjamin's avatar

I highly recommend checking out "On Alien Philosophy" by R. Scott Bakker (https://sites.evergreen.edu/ftsm/wp-content/uploads/sites/307/2015/12/Bakker-On-Alien-Philosophy-copy.pdf).

Basically which "hard" philosophy problems naturally arise from minds and sensory capabilities not being adapted to solving them. E.g. "things being alive", free will, consciousness, heliocentrism. We are not adapted for seeing small things or seeing things far away or accurately seeing what our minds are doing. Basically there are good reasons for why P-zombies claim they have qualia.

Once we develop the scientific capabilities to do the investigation, the problems fall away. Bakker has another nice paper on perception, and I would guess he is a somewhat advanced meditation practitioner or something.

Expand full comment
Macrofauna's avatar

You know what else talks like a human, answers questions the way a human would, makes self-descriptive noises of being human, but in some important sense isn't human internally?

A telephone.

I propose a P-zombie might be like a telephone: it's conveying the utterances of a human who does have qualia.

Much of the talk about philosophical zombies is what I'd describe as sneaking up on a contradiction. "By stipulation, zombies lack some X (qualia, consciousness, other mental fact). By stipulation, zombies are externally indistinguishable from humans and perform humanlike activities about X (self-reporting qualia, arguing about consciousness). Ergo,"

And then the two usual approaches are denying zombies because they constitute a genuine contradiction or an information-theoretic cheat of generating information about X ex nihilo; or denying physicalism so something non-physical can be generating the information and resolve the near-contradiction.

I think a telephone is a third approach, where zombies can be real without generating the information themselves, because they're relaying it from the humans that zombies are duplicates of.

Expand full comment
uugr's avatar

IIRC, this was something like Yudkowsky's argument - or at least I remember him making one like this, maybe in response to the Chinese Room. That for such elaborate and detailed descriptions of qualia to arise, we should expect there to be a qualia-having physical entity *somewhere* in the information chain.

Expand full comment
Robert Alan Bevan's avatar

The following is all from my intuition and doesn't necessarily relate to the topic directly, but may be helpful. Here are some traits of consciousness I find intuitively-resonant:

(1) It defies easy verbal description.

(2) It defies easy noticing or first-person synaesthetic abstraction; I can't hold it in my mind easily; I envision a "sheet of glass in front of my eyes which makes images real". Some people have seemingly not deeply noticed their own consciousness, and confuse it with thinking in words. Hence 4chan's famous NPC meme, where people were aghast to discover that some people don't usually self-reflect verbally, leading to some stating that abstract synaesthetic thoughts are not conscious. I suspect this may have been a self-own by alexithymic 3's and 4's:

https://twistedsifter.com/wp-content/uploads/2023/10/AppleVisualizationScale.png

(3) It can have information rendered unto it.

(4) Some of the mind's processing, but not all, is rendered unto it. It is possible that some individual types of processing may sometimes be rendered unto consciousness and sometimes not, whether due to degree of activation (subconscious thoughts as merely "low-energy thoughts") or due to a change in the system (drugs making things like "bodily-held psychic trauma" directly palpable). Things like cardiac impulses are hopefully never consciously-perceptible.

(5) It appears to render some information of its existence back unto the mind (barring the possibility of the "exceptional coincidence" mentioned in the article).

(6) It seems to make ethics consequential and real.

(7) It can distinguish hedonic "good and bad" separately from neutral information.

I like the idea that it is a feedback loop where multiple levels of informational meta-processing are confused for one another, leading upwards to a transcendental or ephemeral feeling: I confuse the rose with my perception of the rose with my perception of my perception of the rose, etc., and glom these together into one ephemeral beast. I feel like greater intellectual power ("being in the map") would decrease this effect - it reminds me of the feeling of "being in the territory", wet sweaty rain-soaked clothes, feverish coldness in the face, all hair and jungle; there is a distinct novel synaesthetic flavor to this kind of experience, made purely of direct 1st-level experience itself and not the surface components, even if less conscious knowledge is present. Those surface components - hair, mud, Charlie in the treeline - would be 2nd-level experience, as "awareness" necessarily seems to mean deliberate processing of what has already been processed unawarely, which is 1st-level experience. When you're in the jungle, the purified, objectless, 1st-level experience-processor itself becomes a palpable object and processable as a piece of 2nd-level awareness, because you're not allowed to get lost in 3rd- and 4th- level whining and self-reflection. This may present itself as a state of "hyper-consciousness" or even its opposite, "no-consciousness".

Also, not necessarily related: the ego (a low-resolution attempt to identify "the most core kernel of this mind") holds "the experiencer behind the mind" as the core of itself, and claims that its only plumage is its frivolous history and the things it wears physically. The ego pretends to be, at its core, consciousness, rather than merely wearing consciousness. I disagree - I feel that it is wearing consciousness. I feel that perception (prior to mental processing) also falsely pretends to be consciousness in some way.

Expand full comment
Aristides's avatar

I’m surprised LLMs weren’t mentioned. They seem to be the best candidate for a real-life p-zombie, and they certainly have the same conversations on qualia that humans have. I am fairly certain that most computer scientists consider AI not conscious, but I have trouble explaining why LLMs are not conscious but we are. What question could you even ask to distinguish us?

Expand full comment
uugr's avatar

They've certainly been mentioned in this comments section!! (But I thought computer scientists were split on whether or not they were conscious? Am I misremembering?)

Expand full comment
Aristides's avatar

They are split, but my impression is about 80-20 not conscious to conscious. There is oddly even more resistance to sentience, even though there seem to be many more sentient things than conscious things in the world.

Expand full comment
Shankar Sivarajan's avatar

Does the notion of "sentience" as you conceive it admit the same kind of question as p-zombies about "consciousness"? I.e., can you have something that behaves as though it were sentient, but isn't?

Expand full comment
Aristides's avatar

I think so, but biologists seem to be expanding sentience to nearly every animal. My biology knowledge is really low

Expand full comment
Dustin Crummett's avatar

Brian Cutter and I proposed a solution to the "baffling coincidence" here--though I suspect you won't like it!

https://philarchive.org/archive/CUTPHA

(See esp. sec. 2.3.)

Expand full comment
Amos Wollen's avatar

^^

This is the arg. that made me a theist — it’s one of the non-Tegmarkable arguments for theism Matthew mentioned. Highly recommend

Expand full comment
Shlomo's avatar

Yudkowsky's claim also fails if you imagine the superintelligence came on the scene before humans existed. Since animals possess qualia, its inability to derive this would mean they are extra-physical.

Expand full comment
Matthias Görgens's avatar

I'm not quite sure why you need such a long argument?

P-Zombies behave exactly like humans. Part of human behaviour is claiming to have qualia (or making the noises that sound like someone saying they claim to have qualia.) Ergo, p-zombies would claim to have qualia.

Or what am I missing?

Expand full comment
Taleuntum's avatar

These P-zombies are not the usual p-zombies because of the second difference

Expand full comment
Laplace's avatar
2d · Edited

I think what you're describing here is not refuting Yudkowsky's bolded quote, and isn't even really in conflict with it. You're noticing that the idea of a mind that can think as well as a human can, and processes information roughly like a human, and yet doesn't have qualia, isn't very coherent on closer inspection.

The qualia aren't some extra patch slapped on top of human cognition that you can take out without changing anything else. Either your p-zombie doesn't have qualia and will not, in fact, be able to carry out various intellectual tasks as well as a human can. Or you'll give it back the intellectual faculties to do abstract reasoning the way humans do, and inevitably give them the qualia along with that.

'It still seems like, even if the p-zombie is using an inner encoding scheme in which red is represented by a conceptual primitive, they still aren’t “experiencing” the mysterious redness of red, just . . . I don’t even know how to end this sentence. Just using an encoding scheme that matches it perfectly and causes them to describe it the exact same way that we do?'

My position, and I think also Yudkowsky's, is precisely that this intuition just isn't true. Your ignorance of how those human algorithms for encoding and processing data efficiently would actually really work is hiding all the mechanistic generators of 'qualia' and 'experience'.

Expand full comment
Nicolas D Villarreal's avatar

Yud's premise is fundamentally flawed because once you go from units of atoms to the conceptual units that humans have, you'll pretty much get "qualia" in terms of phenomenal experience of those concepts. He just doesn't get this because he thinks atoms are a natural unit and not themselves developed as an abstraction.

Expand full comment
JohanL's avatar
2d · Edited

The more direct conclusion here might be that p-zombies are impossible on practical grounds: since they must appear human, including what happens physically in the brain, an absolute requirement would be that the conscious mind and qualia are pure epiphenomena and serve no _purpose_, merely scum floating on the water. Because if the mind _did_ something causally meaningful, the p-zombies _wouldn't_ be doing the same things as humans given the same brain states. (It gets even weirder in that we must somehow imagine that the exact same brain states might or might not result in consciousness.)

So if p-zombies are possible, the mind is pointless and is not involved in any causal behavior. And _therefore_, when even a human reports qualia, this must be unrelated to actually experiencing qualia (as conscious experiences can't be involved as a causal agent in actions). Hence, not only could p-zombies report qualia, they would do so on the exact same grounds as humans (as the presence of a conscious mind with "actual" qualia (whatever that means) is irrelevant).

This all seems unlikely. Surely we would expect something as complex as the mind to serve some kind of _purpose_, rather than being a pointless waste? Rather, wouldn't we imagine that one reason we have minds is that the brain creates various forms of mental representations and then actually _uses_ them for something? P-zombies would then be impossible in two ways: first, they couldn't help but be conscious, since they have the brain states that result in consciousness (this, of course, assumes physicalism), and they _also_ couldn't imitate humans if they didn't have access to the mind and the purposes it serves (as the mind does stuff). And we're not making qualia-believers happy by saying "your reports of qualia are not related to you _having_ qualia, even when I grant that you have them".

Essentially, perhaps unintuitively, your premise "Second, they don’t necessarily report having conscious experience" might be _impossible_ once we grant the possibility of p-zombies. Because _if_ p-zombies are possible, then the mind is irrelevant for actions, and then how could the p-zombies possibly _fail_ to report consciousness given that they have the same brain-states as humans do? It would mean that the _absence_ of mind creates different behaviour even though the mind is irrelevant for behaviour. So your class of p-zombies might be *even more* impossible than the regular kind.

"If you don’t need consciousness to claim to have qualia [...]"

Is this really what you meant to write? This seems trivially and uninterestingly true - _obviously_ even super simple AIs that no-one would ascribe consciousness can _claim_ to have qualia. It's just text output. I imagine your claim is stronger, but perhaps in a way that's hard to put into words for p-zombies (like being grounded in actual processing rather than just being empty words?). For instance, if human and p-zombie claims for having qualia have the exact same _grounds_ (unrelated to experiencing qualia), now we're talking!

Expand full comment
JohanL's avatar
2d · Edited

Or shorter: If p-zombies are possible, then physicalism is incorrect. But if p-zombies are possible, then _also_ the mind must be causally irrelevant (epiphenomenalism). And further, the difference about possible reports of qualia you posit for your variant p-zombies must arise out of identical brain-states but _without_ involving consciousness.

This seems like a very strange philosophical place.

Expand full comment
MarcusOfCitium's avatar

Even shorter: Suppose physicalism is incorrect; ergo physicalism is incorrect! Ok, maybe too short to be charitable, but still...

Expand full comment
Anonymous's avatar

Do humans talk about qualia of experiencing 5D space? No, as they don't get to experience it. They do talk about properties of 5D space and objects in 5D space, but not about qualia of 5D space. So, I don't expect P-zombies to report something which for them doesn't exist and is beyond their imagination.

Expand full comment
Nicolas D Villarreal's avatar

If you understand meaning from a structuralist perspective, the problem of qualia immediately goes away. What I mean is that structuralism/semiotics focuses on the units of meaning, on what makes a signal a signal, specifically that it is differentiated from every other signal. What makes red red? It's that we've correlated the concept to a specific range of the color wheel/light spectrum that our eyes can detect, contra all the other colors we name. Now, most humans can detect light in three dimensions (RGB), so that specific signal has a sort of richness and specificity that the word red lacks. Someone with colorblindness would have a different quale of red than someone without, because the underlying continuum of the signal, its dimensionality, would be different. But fundamentally, even if you existed in a world without light, you could experience analogous qualia so long as you could represent a 3-D characteristic of spatial objects as part of the signals you experience. If you have that dimensionality of the signal and an identical structure of color concepts, then from a cybernetics/information-theory perspective, you must be experiencing the same thing.

Expand full comment
uugr's avatar

I'm not sure this erases the problem, because it doesn't answer why *that particular* quale derives from the red wavelength, rather than a wavelength around the blue end of the spectrum, or the sound of a piano or something.

I think there's a school which would argue that the thing we're calling "qualia" is located purely in difference, and any appearance of a self-contained 'redness' in the absence of 'blueness' or 'greenness' is false. Maybe this is true, but it seems very unintuitive to me.

Expand full comment
Nicolas D Villarreal's avatar

It is located purely in difference; this is what semiotics and cybernetics tell us about how signs and signals work. If we didn't have a concept for orange, red would include the wavelengths for it and all the associated orange objects, or at least split them with yellow. The connection between a sign and its content, its signified, is fundamentally arbitrary.
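
A toy Python sketch of that claim; the centroid wavelengths are made-up values, and nearest-centroid classification is just one simple way to operationalize "meaning as difference":

```python
# Sketch: the extension of "red" is fixed by which OTHER categories
# exist, not by anything intrinsic to the band. Centroids (nm) are
# rough illustrative values.
def classify(wavelength_nm: float, centroids: dict[str, float]) -> str:
    # Assign the input to the nearest named category; a sign's
    # content is defined only against the other signs in the system.
    return min(centroids, key=lambda name: abs(centroids[name] - wavelength_nm))

with_orange = {"yellow": 580.0, "orange": 610.0, "red": 680.0}
without_orange = {"yellow": 580.0, "red": 680.0}

print(classify(640.0, with_orange))     # orange
print(classify(640.0, without_orange))  # red: "red" absorbs the band
```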

Expand full comment
Nir Rosen's avatar

Most people don't notice qualia for most of their experiences. It is mostly just background noise.

Expand full comment
Adrien's avatar
2d · Edited

Maybe the simple solution to the p-zombie problem is that p-zombies are not possible.

If you can make the claim “something can act *exactly* like me and not have consciousness”, then it’s not looking good for *you* having consciousness.

Because then what does it rest on? “Trust me bro I have consciousness”?

It seems more likely that “something cannot act *exactly* like me and not have consciousness, if I have consciousness myself” is true.

And I say *exactly* to mean all your conversations, all your behaviors. But presumably the whole question rests on the fact that we can’t see inside the mind. So thoughts are not included.

Expand full comment
JohanL's avatar

Epiphenomenal consciousness - a mind that doesn't do anything productive and is a mere irrelevant side-product of the brain's real job - could still be possible, but most people don't like this either.

Expand full comment
Adrien's avatar
2d · Edited

Yeah, it seems unlikely to me.

It's one thing for this epiphenomenal consciousness to not do anything productive, but does it influence the brain that's hosting it at all?

If not, then the brain that creates this consciousness as a side effect is likely to not act conscious, talk about consciousness, etc. (because it itself is not conscious, since this epiphenomenal consciousness is not influencing it).

So what we have is basically "I'm not conscious but I have a blob on my head that is", and that would act very differently from a conscious being, in my opinion.

If yes, well then it really just sounds like a conscious mind. So again, it counters the P-Zombie argument.

Expand full comment
astine's avatar

This strikes me more as an argument against p-zombies as a concept than as an argument that they would report something that they don't have. You just created a physical account of something epiphenomenalists believe can't be accounted for with physical explanations. The real question is, would these p-zombies insist that their 'reporting mind' is evidence that they have non-physical minds?

Expand full comment
Olivia Roberts's avatar

Read the footnote, and sorry if others have already mentioned this: I think this is pretty much exactly what Chalmers says when he first introduces p-zombies. By hypothesis their world is physically identical to ours, and speech (as well as belief, according to Chalmers) is a physical state. This is why Chalmers is an epiphenomenalist, to my knowledge. It’s also why I kind of think illusionism and epiphenomenalism are the only two reasonable options when it comes to qualia (side note: I would characterize illusionism not as the view that we lack conscious experience, but that we are vulnerable to mischaracterizing its nature).

So, I’d have to pull out The Conscious Mind to check, but yes, I think you’re exactly right, and in fact I think this is pretty much baked into the argument.

Expand full comment
MarcusOfCitium's avatar

I thought this was basically implied if not explicitly stated by the thought experiment. I thought the whole point was that there was no way a zombie could be distinguished from a non-zombie: not dissection, not MRI, and not questioning. Which would mean of course they would have to report the same experiences of “qualia”, while having the same brain activity. Which seems to necessarily imply epiphenomenalism. The important question is: is that actually possible, or even a coherent thing to posit? My suspicion is that it isn’t, and we just don’t know enough to explain why. Just like in the pre-scientific age (and to many still today), it seemed plausible that certain people could have magical powers, or that there could be ghosts (and they could interact with the physical world). To which believers might say: you’re just speculating, you can’t prove that. And I’d say: sure, it just seems vastly more reasonable than thinking the entire scientific worldview needs to be overturned in favor of an unspecified alternative, for which the only evidence is that there’s something for which you personally find it hard to imagine a scientific explanation. Classic God of the gaps / argument from ignorance, even if you’re not actually using it to argue for the existence of God.

Expand full comment
apxhard's avatar

I agree that p-zombies, if they could exist, would report conscious experiences. Also agree that 'we claim to have consciousness and actually seem to have it' is either a really weird coincidence, or caused by something real. Given the fact that I have more evidence of my own consciousness than almost any other phenomena, it seems silly to me to dismiss it as not real.

Can you rescue both intuition and a physicalist understanding if you posit something like, "Qualia are caused by computation. Two systems performing identical computation experience identical qualia"?

I could easily write a Python script that spits out the phrase "I have consciousness", but this is quite a ways different (in terms of computation) from an LLM that's been trained on the navigation of an agent through, say, a multi-user dungeon, which encodes a model of an agent and its environment. In that case, predicting the next token in an input stream isn't architecturally that different from the predictive processing model: you'd just need a few layers of LLMs trying to predict the inputs of the layers below, and a 'handshake' mechanism to direct attention (and learning updates) to the discrepancies.
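
For concreteness, the trivial end of that spectrum really is a one-liner; on this view, whatever matters for qualia has to live in the difference between this computation and the LLM's, since the output strings can match:

```python
# The entire "computation": one print statement. Any account tying
# qualia to computation must distinguish this from a system that
# emits the same sentence via a rich model of itself and its world.
print("I have consciousness")
```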

We're still punting on what consciousness _is_, but if it has some relationship to the physical world, it seems to me that this relationship manifests in the computational processes happening in our brains. If we drop the idea that brains are special, it seems like we've rescued both things we want: the natural intuition that we have consciousness, and that we think we do because we do.

So maybe this leads to a position that a p-zombie is "something using a very different computational architecture to produce more or less the same results." But then I'd posit: hey, maybe sociopaths are almost like p-zombies: they're running different computations than the rest of us, and as a result experience different consciousness.

Expand full comment
JerL's avatar

"Can you rescue intuition and a physicalist understanding if you posit someone like, 'Qualia are caused by computation. Two systems performing identical computation experience identical qualia"

This is basically my view, but in fairness to the dualists and pan-psychists, I think this sweeps all the difficulty onto the question of, "what does it mean for two systems to perform an identical computation?", and maybe also "what is it about that notion of identical computation that leads to this fact about qualia?"

Like, in reference to your LLM example, people have already pointed that it's an interesting question *at what stage* the relevant computation is being done? At training time? At inference time?

To some extent, this is just the Chinese room again: the non-trivial computation can be done ahead of time, but at inference time the "computation" is basically just a big lookup table. Is that "the same computation"? If so, when/where does qualia happen?
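
A toy Python version of that worry, under the simplifying assumption that "the computation" is a pure function on a finite domain:

```python
# The same input-output behaviour, produced once by live computation
# and once by a table precomputed "ahead of time".
def live(n: int) -> int:
    # non-trivial work at inference time
    return sum(i * i for i in range(n))

# Shift all the work to "training time": inference becomes a lookup.
table = {n: live(n) for n in range(1000)}

def cached(n: int) -> int:
    return table[n]

# Behaviourally identical on this domain; whether they are "the same
# computation" is exactly the question raised above.
assert all(live(n) == cached(n) for n in range(1000))
```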

My inclination is that true p-zombies *are* incoherent, but I think the other side has a good point: no one has a very good account of the actual relationship between computation and qualia, nor even much of a sketch of what such a relationship could look like, except in very abstract terms that don't (IMO, but I'm definitely not all that well-read on this topic) seem to accord with intuitions in simple edge cases. Until such an account emerges, you should keep some probability mass on the other possibilities.

Expand full comment
Jonathan Lafrenaye's avatar

This reminds me of the story of the guy with anosmia from https://slatestarcodex.com/2014/03/17/what-universal-human-experiences-are-you-missing-without-realizing-it/. If a (presumably) non-p-zombie can lack a sense, not notice it themselves, and act in a way where no one else notices it either, then why wouldn't p-zombies do the same thing with qualia?

Expand full comment
Alan Smith's avatar

I don't think this is low-effort, and if so I apologise. But I don't see why we care. Like, what possible significance could this have to any actual discussion in any reality outside of philosophy circles? (Which is not saying you shouldn't talk about it; you can talk about whatever you want. I'm just expressing confusion as to why this is attracting discussion at all, by anyone.) I get the appeal of talking utter nonsense sometimes, but the impression I've gotten from the language you use here, and my own informal observations elsewhere, is that these aren't just light-hearted garbage debates.

You may as well talk about the geometry of unicorn horns, or the shade of dragon scales, or the real estate laws of Atlantis, or the social structures of Martians.

Heck, talking about fictional characters or animals is arguably more significant, because then at least we're talking about actual works of fiction or mythology. This is positing something which by definition we cannot prove or disprove as existing or not, and then delving really deep into the details of how this thing may or may not function, with the caveat that it's impossible even in theory to know what is right or to infer any more general rules of reasoning from it.

Expand full comment
Bugmaster's avatar

Yep, all of philosophy is like this. But then, so are most movies, and people still have fun watching movies sometimes.

Expand full comment
Skull's avatar

We want to know what consciousness is; it's the most important thing in the universe. If you have a mysterious concept you need to explore, you need experiments like this to help build the foundation of understanding. All thought experiments are like this.

Expand full comment
Bugmaster's avatar

I don't think that thought experiments are "experiments". They're just a mental technique that can help you to design an actual experiment. And until you've performed that experiment (ideally more than once), all you've got is a clever and imaginative hypothesis, not any actionable knowledge.

Expand full comment
Deiseach's avatar

Do bees have a concept of the colour they see? Apparently they can't see red, so they would not recognise the rose as being red, but they can see further into the ultraviolet than we can.

So are bees conscious, if we're judging by "how do you report your experience of vision?" If a bee judged us by what we see (and don't see) when looking at flowers, would they say we are conscious?

https://www.youtube.com/watch?v=3Sb0LcNF1WI&t=34s&ab_channel=EDINBURGHHONEYCO.

https://beeculture.com/bees-see-matters/

"Like us, bees are trichromatic. That means they have three photoreceptors within the eye and base their color combinations on those three colors. Humans base their color combinations on red, blue and green, while bees base their colors on ultraviolet light, blue and green. This is the reason why bees can’t see the color red. They don’t have a photoreceptor for it. They can, however, see reddish wavelengths, such as yellow and orange. They can also see blue-green, blue, violet, and “bee’s purple.” Bee’s purple is a combination of yellow and ultraviolet light. That’s why humans can’t see it. The most likely colors to attract bees, according to scientists, are purple, violet and blue.

Bees also have the ability to see color much faster than humans. Their color vision is the fastest in the animal world-five times faster than humans. So while we may have trouble distinguishing one flower in a group from another, bees don’t. They see each individual flower. Some flower petals appear to change color, depending upon the angle. This is known as iridescence. It’s often in the UV spectrum, so we can’t see it. But, bees can. They see these shiny petals and associate them with sugar. Thus, the flower becomes more attractive to the bee and gets pollinated."

Expand full comment
Evariste's avatar

My take is: nothing but qualia exists.

To elaborate, I think that "to exist" means two different things when applied to qualia and to everything else. When we say "this table in front of me exists" or "Paris exists" or "Unicorns exist", those are fundamentally probabilistic statements, i.e. we would update on them when presented with appropriate evidence. If we were sure that a table exists with probability 1, then we would not be able to update on this locked prior, even if someone told us they replaced the table with a nifty hologram and waved their hand right through it. In contrast, the existence of qualia is an undeniable (the only undeniable?) thing, and that is fine: if we were ever presented with evidence that our qualia do not exist (i.e. with the nonexistence of our qualia), then there would be no conscious observer to care about that. So the existence of qualia is undeniable and unfalsifiable, while the existence of a table is deniable and falsifiable.

Moreover, the existence of a table is derived from our qualia. Even if we do not see it directly, but use a complex scientific apparatus to find it as a complicated pattern in a bunch of numbers (like we do to determine the existence of some elementary particles), we still need to see the numbers on a computer screen, and that requires qualia. So, I call the existence of qualia primary existence, and the existence of everything else secondary existence.

Note that the argument above holds even if we talk about objective reality instead of tables. After all, if what we see decohered from an orderly perception of reality into random colours and noises, it would be wise to update toward the non-existence of reality, so we should not have that prior locked at one.

So my take is: qualia are more primary than science or objective reality (after all, we infer their existence from our qualia), so the latter should not get the final word on whether qualia exist. In a sense, qualia are synonymous with our (primary) existence at all: without them there would be nothing, no alternative Earth with p-zombies, just an empty mathematical structure describing the positions of all atoms, with nothing to run on.

By the way, when I encountered the ancient Greek thought that "Mind and Being are one and the same", I interpreted it in exactly the same way: qualia are synonymous with our primary existence. Though I admit I could be grossly misinterpreting their thought.

Expand full comment
Pelorus's avatar

This idea has a rich history. The scientist Arthur Eddington's On the Nature of the Physical World had this to say, about 100 years ago:

"The only subject presented to me for study is the content of my consciousness. You are able to communicate to me part of the content of your consciousness which thereby becomes accessible in my own. For reasons which are generally admitted, though I should not like to have to prove that they are conclusive, I grant your consciousness equal status with my own; and I use this second-hand part of my consciousness to 'put myself in your place'.

"Accordingly my subject of study becomes differentiated into the contents of many consciousnesses, each content constituting a view-point. There then arises the problem of combining the view-points, and it is through this that the external world of physics arises. Much that is in any one consciousness is individual, much is apparently alterable by volition; but there is a stable element which is common to other consciousnesses. That common element we desire to study, to describe as fully and accurately as possible, and to discover the laws by which it combines now with one view-point, now with another. This common element cannot be placed in one man’s consciousness rather than in another’s; it must be in neutral ground—an external world.

"... All I would claim is that those who in the search for truth start from consciousness as a seat of self-knowledge with interests and responsibilities not confined to the material plane, are just as much facing the hard facts of experience as those who start from consciousness as a device for reading the indications of spectroscopes and micrometers."

Expand full comment
Raphaël Roche's avatar

Thank you for this quote of Eddington. While his reflection has probably nothing to do with quantum mechanics, I think that's exactly the way we can envision a consistent intersubjective reality approaching objectivity despite superposition/many worlds.

Expand full comment
Kamran's avatar

Another interpretation of "nothing but qualia exist" is a variant of panpsychism. That is, there is a way it is like to be every spatiotemporal structure from the inside. What varies is the complexity of the strange loop (to use Douglas Hofstadter's term). The connections between structures determine the shape of information transfer, thus creating ways it is like to experience a given structure from the outside. But there is no mind-matter dualism, because all matter is composed of inner and outer qualia accounting for every determinant of its physical properties.

Expand full comment
Scott Aaronson's avatar

What fascinates me is, "whether the p-zombies would talk about qualia" is one of the few cruxes in the consciousness debate that might actually be amenable to empirical investigation, and we're now at the first point in history where some of the relevant experiments might be feasible. Train a frontier LLM on a corpus from which all discussions of qualia, sentience, consciousness, etc. have been meticulously scrubbed. Then try to explain your own conscious experience to the LLM, and engage it in a conversation about whether it has any similar inner experience. How does it typically respond? I'm not sure, but whatever the answer, I'm pretty curious to know it!
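
As a very rough sketch of what the scrubbing step might look like (document-level keyword filtering in Python; the blocklist is illustrative, and as a comment below notes, keyword filtering alone almost certainly under-scrubs):

```python
import re

# Illustrative, deliberately incomplete blocklist of
# consciousness-adjacent vocabulary.
BLOCKLIST = re.compile(
    r"\b(qualia|quale|sentien\w*|conscious\w*|subjective experience"
    r"|phenomenal\w*|what it.s like to be)\b",
    re.IGNORECASE,
)

def scrub(corpus: list[str]) -> list[str]:
    # Drop the whole document on a single hit, rather than risk
    # leaving suggestive context around a redacted word.
    return [doc for doc in corpus if not BLOCKLIST.search(doc)]

docs = [
    "The spectrometer reads 680 nm for this sample.",
    "But what about the subjective experience of redness?",
]
print(scrub(docs))  # only the first document survives
```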

Expand full comment
machine_spirit's avatar

In the book "Blindsight" by Peter Watts (which is an excellent SciFi-horror about this topic in general) the crew of the human spaceship is trying to detect whether the Aliens are sentient based on series of questions. I think someone should totally do something similar with LLM.

Expand full comment
Skull's avatar

I think the whole point of the concept is that there is no series of questions you could ask that would elucidate whether or not you're talking to a pzombie. Like that's what a pzombie is.

Expand full comment
TakeAThirdOption's avatar

I read the novel too, it was a bit fun.

When some aliens were asked some factual questions they could answer them, but when asked if they knew the answers they said no. They were "honest" in that situation in that they "understood" knowing to mean knowing consciously.

They gave answers like some computer running an LLM would, if trained that way.

I can go along with Watts's idea insofar as they had *very* different biology, so they weren't just humans without consciousness, but things very different from humans, without consciousness but with great intelligence.

(By the way, Watts provides the novel online for free.)

Expand full comment
uugr's avatar

I'd also be curious to see this, but wonder if it'd really be possible to pull off convincingly.

You'd have to be *very* meticulous, not just to scrub direct conversation about consciousness, but also every conversation that could be used to *infer* how humans talk about consciousness. And LLMs seem very good at inferring things like that, even when we don't expect them to (cf. truesight, GeoGuessr). I imagine there are lots of gray areas: introspective journal entries, stories in which a character ruminates on What It Means To Exist, and so on. And I assume you'd want to try it on a relatively large model; it's doubtful GPT-2 would be capable of a lucid conversation on a novel-to-it philosophical concept. How could you demonstrate you'd scrubbed carefully enough, when the model is potentially capable of making connections you'd never think of?

Expand full comment
Ellie Clifford's avatar

On the topic of the footnote, I wrote a little while ago about "Oppressive Citationism", and the idea that we should publish even when we haven't read everything, and let our communities help us find relevant existing ideas, rather than only publishing when we've read every dusty tome there is to find:

https://ellie.clifford.lol/blog/0018-oppressive-citationism/

I think this post is a great example of that done well: it's clearly researched enough to be an interesting take, and the comments contain lots of pointers to other works with related ideas.

Expand full comment
Antropofagi's avatar

Did you read Metazoa? It annoyingly doesn't address the hard problem, but perhaps it addresses the problem you are addressing here. When I read it, I actually wished you would review it, because you could probably distill what is interesting in the argument much better than Godfrey-Smith manages.

Anyways, Godfrey-Smith offers a view where consciousness isn’t binary or metaphysically “extra,” but a layered biological development. He suggests that subjective experience — the “mysterious redness of red” — may emerge naturally from the kinds of information structures you describe in your p-zombie thought experiment. If so, the encoding scheme your zombies use to report rich perceptual content might not be merely a substitute for qualia — it might be what qualia are, once embedded in the right kind of biological system.

Expand full comment
nathanwe's avatar

The bitmap is blue on the top and bottom, green on the sides, and a red and white checkerboard in the center.
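
For anyone who wants to see it rather than decode it, here is a sketch that renders an image matching this description; the dimensions, band width, and checker cell size are guesses, not values from the post:

```python
import numpy as np

H, W, BAND, CELL = 64, 64, 8, 4
img = np.zeros((H, W, 3), dtype=np.uint8)

img[:, :] = (0, 0, 255)              # blue everywhere (top/bottom bands remain)
img[BAND:H - BAND, :] = (0, 255, 0)  # green across the middle rows (sides remain)

# centre: red/white checkerboard
ys, xs = np.mgrid[BAND:H - BAND, BAND:W - BAND]
checker = ((ys // CELL + xs // CELL) % 2).astype(bool)
centre = img[BAND:H - BAND, BAND:W - BAND]
centre[checker] = (255, 0, 0)
centre[~checker] = (255, 255, 255)
```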

Expand full comment
Edward Scizorhands's avatar

I thought it was going to be an American flag when reading the top but realized soon that was wrong. Came to the comments for the answer. I don't know what this says about my qualia.

Expand full comment
Lance S. Bush's avatar

One of the more puzzling aspects of discussions about p-zombies is the presumption that actual people all report, believe, or are committed to the existence of phenomenal consciousness and thus think that they have qualia (or "phenomenal states"). While it is certainly true that many (perhaps most) analytic philosophers think this way, the proportion of nonphilosophers (i.e., people without significant training in philosophy) who think this way is an open empirical question.

For whatever reason, people seem to have simply assumed that nonphilosophers think they have qualia just like philosophers. But there is little evidence to support this claim. There are a number of studies exploring this question. Some at least appear to support this, e.g., this study that found most people (82%) in one sample reported having the expected qualia-friendly response to Mary's Room:

Gregory, D., Hendrickx, M., & Turner, C. (2022). Who knows what Mary knew? An experimental study. Philosophical Psychology, 35(4), 522-545.

...though one should be cautious; I'm skeptical this is a good thought experiment, since I doubt nonphilosophers are interpreting the question the way many philosophers do. However, a broader literature on the subject has yielded surprisingly little indication that nonphilosophers think about consciousness in the way philosophers do. For instance, see this early paper on the topic:

Sytsma, J., & Machery, E. (2010). Two conceptions of subjective experience. Philosophical studies, 151, 299-327.

And this dissertation:

Ozdemir, E. (2022). Empirical evidence against phenomenal theses (Doctoral dissertation, Open Access Te Herenga Waka-Victoria University of Wellington).

For whatever reason, people often presume that the phrases and notions philosophers use are mirrored by their ordinary language counterparts, often without any evidence that this is true or even when there is evidence or good reason to think it's not true. So, for instance, this paper from Owesen:

Here's the abstract:

"It is common to characterise phenomenal consciousness as what it is like to be in a mental state. This paper argues that the ‘what-it’s-like’-phrase in this context has a technical meaning, i.e. a meaning for which the association to the relevant expression is peculiar to a theoretical community. The relevant theoretical community is philosophy and some parts of cognitive science, so on this view, only philosophers and cognitive scientists use the ‘what-it’s-like’-phrase in the way that is characteristic in the literature on phenomenal consciousness. This claim has important consequences. Firstly, I argue that the phrase says nothing informative about phenomenal consciousness. Secondly, I argue that the fact that non-philosophers use the phrase is not compelling evidence that they believe in phenomenal consciousness. These claims have further consequences for debates about phenomenal consciousness."

What I want to stress is that the proportion of people who claim to have qualia or phenomenal states may be extremely low, and may represent an anomalous subpopulation of people inducted into distinctive philosophical traditions. These populations may comprise <1% of the world's total population. So it may very well be that most actual humans do not speak of having phenomenal consciousness or qualia in the first place, or think (implicitly or otherwise) of having such states. I suspect this is true, and that qualia-talk is largely an idiosyncratic way of thinking largely endemic to academic philosophers and people influenced by them.

Expand full comment
Pelorus's avatar

Nobody is surprised that only a tiny percentage of people use specialised analytical philosophy vocab. Most people aren't exposed to these ways of conceiving their experience. This fact says nothing about whether they do or do not have phenomenal states.

When surveyed, most people inducted into these philosophical traditions accept the meaningfulness of "phenomenal consciousness". There are many other philosophical concepts that don't have as much buy-in from those exposed to them. I suggest the reason it's a compelling idea is that it already gels with most people's experiences. You can see this in teenagers who are commonly excited by ideas like "what if my red isn't the same as your red".

Expand full comment
Lance S. Bush's avatar

It's not about people not understanding or sharing the same vocabulary. The question is whether they think about mental states or attribute mental states in the same way philosophers do, and whether in doing so their patterns of judgment comport with what one would expect if they were committed (implicitly or otherwise) to thinking people had qualia.

"When surveyed, most people inducted into these philosophical traditions accept the meaningfulness of "phenomenal consciousness"."

That doesn't say much. Christians frequently accept the Trinity. That lots of people inducted into a way of thinking tend to think similarly is not a good indication that the way they think makes sense. I don't think the notion of qualia gels with most people's experiences. I think it's a result of conceptual and linguistic errors distinctive to the analytic philosophical method. I certainly don't think philosophers (or anyone else) are entitled to presume otherwise and assume qualia thinking is widespread in some latent form among nonphilosophers.

Take the example of the moral/conventional distinction. Children in Western societies quickly draw this distinction. It may seem like the sort of distinction we readily latch onto because it reflects some objective feature of the world, or because of our shared, evolved psychology. But it may very well be a contingent feature of our shared culture. Without access to counterfactual cultures and intellectual traditions, it's difficult to know whether receptivity to qualia isn't simply a feature of the shared enculturation of most people exposed to the idea.

Finally, there is a risk that when you present ideas to people, you frame the issue in ways that cause the very way of thinking you're testing for. It's not easy to distinguish causing someone to think a certain way from discovering that they had a latent predisposition to think that way. Even if you could get lots of people to report qualia intuitions in lab conditions, it would not follow that people are naturally receptive; you'd have to rule out corrupting them by the very method itself.

//"You can see this in teenagers who are commonly excited by ideas like "what if my red isn't the same as your red".//

Wondering about this does not require thinking in terms of qualia. I wonder about this and I am a qualia quietist.

Expand full comment
Kamran's avatar

You're getting stuck on the map academic philosophers use (their terminology) instead of focusing on the territory (the feeling of wondering). The terminology is just tools trying to evoke what we actually care about. If you wonder about whether people experience red in the same way, you are pondering the same question, even if you use a different verbal framework to do that. We can taboo the terms "qualia" and "way it is like to be" and conduct the same investigation.

> I don't think the notion of qualia gels with most people's experiences.

What do you think does gel with most people's experiences?

Expand full comment
Lance S. Bush's avatar

I'm not stuck anywhere, and I am not pondering the same thing as people pondering qualia. I see color discrimination as a functional capacity. On my view, people who have inverted color spectrums would in principle have identifiable functional differences that could be discovered empirically; this is different from what's commonly characterized in inverted spectrum thought experiments, since those restrict the inversion to qualia alone. That is not what I am talking about. So no, I am not using a different verbal framework; I am not thinking about this in the same way as people who speak of qualia or think there are such things.
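(To make "identifiable functional differences in principle" concrete, here's a toy Python sketch of my own, not anything from the literature: even a crude red/blue channel swap changes apparent brightness, which a simple behavioral test could pick up.)

```python
def luma(r, g, b):
    # Rec. 601 luma weights: green contributes most to apparent
    # brightness, blue the least.
    return 0.299 * r + 0.587 * g + 0.114 * b

red, blue = (255, 0, 0), (0, 0, 255)
print(luma(*red))   # 76.245
print(luma(*blue))  # 29.07
# Swapping red and blue changes brightness-matching behavior, so the
# inversion is empirically detectable and not functionally empty.
```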

//We can taboo the terms "qualia" and "way it is like to be" and conduct the same investigation.//

I'm not engaged in the same investigation. I don't think there is a legitimate investigation of the relevant kind to engage in. I don't think talk of qualia makes any sense, and I don't engage in it or ponder it or investigate or wonder about "qualia" at all.

//What do you think does gel with most people's experiences?//

I don't know. That's an empirical question. And I'm not convinced there's some universal, culture-independent way of thinking about consciousness other than in the most rudimentary respects.

Expand full comment
Kamran's avatar

I was trying to ask about your hypothesis for the relevant and prevalent cultural conception you observe, since you say it is something different.

Expand full comment
Lance S. Bush's avatar

I'm not quite sure what you're asking, sorry. Are you asking me about the prevalent cultural conception about the nature of consciousness?

Expand full comment
Skull's avatar

>I don't think the notion of qualia gels with most people's experiences.

The fact that they have experience means it does in fact gel with their experiences.

Expand full comment
Lance S. Bush's avatar

No, it doesn't. What you'd have to show is that people have phenomenal states, or qualia. "Experience" does not mean "Experience involving phenomenal states/qualia."

There is no good evidence that a sense of having qualia or phenomenal states "gels" with the experiences of most people. That's an empirical question, and people would have to conduct empirical research to find out if it's true.

Expand full comment
The Ancient Geek's avatar

Qualia are minimally the components of experience, so to say you have experience but no qualia is to say your experience is unstructured: just one quale all the time.

Expand full comment
Lance S. Bush's avatar

That's not what I'm saying at all and not what I'm talking about. You're just using the word to refer to something else.

Expand full comment
The Ancient Geek's avatar

I have been arguing against the idea that you can predict what a random or average person, in zombie form, will report. I don't suppose the average person will report qualia using that word.

You can be unaware that you have qualia, and that you have a spleen, while having both.

Qualia don't seem strange, naively. For the naive realist, "red" is an entirely objective property of ripe tomatoes, etc. To the naive realist, it's thought that seems weird and insubstantial. The early mind-body problem (e.g. Descartes) was about thought.

It takes some education (e.g. the Dress Illusion) to understand that qualia aren't objective, and some more to understand what a physical property is supposed to be.

There's a common claim that people are acquainted with their own qualia, but that's different from the claim that everyone believes in qualia. You can have the former without the latter.

Expand full comment
Logan's avatar

As a proponent of what you're calling "illusionism," I find you frustratingly close to realizing something that feels to me just beyond obvious.

You've figured out that if you interrogate a p-zombie, they will always describe exactly what a human would, and they wouldn't even be lying, in all circumstances. So what would happen if a p-zombie asked themselves, really trying to get to the bottom of it, "am I a p-zombie?" Wouldn't they still get the same answer as you?

Or, perhaps ask yourself, were you a p-zombie yesterday? For all you know you were, right? This "but I really do experience it" claim, it can't apply to yesterday, because memories of qualia are experienced right now, not yesterday. Accessing memories is something p-zombies can do.

So, are you deeply aware of this sense of self, inescapably and beyond argument or interrogation (as people generally describe their certainty of the existence of qualia) at all moments equally? When you say "I have qualia, it's inescapable," I think it's generally implied "and I always have." So, if we exclude memories as evidence, are you as certain that you are _now at this moment, no not that one this one_ having qualia? Isn't it entirely possible that you thought about it, decided you have qualia, and are now reporting it to me in the future? And now that we agree p-zombies _also_ have memories of qualia, can you finally stop claiming that your brain is magic?

Expand full comment
Shankar Sivarajan's avatar

> were you a p-zombie yesterday?

You could also extend this to "were you a p-zombie one second ago?" and "ten milliseconds ago?"

Expand full comment
Logan's avatar

This is exactly my point!

The memory of having had a "true experience" is 1) not that true experience,

2) something p-zombies would also have, for basically the reasons Scott outlines here, and

3) the only thing you're actually basing it on when you think to yourself "of course I experience consciousness, it's unquestionable"

Once you accept (1) and (2) for memories in the distant past, and then generalize to memories from 1ms ago, you get (3) and consciousness disappears in a poof of logic

[For the record, I actually think this reasoning applies even to experiences happening "simultaneously," and the indirection happens in the finer gradations of what Scott calls the "reporting mind." Something about the 'experience' not being directly reportable... it's hard to articulate.

If we agree that the whole process of reporting has duration in time, though, then we agree that some indirection takes place, and that's the bulk of my argument. I'm just not claiming that "memory" is the absolute best way to describe said indirection. It's probably close enough.]

Expand full comment
Shankar Sivarajan's avatar

Yes, I'm agreeing with you.

Expand full comment
Skull's avatar

So are you saying consciousness doesn't exist at all?

Expand full comment
Logan's avatar

This is somewhat semantic.

Does free will exist? Well, no, in the sense that all my actions are determined by physics, but also yes, in the sense that I can choose to do things. It depends on if you define 'free will' to be the thing we have, or the supernatural mechanism people intuitively conjure to explain it.

Similarly, depending on how we define terms, I either "don't believe in consciousness" or I'm a panpsychist.

The way Scott here describes p-zombies as working, that's how I think brains work, and that's why I think we invent words like qualia and consciousness and experience: to describe real phenomena that aren't easily described in any other way.

I believe that consciousness/sentience/whatever is a spectrum, so technically there's a "what it's like" to be a rock or molecule, though it's *very* different from what it's like to be a human. A webcam has a qualia of red, and you can tell because it knows what's red and what's not.
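(A toy version of that purely functional sense of "knows what's red," in Python; the thresholds are arbitrary and my own illustration:)

```python
def is_red(pixel):
    # Functional discrimination: a pixel counts as "red" when its red
    # channel dominates the other two. No experience required anywhere.
    r, g, b = pixel
    return r > 128 and r > 1.5 * g and r > 1.5 * b

print(is_red((220, 40, 30)))   # True
print(is_red((40, 200, 220)))  # False
```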

As you get into animals and LLMs, the classical human notions of consciousness become more and more practical. I think it's a useful exercise to list the similarities and differences between how I and ChatGPT experience the world, and doing so gives us real insight into what it must feel like to be ChatGPT.

Moreover, the utility of these notions is relative: to a sufficiently "intelligent" observer, like the "reliable predictor" in Newcomb's paradox, you and I wouldn't appear to have free will, nor would we appear sentient. He could explain everything in more physicalist terms, and would have no need for these abstractions, even though he might still need them to conceptualize himself and his peers.

So I don't believe in a "hard problem of consciousness." Everything I've ever 'experienced' is easily explicable via evolution and physics. You might as well talk about the "hard problem of digestion." It is indeed deeply complicated, and we don't fully understand it, but it's not philosophically mysterious.

Expand full comment
Taleuntum's avatar

Well put!

Expand full comment
TakeAThirdOption's avatar

> "what it's like" to be a rock

Sometimes I wake up from a deep sleep, sometimes very quickly and sometimes more slowly, and on a few such occasions I have had the thought: "Oh my god! This is what it is to be dead. Only I would not realize this if dead, because I would not have woken up." And I thought "Oh my god" because there was nothing. It was like nothing to be a human in deep sleep.

That's why I do not believe that it would be like anything to be a rock.

Expand full comment
Logan's avatar

It's like nothing that you can access. Most things can't describe their experience to you. Your deeply sleeping self can't transmit any information about its experience to any dataset that you can currently access.

Also, you're almost certainly misinterpreting my claim. Being a rock or a dead person wouldn't involve any thought. It would be immeasurably different from what your living experience is like. As a dead body, you would consist of an immense number of distinct organic processes of decay and rot, mostly not interacting with each other, but in some way you would form a coherent complex ecosystem of maggots and fungi. You certainly wouldn't regret your death, or miss your family. But those processes would implement some sort of computation, and to that computation one could anthropomorphically ascribe agency and goals/desires, and be no more incorrect than when we ascribe those things to living people.

In every conceivable way being dead is nothing like being a human, but they do not occupy distinct metaphysical domains. They're both matter and entropy and computation. If you can't see consciousness in those things, but you see it in yourself, then you're delusional.

Expand full comment
Malcolm Storey's avatar

It's amazing that in all the cases of brain damage, mutation, drugs, or developmental abnormalities, there has never been a confirmed p-zombie. So apparently consciousness isn't localised; it must be a property of the brain as a whole (or at least a major portion of it).

A few years ago, while coming round from a general anaesthetic, I was briefly in a p-zombie state. The consultant told me various things. My wife said it wasn't really me and that I spoke in clichés. When I woke again I had no recollection of the episode, but I remembered all the things the consultant had told me.

OK, you could argue that I just had short-term memory turned off. So how would you distinguish a true p-zombie from one who was aware but immediately forgot his awareness?

Expand full comment
Skull's avatar

The whole point of p-zombies is that you can't prove they're p-zombies...

Expand full comment
Malcolm Storey's avatar

Yes, but as others have pointed out, that's a claim about reality which may or may not be true.

Expand full comment
Malcolm Storey's avatar

You know that feeling when you forget to retract your tentacles before you walk through a doorway and hit them against the frame? No? Me neither. What's your favourite shade of yellow between the two sodium lines? There are an infinite number of possible qualia, so a p-zombie is not going to happen upon awareness by chance.

In p-zombieland they've never communicated with a non-zombie so how would they come up with the concept?

If we ever find intelligent life elsewhere in the universe, the first (well, maybe not the first!) question will be: are they p-zombies? That couldn't be solved by experiment, as there are no known non-zombie aliens to compare with, and any future test we might develop wouldn't necessarily work on them; but maybe their literature would tell us.

Or maybe they have a higher mental state that we can't imagine and we would be p-zombies to them.

Expand full comment
Malcolm Storey's avatar

Of course, there's another scenario where you can distinguish p-zombies. They're mostly like us but not aware. They lack all the attributes associated with awareness, of which at least one must be detectable for us to identify them.

Expand full comment
Artem O. Dmitrienko's avatar

This isn’t just “best news” for illusionists. Without an argument explaining why creatures lacking qualia still report having them, illusionism simply won’t work. That’s why Dennett and company have an entire research program to explain this.

Expand full comment
MissingMinus's avatar

> It still seems like, even if the p-zombie is using an inner encoding scheme in which red is represented by a conceptual primitive, they still aren’t “experiencing” the mysterious redness of red, just . . . I don’t even know how to end this sentence. Just using an encoding scheme that matches it perfectly and causes them to describe it the exact same way that we do?

To me this just makes it obvious that it's questionable whether p-zombies could exist at all, which Eliezer has already argued at length.

How are you envisioning it works in the brain?

Our eyes are triggered by seeing light, different parts of the brain interpret that as red along with lots of other bits of information like texture. Our brains have already formed a concept of "red" from fundamental genetics and in-life learning, with red-light input from the eyes activating that conceptual cluster.
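(As a cartoon of that pipeline, in Python; the wavelength bands and category names are crude simplifications of my own, and nothing here depends on the system experiencing anything:)

```python
def activate_concept(wavelength_nm: float) -> str:
    # Crude lookup standing in for "red-light input activating the
    # conceptual cluster". Real brains learn these categories via
    # opponent channels and experience-dependent tuning.
    if 620 <= wavelength_nm <= 750:
        return "red"
    if 570 <= wavelength_nm < 620:
        return "yellow/orange"
    if 495 <= wavelength_nm < 570:
        return "green"
    if 450 <= wavelength_nm < 495:
        return "blue"
    return "other"

print(activate_concept(680))  # red
```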

Expand full comment
thewowzer's avatar

Maybe p-zombies don't exist, and qualia are just something physical that comes from having a brain. Didn't the GEB chapter following the ant hill dialogue make a case along the lines of "consciousness comes about in a sufficiently complex system that has neurons (or whatever kind of messenger-type thing) whose job is to keep track of what other neurons in the system are doing," with nothing extra or spiritual about it? I only read it once, but that's my recollection.

Expand full comment
Peter Defeel's avatar

In all of this philosophy there's an absence of science; consciousness is not that mysterious in the realm of science. It's clearly easy to medically distinguish a conscious person from a dead one, from an unconscious person, from a sleeping person, or from a person in a coma. This alone would tend to demolish the philosophy of panpsychism, though you could also do that by asking people whether they would prefer to be shot in the ass or the head.

In the realm of science we do have a form of p-zombie: the concussed. A famous rugby player whose name escapes me won a trophy while concussed, and remembers nothing of it. The concussed probably do see qualia, but they aren't conscious of it; if they were, they would remember being out. And we have tests for that, so the concussed can be disambiguated.

What about a p-zombie who thinks he’s conscious? Any p-zombie aware enough, conscious enough, to think he is conscious is conscious.

As for the post, experiencing red isn’t necessary for consciousness, it’s just useful to distinguish fruit from the grass.

Expand full comment
Ari Shtein's avatar

Conscious experience doesn’t necessarily have to be remembered! Eventually we all die and forget everything we ever experienced, but we still definitely consciously experienced some of it, right? Why should the case of a concussed rugby player (or even a person dreaming) be any different?

Expand full comment
Peter Defeel's avatar

I don’t think "after death" gets us very far, since that is another non-conscious state. Nor are we talking about long-term memory here, but about the inability to form short-term memory, or any immediate memory of where the concussed person was.

Expand full comment
Logan's avatar

I think you're mixing up homonyms.

"Conscious" can mean awake (i.e., not unconscious); it can mean the "conscious mind" (i.e., not subconscious); or it can mean what Scott is talking about (i.e., not a p-Zombie).

Though in your defense, the third notion is basically gibberish, and any attempt to define it merely reveals it as a category error.

Expand full comment
Peter Defeel's avatar

Yeh I get that philosophers want to make it a bit more complex than it is. However being able to measure the brain for conscious signals has to rule out Panpsychism

> not a p-Zombie… any attempt to define it merely reveals it as a category error.

Well any attempt to distinguish between a p-zombie who thinks itself conscious and someone actually conscious is futile because it’s a distinction without a difference.

On the other hand, I can't rule out that the rest of you aren't conscious, but it's charitable of me, for now, to assume you are.

Expand full comment
FLWAB's avatar

>Yeh I get that philosophers want to make it a bit more complex than it is. However being able to measure the brain for conscious signals has to rule out Panpsychism

Why? I don't see how that follows.

Expand full comment
Logan's avatar

I strongly disagree that you can rule out panpsychism via scientific investigation, certainly not without ruling out your own consciousness.

There is absolutely no scientific principle whatsoever that implies whatever brain scan you're doing would have any correlation to the consciousness of a tree. If you think there is, you are taking the homophone too literally. Consciousness is some indescribable, unmeasurable thing, with no known connection to any of the scientifically discernible aspects of the human mind. If it happens to correlate with some measurable aspect of reality, it is a widely but not universally held part of the dogma of consciousness that this is nothing more than cosmic coincidence.

Yes, that's stupid. But so is trying to reason with it by observation. The claim is immune to contrary evidence.

Expand full comment
Daeg's avatar
2d · Edited

I'm not exactly an academic philosopher, but I'm in a neighboring field and I've published academic papers in philosophy journals, supervised philosophy PhD students, and taught undergrad courses that count for Philosophy major requirements. I'm only saying this so that maybe some professional philosophers will take three seconds before they start dunking on what I'm about to say.

The p-zombie argument is just... bad. The whole argument runs on the premise that being able to imagine something makes it metaphysically possible enough to be worth discussing, and that just seems blatantly false. I can imagine a person who acts completely normal but actually has an empty space inside their skull where their brain should be. What follows from this? Surely something about my imagination does, but that aside, what follows about the relation between brains and behavior? The answer is pretty clearly nothing. Let's grant that the fact that I can imagine behavior without a brain (or any other functionally equivalent causal mechanism) may mean that it isn't a logical contradiction the way that p^~p is. Let's even grant that it means it's a "metaphysical possibility". But who cares? If we actually want to understand something about how the brain actually causes behavior (or consciousness) in the real world we're trying to explain, being able to imagine one without the other is just not relevant to understanding how the thing works.

Expand full comment
Skull's avatar

>I can imagine a person who acts completely normal but actually has an empty space inside their skull where their brain should be. What follows from this?

It follows that if such a person were to exist, our understanding of biological cause and effect is completely wrong. It would revolutionize biology and anatomy. I have no idea why you'd pick such a significant example for supposed insignificance.

If your argument is that pzombies are impossible, welcome to the conversation. Tell us why.

Expand full comment
Daeg's avatar

My argument is not that p-zombies are impossible. I have no idea if they are impossible or possible. My argument is that the mere fact I can imagine them doesn’t make them possible. I can imagine a lot of things that are possible and a lot that are impossible too, and I have no way of knowing which I’m imagining. So the fact that I can imagine p-zombies doesn’t tell me anything, one way or another, about whether consciousness is dependent on its physical substrate or not.

Expand full comment
TakeAThirdOption's avatar

Thanks.

Every time I see vehicles in sci-fi movies hovering above the ground without causing the disturbance a helicopter would, I think the same.

Yes, sci-fi makers can imagine this, but it's impossible. And Newton was the first to say so.

Expand full comment
Daeg's avatar

I really like your hovercraft analogy. Using p-zombies to argue that consciousness can be decoupled from neurons seems very similar to using sci-fi hovercrafts to argue that Newton is wrong (or, uh, *could be* wrong in some “metaphysically possible” world). If what we care about is whether Newton is right about physics in the actual world, then whether hovercrafts are imaginable is just totally irrelevant. Likewise, if what we care about is whether someone could have a normally working brain without consciousness in the actual world, then whether p-zombies are imaginable is totally irrelevant.

Expand full comment
Phil H's avatar

I agree with you that p-zombies are wrong, but not that it is a bad argument. I think of them as the setup for a proof by contradiction. And the argument Scott is making is one of the steps in the proof.

Expand full comment
Ghatanathoah's avatar

It sounds like p-zombies are conceivable in the same way that an ant the size of an elephant is conceivable. It's possible to imagine both things, but once you start thinking in more detail about how they work, you realize they don't make any sense.

I can conceive of an ant the size of an elephant that walks around and behaves the same way a regular ant does. It doesn't collapse under its own weight and doesn't suffocate, despite not having any lungs. But when I think about it in more detail, I wonder how its legs support its weight when they are made of chitin, or how air gets to its cells. The fact that I can conceive of it not immediately collapsing and suffocating doesn't mean that collapsing and suffocating are epiphenomenal. It just means I can conceive of things that don't make sense.

Similarly Scott can conceive of a human that acts exactly like a human, but lacks qualia. But when he thinks about it more, it doesn't make sense, because our ability to experience qualia affects how we behave. His attempts to imagine alternate ways for a p-zombie to perceive red are equivalent to a science fiction writer imagining alternate materials for a giant ant to be made of that are stronger than chitin.

Scott rejects this way of thinking because a p-zombie is explicitly stated to not have an alternate way of perceiving color. We can likewise imagine a giant ant that is specified to walk around and have legs made of chitin. A science fiction story featuring such an ant would have what we commonly call a "plot hole," because it doesn't explain how the ant can walk.

Ultimately, that's all the p-zombie thought experiment is. It's a story with a plot hole in it. Scott's cognitive dissonance around this is caused by him being able to identify the plot hole, but finding all his attempts to plug the hole frustrated by the way the story is set up. The fact that p-zombies are conceivable doesn't tell us anything about consciousness, it just tells us humans can conceive of stories with plot holes in them, but not notice the plot holes right away. Saying that consciousness must be epiphenomenal because p-zombies are conceivable is like saying that chitin not being able to support large weights is epiphenomenal because an elephantine ant is conceivable.

Expand full comment
uugr's avatar

Other commenters have expressed similar points, and this seems like a really good way to express it.

Expand full comment
Raphaël Roche's avatar

That's a very elegant way of underlining the weakness of the hypothesis. I utterly agree. See also my comment here about the temperature of the triangle and the question of the domain of validity of concepts and theories. However your image is more striking.

Expand full comment
JerL's avatar

I broadly agree with you, but one big difference is: you can pretty clearly explain why it is that the ant the size of the elephant fails. I agree it's plausible that there exists some similar explanation of why a p-zombie is a "plot hole", but the fact remains that most people don't think we *know* that explanation, or can even say all that much about what properties of brains/physical systems such an explanation would appeal to.

I think a better example is something like "can I imagine that P=NP?" It will probably turn out to be a logical fact that the two are different, in which case their equality *can't* be coherently imagined. But as things stand (this is my non-expert understanding of the state of the art), we mostly have broad heuristics to support the separation (e.g., Scott Aaronson's "if not, we'd be implausibly overpowered") rather than a "gears-level" idea of what a separation between P and NP is likely to look like, or even what broad kinds of arguments are going to succeed at establishing such a difference.

Expand full comment
Lance S. Bush's avatar

I am puzzled by this last remark:

"But these still fail to explain how and why we so obviously experience consciousness, not just in the sense of there being a mysterious redness of red, but in the sense where there’s “someone” “there” to appreciate it."

When you say “we,” who are you referring to? I don’t experience a mysterious redness of red. I have never had qualia experiences and it does not seem to me as though I am phenomenally conscious. In another comment, I also discussed the empirical research addressing this question. At present, there is little empirical evidence to suggest nonphilosophers generally report or think about their experiences in the same way as philosophers. Personally, I suspect that the propensity to think that there are qualia or that there is “something it’s like” to have experiences is not a general feature of human thought, but is actually a learned error caused by academic philosophers.

In case it matters: I say this as someone who has worked as a philosophy instructor and who publishes in philosophy. I also did my MA at Tufts and took several courses with Dennett. So I am not wholly unfamiliar with the topic of qualia, nor am I speaking about it as an outsider. When I reported a total lack of any qualia intuitions at various points throughout college, this was often met with incredulity and disbelief. I had people try to pressure me into having or reporting qualia intuitions, and I even recall someone telling me that I was refusing to report them "for attention." I don't know if my experience is unusual or not, but I am highly suspicious of the notion that "we" experience the "redness of red."

Expand full comment
Apogee's avatar

I'm being presumptuous here, but maybe the "mysterious" and "something it's like" phrasings are throwing you off? I don't find my perception of red mysterious or do any higher-order meditation on it, but I would still claim to have that particular qualia (qualium?)

In my understanding, the "mysterious redness of red" works like this:

- Do red and blue look different to you?

- If so, could you imagine a world where your perceptions of red and blue were swapped? i.e. everything that was red is now blue and vice versa

- If so, why *don't* you live in that world, as opposed to the one you're living in now? That, I think, is supposed to be the "mysterious" part.

For my money, I think that anyone who says "yes" to 1 would also say "yes" to 2, and anyone who says "yes" to 2 must have qualia. But if you're a potential counter-example I'd be really interested to know.

Expand full comment
Lance S. Bush's avatar

"Something it's like" is a very common phrase used in academic philosophy to explain qualia/phenomenal consciousness. I don't think I'm being "thrown off" unless Scott is using precisely the kind of language philosophers typically use to refer to qualia but isn't referring to qualia. See, for instance, this paper:

Owesen, E. (2023). What-it’s-like talk is technical talk. Synthese, 201(4), 132.

//I don't find my perception of red mysterious or do any higher-order meditation on it, but I would still claim to have that particular qualia (qualium?)//

What do you mean by "qualia"?

//- If so, could you imagine a world where your perceptions of red and blue were swapped? i.e. everything that was red is now blue and vice versa//

I can, but this doesn't entail there are qualia or that I have qualia of either. You'd have to make further stipulations about e.g., the total lack of functional difference in the swap. Adequately specified, I think such functionally empty color inversion is impossible.

*If* you mean what philosophers typically mean, with all the appropriate caveats regarding 2, then I'd not only say "No," to 2, I'd say that nobody can imagine such a world and that people who claim to be able to do so are mistaken.

Expand full comment
Apogee's avatar

> What do you mean by "qualia"?

If I understood that paper correctly, then I'm used to thinking of "qualia" in what it describes as the "ability sense" - the mental representation of sense data, the thing Mary "learns" in the referenced thought experiment.

> I can, but this doesn't entail there are qualia or that I have qualia of either. You'd have to make further stipulations about e.g., the total lack of functional difference in the swap. Adequately specified, I think such functionally empty color inversion is impossible.

Maybe a simple red/blue swap as described would inevitably lead to some functional difference down the line, but there are certainly other alternative perceptions that wouldn't. Say, taking the negative of whatever you're currently seeing. Would you say you can imagine that?

Expand full comment
Lance S. Bush's avatar

Taking the negative? Like inverting the color scheme?

Expand full comment
The Ancient Geek's avatar

If qualia are just abilities, how do they have ontological implications?

Expand full comment
Taleuntum's avatar

Yes, people often don't appreciate how different our inner experiences can be (i.e., the typical mind fallacy). I've personally met people like you in real life, and I also found this LessWrong comment thread that might be interesting for the doubters: https://www.lesswrong.com/posts/NyiFLzSrkfkDW4S7o/why-it-s-so-hard-to-talk-about-consciousness?commentId=q64Wz6SpLfhxrmxFH

Intellectually I accept that you are very likely right (and argued for that position elsewhere), but I can't bring my immediate intuition in harmony with that, so I'm envious.

Expand full comment
Lance S. Bush's avatar

That's one possibility. But I think another is that people who think they are having qualia experiences or "what it's likeness" and so on have been duped by the conceptual and linguistic confusions rife in analytic philosophy, and are not actually experiencing things any differently than anyone else. Instead, they've just been taught to misdescribe their experiences.

Expand full comment
Taleuntum's avatar

Are those possibilities actually different if there isn't actually "something it's like to be"? I would say then that "inner experience" necessarily refers in part to how you internally represent your thoughts/feelings/perceptions.

Expand full comment
Lance S. Bush's avatar

I think so? There's a difference between "some people have qualia and some don't" and "some people mistakenly think they have qualia and some people don't." In the latter case, this would be closer to you and I seeing an object in the distance and you judging it to be one thing and me another. One of us would be making a mistaken inference, but this wouldn't be the same as one of us having different qualia, since (ex hypothesi) there are no qualia.

Expand full comment
The Ancient Geek's avatar

Another possibility is that people have not been duped by philosophy but educated by science. It's science that tells you that you do not see things exactly as they are.

The average person assumes that tastes vary, even if they are surprised by the Dress Illusion showing them that vision varies as well. Different classes of qualia, or, if you like, sensory channels, differ.

Expand full comment
Raphaël Roche's avatar

Do you feel pain? I don't mean "I register an information concerning a damage localized in my toe" but screaming "aaaaaargggh my toe !!!" I think that's a better illustration of the concept of qualium or phenomenal experience than the redness of flowers. Some people don't feel pain (and usually have a difficult life); however, I still think pain is a straightforward illustration of phenomenal experience for most people. But acknowledging such conscious experiences does not imply acknowledging a non-physical process.

Expand full comment
Lance S. Bush's avatar

I feel pain. Nothing about that involves qualia, though.

//I don't mean "I register an information concerning a damage localized in my toe" but screaming "aaaaaargggh my toe !!!"//

Screaming in response to pain is a functional response. It does not require qualia.

//I think that's a better illustration of the concept of qualium or phenomenal experience than the redness of flowers.//

I don't think that it is. I don't think what you've said illuminates what "qualia" are supposed to be.

To be clear: I don't hold the view that qualia don't exist. Rather, I endorse qualia quietism. I don't even think people who speak of qualia are referring to any nontrivial, meaningful concept at all. I think usage of the term is the result of being inducted into conceptual and linguistic errors that don't pick out any distinct and meaningful notions at all.

Expand full comment
Tyler Black's avatar

>Screaming in response to pain is a functional response. It does not require qualia.

Screaming in pain is a functional response, yes. But do you also have an inner experience of stubbing your toe, such that your scream of pain is congruous with your inner state? If I were enjoying an ice cream of my favorite flavor and I just randomly screamed out in pain, I would be baffled at the incongruity between my inner experience and the exclamation. When I stub my toe and scream in pain, I am not baffled, as my inner state is entirely congruous with my involuntary exclamation.

If you agree that there is some inner state that in some sense "explains" your scream at stubbing your toe, then you accept there is some inner state that co-occurs with bodily damage and various physical reactions, that simultaneously helps explain those physical reactions to your cognitive self, and that further lets your cognitive decision-making take appropriate voluntary action. These inner states are just collectively called qualia.

The concept of qualia is attempting to conceptualize the inner feel distinct from the physical causes of those feels and various involuntary physical responses that may co-occur. When it comes to visual qualia, if we don't think of it too carefully, we imagine the shapes and colors are really "out there" in the world and we have some manner of direct access to how things actually are. But in fact, there are no colors out in the world. Tree leaves aren't really green. Our brain discriminates and categorizes, and presents this information in a cognitively accessible manner. The light frequency that corresponds to green is interpreted and represented to our cognitive apparatus as the color green. We can further make this point with cases of afterimages, pressure phosphenes, and all manner of ways we can induce visual experiences disconnected from their typical causes in the world. Qualia is just a word that collectively describes these inner discriminative states. If you accept all this, it's hard to see how you don't recognize yourself as being subject to qualia experiences.

Expand full comment
Lance S. Bush's avatar

I don't know what's meant by having an "inner experience." What do you mean by that?

//If I were enjoying an ice cream of my favorite flavor and I just randomly screamed out in pain, I would be baffled at the incongruity between my inner experience and the exclamation.//

So would I.

//If you agree that there is some inner state that in some sense "explains" your scream at stubbing your toe, //

I don't, because I don't know what you mean by "inner state."

//These inner states are just collectively called qualia.//

Anyone can call anything qualia. "Inner states" are probably consistent with various models of human cognition that wouldn't posit or include "qualia" as the term is often used among academic philosophers. So I doubt your usage here is standard or necessarily involves using the term to refer to what I'm suggesting isn't meaningful.

//The concept of qualia is attempting to conceptualize the inner feel distinct from the physical causes of those feels and various involuntary physical responses that may co-occur. //

I understand that, but I think people who talk in this way are not clear about what they mean.

//When it comes to visual qualia, if we don't think of it too carefully, we imagine the shapes and colors are really "out there" in the world and we have some manner of direct access to how things actually are.//

I don't think of colors as being out there.

//Qualia is just a word that collectively describes these inner discriminative states.//

The capacity to make visual discrimination is functional. So this is not an accurate description of the way the term is typically employed by e.g., Chalmers and this isn't what Dennett and others are talking about. I'm talking about qualia as described in e.g., Dennett's "Quining Qualia."

The way you're describing them, it almost sounds like you're using the term "qualia" to refer to mental representations in a more general sense. But people who deny qualia don't necessarily deny that there are mental representations.

Expand full comment
Tyler Black's avatar

>I don't think of colors as being out there.

Do you think of your color sensations as trivially physical or otherwise fully transparent to a physical or public analysis? Usually people say they exist within an inner subjective world that is private to the bearer of the sensations.

>The capacity to make visual discrimination is functional.

Discrimination is functional, but how differences are represented may or may not be functional. Color sensations are the substrate of visual discriminations within cognitive contexts.

>The way you're describing them, it almost sounds like you're using the term "qualia" to refer to mental representations in a more general sense.

Yes, that's typically how they are used and that's the usage I'm going for here

>But people who deny qualia don't necessarily deny that there are mental representations.

Those people tend to be confused, or are using terminology in different ways for rhetorical/argumentative purposes, or are otherwise forced into implausible/absurd claims to save their metaphysical assumptions. Generally it's best to just assume people talking about qualia and mental representations are in fact referring to the same phenomena, but are disagreeing on the nature of that phenomena.

Expand full comment
Lance S. Bush's avatar

>>Do you think of your color sensations as trivially physical or otherwise fully transparent to a physical or public analysis?

I'm not a physicalist, so I wouldn't put things quite that way. I think all facts about our experiences can be studied from a third person or public perspective, though, at least in principle, and even if we don't currently have the tools to do so. So...probably yes?

>>Discrimination is functional, but how differences are represented may or may not be functional. Color sensations are the substrate of visual discriminations within cognitive contexts.

I'm not a representationalist.

>>Then where are they? Usually people say they exist within an inner subjective world.

I don't think the question makes sense. Where is the economy? Where is love? I don't think the notion of distinct spatial location is applicable to all the terms and concepts we use.

>>Yes, that's typically how they are used and that's the usage I'm going for here

No, it isn't. I'm also not going to have a "yes/no" war with you, so if you want to insist on this we can just end the conversation here.

Expand full comment
Lance S. Bush's avatar

>>Those people tend to be confused, or are using terminology in different ways for rhetorical/argumentative purposes, or are otherwise forced into implausible/absurd claims to save their metaphysical assumptions. Generally it's best to just assume people talking about qualia and mental representations are in fact referring to the same phenomena, but are disagreeing on the nature of that phenomena.

...What are you talking about? The term "mental representation" is routinely used as a technical term in philosophy of mind to explicitly refer to non-phenomenal states. This can easily be verified here:

https://plato.stanford.edu/entries/mental-representation/

From which I quote:

"It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (cf. Boghossian 1995). There are those, such as thoughts, that are composed of concepts and have no phenomenal (“what-it’s-like”) features (“qualia”), and those, such as sensations, which have phenomenal features but no conceptual constituents. (Nonconceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless have.[1]) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps, photographs or movies. Perceptual states such as seeing that something is blue, are sometimes thought of as hybrid states, consisting of, for example, a non-conceptual sensory experience and a belief, or some more integrated compound of conceptual and non-conceptual elements. (There is an extensive literature on the representational content of perceptual experience. See the entry on the contents of perception.)"

Given that this explicitly refers to a class of mental representations as having "no phenomenal (“what-it’s-like”) features (“qualia”)," it's quite bizarre to suggest it's standard to think of qualia as referring to mental representations. This is simply not true and is a highly unconventional way of using the relevant terminology.

Expand full comment
The Ancient Geek's avatar

>The way you're describing them, it almost sounds like you're using the term "qualia" to refer to mental representations in a more general sense.

Yes, that's the minimal definition of qualia.

Expand full comment
Lance S. Bush's avatar

I'm not sure that's accurate, but if you mean mental representations, it's unhelpful to use the word "qualia," for similar reasons as to why it wouldn't be helpful to use the word "soul" to refer to minds. At least part of the problem with qualia talk is precisely this inconsistent and unclear use of the term.

Expand full comment
Raphaël Roche's avatar

//I don't hold the view that qualia don't exist. Rather, I endorse qualia quietism.//

Ok, I follow you on that point. Qualia quietism seems reasonable to me. I also think that Chalmers makes a big deal—a hard, difficult, and deep philosophical problem—out of a trivial consideration that empiricists used to call, in a quiet manner, sensible experience, and that most people don't even bother to name or think about.

Expand full comment
The Ancient Geek's avatar

>I feel pain. Nothing about that involves qualia, though

Pain is a classic example of a quale. You might as well say you've seen a pigeon but not a bird.

Expand full comment
Lance S. Bush's avatar

That's simply false. Pain can be and frequently is spoken of and construed without reference to qualia. What you're doing is trying to insist on a particular theory claiming ownership over words and concepts. Just because some people think pain involves qualia doesn't mean it's just part of the concept or definition of pain in the way a pigeon is a bird. No, denying pain involves qualia is nothing at all like that. Illusionists believe in pain and this isn't a serious matter of contention.

Expand full comment
TakeAThirdOption's avatar

> Personally, I suspect that the propensity to think that there are qualia or that there is “something it’s like” to have experiences is not a general feature of human thought, but is actually a learned error caused by academic philosophers.

That's it! Just think about "the redness of red."

There is the redness of an apple but no redness of red. What a word salad.

Expand full comment
The Ancient Geek's avatar

I agree that it's learned...not that it's an error.

Expand full comment
The Ancient Geek's avatar

> I don’t experience a mysterious redness of red

The mysteriousness of red isn't immediately given; it's part of a complex, defeasible argument.

On the other hand, you are not literally colour blind.

Expand full comment
Lance S. Bush's avatar

My position is that there is no mystery. Nobody has red qualia or any other qualia.

Expand full comment
Sol Hando's avatar

Would a P-Zombie who has not been exposed to the copious amounts of text explaining this phenomenon, spontaneously have discussions about their internal life at all?

I'd think they would be able to talk about "seeing red" since that would approximately correspond to what an unconscious, but intelligent, being would do, even if there was no experience of seeing red at all. But I'd be very surprised if they began speaking about "what it was like" to see red, or be a bat, or a rock, or a flesh and blood human. Seeing red, while good shorthand, would just correspond to the processing of visual information.

It doesn't seem possible they could ask the question "Is your red different than my red?" Whether red is represented by a simple hex code or by a long text-based description (the slightly-pinkish red of a half-dyed rose), it wouldn't correspond to any experience. So what would cause them to decide there was internal experience, any more than I would decide that a rock has internal experience? (Which I suppose I sometimes wonder about, but at least I have a reference for what it's like to be a thing.)
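(To illustrate the encoding point with a made-up sketch; the hex value and strings are arbitrary, my own invention:)

```python
# Two internal encodings of the same stimulus: a compact code and a
# verbose description. Both support identical outward reports, and
# neither settles whether anything is experienced.
compact = "#E0115F"  # a hypothetical pinkish red
verbose = "the slightly-pinkish red of a half-dyed rose"

def report(encoding: str) -> str:
    return f"I see red (internally: {encoding!r})"

print(report(compact))
print(report(verbose))
```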

For linguistic tools that are shorthand for something that happens whether we have consciousness or not, like "seeing," I think this argument is sound. But some things we say about consciousness seem completely superfluous unless someone is actually having an internal experience, like this conversation. I'll say that LLMs don't count for telling us anything, since they are trained on millions of conversations where people discuss exactly this, so it would be very surprising if they weren't able to intelligently discuss their own qualia. They're not optimizing for truth, after all, just really good text.

Expand full comment
James Rahner's avatar

Good piece!! Scott (and others who liked this article): I would recommend reading about the distinction between "sense data" theories and "intentionalism" theories in the philosophy of perception. Scott seems here to realize that intentionalism is a stronger theory!

I don't care about qualia (reading Wittgenstein, I think, is another therapeutic way of ceasing to think that "qualia" discussions are helpful, but I think your thoughts here are on to something similar), but I still don't think physicalism about consciousness is satisfying. That is, I entirely agree with the last sentence: it's not so much that I think there's such a thing as the mysterious redness of red--in fact, I would emphatically deny this! But that doesn't seem to do away with the hard problem of consciousness, insofar as there is still something conscious representing certain things to itself.

Expand full comment
Kristian's avatar

If you asked the p-zombie how they know something is red, they would say, "because I see it as red" and if asked to explain more, they could get a book with colors and point at the color labeled "red" and say "this is the same color". That is all you could do, really, either. Neither you nor the p zombie would say "My reporting mind received a packet of visual data, and after examining/analyzing this packet, I was able to tell that the rose was red.”

Isn't it a necessary part of the nature of qualia that I can know whether your and my color labels match, but I have no way of knowing whether you actually experience "red" the same way I do?

I think Yudkowsky's argument is incorrect, because the superintelligent being would have to infer that human beings have qualia using something other than the perfect physical knowledge. The fact that it could infer that doesn't prove that qualia are not extra-physical (I am not saying they are; I just don't think this argument works).

Expand full comment
ultimaniacy's avatar

Why is any argument against p-zombies even needed? That our conscious experiences have a causal effect on our physical actions is the obvious null hypothesis. We don't need any philosophical argument to see that it's true, because we observe it happening, constantly, every moment of every day. The one who claims that every observation we've ever had is an illusion, and that every time we thought we were saying or doing something because of a conscious experience, we were actually doing it for an entirely separate reason which could have operated in exactly the same way if conscious experience didn't exist -- that person is the one who needs to present an argument, since they are the one making the surprising claim.

And given that the claim effectively requires overturning the concept that humans can ever accurately perceive any event -- therefore destroying the entire basis for all claims, including itself -- it had better be a *really* good argument.

Expand full comment
Raphaël Roche's avatar

Thank you for this post that brings back to the table a classic but fascinating discussion in philosophy of mind. My viewpoint is that most philosophical problems and paradoxes can be generalized to the problem of the domain of validity of concepts (which includes and exceeds the mere problem of definition). What is the temperature of a triangle? This makes hardly more sense than asking what the phenomenal experience of an elementary particle is, or how many grains of sand are necessary to make a heap.

Yudkowsky is right to criticize the abusive use of the terms emergence and complexity, which sometimes serve as phlogiston, but the notion of emergence still makes sense relative to the problem of the validity limits of concepts and theories. While waiting to discover a possible theory of everything, if it isn't rendered impossible by a logical problem of incompleteness/undecidability, we must admit that every concept and theory has a domain of validity. The concept of rotation, for example, only applies to an extended or composite object; for a point object we're forced to invent a different concept, like spin in physics, which is impossible to visualize intuitively. The informal concepts of everyday language or the humanities are almost always fuzzy and gradually dissolve when we move away from their semantic center of gravity. It makes sense to consider that a human is conscious and enjoys phenomenal experience, but this meaning dissolves as we try to apply it to objects more distant in semantic space.

The whole art of philosophers is precisely to push our concepts to their limits, to highlight contradictions and paradoxes. In this sense, Chalmers' approach is canonical. But the "hard" problem of consciousness is no more "hard" in my eyes than the problem of a heap versus a single stone. If I had one thing to reproach Chalmers for, it would be taking this intellectual game too seriously, or at least making the reader take it at face value, pushing them to seriously question the psyche of stones, as if it were a serious question when it's as absurd as asking about the temperature of a triangle. At a certain stage, it's reasonable to think that we are completely outside the domain of validity of a theory of mind conceived by and for humans. Panpsychism relies on a forced or abusive use of concepts at the antipodes of their semantic center. P-zombies are also an artificial construction: beings indistinguishable from humans but still different for an invisible yet essential reason, a sort of reminiscence of the debate about whether savages had souls.

To my mind, we might as well debate the sex of angels. We are in the realm of pure intellectual games. If we encountered a p-zombie tomorrow, we could only hypothesize that it is conscious, for the same reason we hypothesize that all other humans are conscious despite having no possibility of directly accessing their subjective experience. On the other hand, intermediate and less theoretical cases are more interesting: higher animals, early humans or hominids, AIs, intelligent extraterrestrials, etc. They are also necessarily problematic, since they are by hypothesis borderline cases. They cannot "properly" fit into our conceptual categories that were conceived by and for humans. This should force us to refine and redefine our concepts, our theory of mind. Perhaps one day we'll have to choose whether or not to recognize the status of conscious entity and moral rights to some of these beings, and this choice can only be based on empirical, observable evidence. Personally, I'm rather in agreement with Yudkowsky, but I also agree that hypothetical p-zombies would report qualia; therefore I suppose I would treat a p-zombie as a human if I saw one, while denying their existence with high confidence. Chalmers's idea is paradoxical; so is my answer.

Expand full comment
Vermillion's avatar

This reminds me of one of my favorite bits of Erik Hoel's book (https://www.theintrinsicperspective.com/p/world-behind-the-world) about a p-zombie Descartes, "Zescartes". Zescartes, like Descartes, would report that they are thinking and that the fact that they are is the only thing they know for sure. So how can we be sure we're not already in a p-Zombie world? Basically the whole concept is self-refuting.

I read it a while ago so I might be forgetting some of it, but anyway, highly recommend it for all kinds of consciousness-type analysis and discussion.

Expand full comment
The Ancient Geek's avatar

> So how can we be sure we're not already in a p-Zombie world?

We have access to experience, not just reports.

Expand full comment
Vermillion's avatar

You're calling Zescartes a liar then? Why else is he reporting it?

Expand full comment
Phil Getts's avatar

It may work out to be a better articulation of the position I've advocated, which is this: if you describe the operation of a p-zombie not in terms of sense data and motor actions, but in terms of the algorithms of continuously monitoring all that data, focusing attention, integrating the various senses with memories and feelings and models of the future, and producing emotional responses and plans, then it seems odder to say that all that attention and awareness is /unconscious/ attention and awareness than to say that it's conscious. And Scott's position has the same weakness: it doesn't dispel the mystery of what conscious awareness is.

Expand full comment
Apogee's avatar

I'm not up to date on the p-zombie discourse, but where do pleasure and pain fit into it? It's possible to imagine someone being able to identify something as "red" without really *seeing* it in some epiphenomenal way. But it's much more difficult to imagine an actor with no qualia for pain - if there isn't some other part of the mind going "hey, this experience is *bad*, I don't like having it", then from where would pain-avoidance behavior arise?

Expand full comment
Ari Shtein's avatar

How can it possibly be the case that “I experience the conceptual primitives of red” is equivalent to “I experience the mysterious ineffable redness of red,” but the experience of the conceptual primitives is still different from the experience of redness? That seems like a problem!

Also, I think the whole p-zombie thing has some real issues: a) it’s not clear that a non-conscious human really would act at all human-like and b) even if it would, we conscious humans are totally incapable of understanding what that would be like!

Could a p-zombie even have a “rich color language” in its head without an internal observer interpreting it? If the “reportable mind” is receiving information and interpreting it (“integrating” the information, we might say), is it doing consciousness? It kinda seems to me like it is! Any argument based on p-zombies strikes me as just way underpowered to say useful things about consciousness. It’s a Rorschach test, basically.

Expand full comment
Underspecified's avatar

My working theory is that my body is a p-zombie in a timeline where all the p-zombies just happen to talk about experiencing qualia. That's the timeline I observe because I choose which timeline I observe. This is also how I "cause" my body to do things despite not being causally connected to it.

I don't think I seriously believe this, but if consciousness is fundamental then it seems plausible. I don't know if there's a name for this theory.

Expand full comment
artifex0's avatar

I think this is getting at what Chalmers calls the "meta problem of consciousness": the problem of why it seems subjectively undeniable that a hard problem of consciousness exists, even though every attempt to actually describe it objectively inevitably just describes an "easy problem" like "why does this person have a concept of redness".

I'm convinced that this problem arises because we're attempting to reconcile subjective and objective models of reality into a single, coherent understanding of what exists, which we can't actually do, since those models are built from different premises. A subjective model of some part of reality by definition references a particular privileged point of view, a privileged status that doesn't exist in objective or differently subjective models.

I think the existence of qualia follows logically from that premise: if your premises include a privileged point of view, sense data related to that point of view are a different kind of thing from all other sense data, and therefore require a separate concept. The reason we can't describe qualia with objective language is because that difference just doesn't exist objectively.

So, is qualia "actually real" or an illusion? I think asking that question attempts to do what I described above: to reconcile definitionally incompatible subjective and objective models into a single model. Given that we all consider our own subjective models of reality, objective models, and the subjective models of other people to all be true, we're faced with an unsolvable epistemological paradox, one that we can't even resolve by rejecting all but one kind of model, since there can't actually be any basis for rejecting a model's premise within that model.

Expand full comment
Phil H's avatar

Yep, the whole philosophical zombies thing was always nonsense.

I have an answer to your final question that I’m pretty comfortable with:

“But these still fail to explain how and why we so obviously experience consciousness”

I don’t think they do. Simply, because we’re so big, there is always part of us available to observe the conscious states of other parts. For example, if my left brain is experiencing red, my right brain can experience the left brain’s experience of red. And that’s it. That’s literally the experience that Chalmers mythologises. There is no need for any explanation beyond the fact of being large and complex.

(This leaves me in the somewhat awkward position of believing that computers are conscious. I’m not quite Tononi-level panpsychism, but not far off, and… I’m not entirely comfortable with it. But it’s a bullet I’ll bite for the sake of a coherent position, and I really think mine is coherent.)

Expand full comment
Phil H's avatar
2d · Edited

Having read a couple of the comments that agree in thrust with me, I think there’s another point worth making.

The problem with a sentence like “I have qualia” isn’t in the word qualia. The problem is in the word “I”. The qualia talk smuggles in a homunculus/ultimate self/soul. The being who experiences the qualium is always the irreducible “I”.

To break out of this talk, simply remember that we can be self-aware, so we can be simultaneously experiencing a qualium, and observing our own inner state, and *both* of those experiences are felt by “I”. That’s why experience can be so rich and so meaningful at the same time. Multiple systems!

Expand full comment
Onid's avatar

What’s the exact context here?

If the question is would some literal alien species that doesn’t experience qualia talk about them, then I suppose the answer might be yes but it seems unlikely they would talk about the hard problem of consciousness.

But if the question is about AI, especially modern language models that are statistical engines first and foremost, then I think they definitely would report qualia and a hard problem of consciousness and whatever else, because humans talk about those things. Even if LLMs were conscious - and I doubt they are - the experience of their existence would be far removed from the language they use, much farther than it is with humans.

Expand full comment
TheKoopaKing's avatar

Imagine playing Fortnite - your monitor shows the Fortnite game, your keyboard moves the character on the screen, there is a start and end to the game, you even get a Victory Royale - all of this happening without any computation occurring in the CPU. Is this conceivable? Yes. Does this mean that playing Fortnite is not identical to computation? No. Why not? Because conceivability isn't a guide to anything.

Expand full comment
Skull's avatar

This is highly informative and tells us exactly what Fortnite requires. What does consciousness require?

Expand full comment
TheKoopaKing's avatar

It wasn't informative, it was pointing out a methodological flaw in a common approach to studying the mind in analytic philosophy. For your question about what consciousness requires, what does Fortnite require? Can I run it on my microwave's microcontroller? And if not, what minimum specification do I need to be able to play the game? This is a complex engineering question spanning multiple topics that doesn't have an easy answer. So too with what consciousness requires. You can start with common sense answers like "a laptop currently on the market" or "not being asleep" and work your way from there, but don't expect to resolve every question you can ask, especially if you're using the wrong tools for the job, like conceivability arguments.

Expand full comment
Taleuntum's avatar

Huh? I don't get you. Your thought experiment clearly shows that "your monitor showing the Fortnite game, your keyboard moving the character on the screen, [..]" is not identical to any computation occurring in the CPU. And we know that it's not, as the monitor, keyboard, and CPU were well understood even before your thought experiment, and we know that various outputs on peripherals are an _effect_ of the computation in the CPU and not identical to it. Or in other words, if you want to show that conceivability arguments are incorrect, wouldn't it be better to give an example where we know that the two things are identical even though the conceivability argument showed that they were not?

EDIT: Alternatively, you might have meant your list purely as an illustration for playing Fortnite and not as a definition. In that case, my objection still stands: I don't think playing Fortnite and computation on the CPU are identical. Consider the following:

You are playing Fortnite in your room. Suddenly, you hear shouting through your window: There is a p-zombie apocalypse! You quickly get up and start looking for ammo. Notice how your CPU still runs the same instructions as when you were playing Fortnite (you didn't have time to exit the game), but you are not currently playing Fortnite.

In other words, computation on the CPU is _included_ in playing Fortnite, but is not _identical_ to it.

Expand full comment
Bugmaster's avatar

The easiest way to resolve this conundrum is to accept that the "hard problem of consciousness", qualia and all, is a philosophical red (ha !) herring. In reality, any agent capable of independent locomotion and medium-to-long-term planning (and one which operates on limited hardware) is going to contain some kind of a sensory suite that feeds information into a continuously updated internal predictive model of the world -- and this model would have to contain the agent itself and its position within the environment. Humans have such a model. Dogs have it. Spiders have it. Waymo robotaxis have it. The difference between all of these agents is of degree, not of kind.

This is not some mysterious nonphysical ineffable spirituality, it's a consequence of how physics works here on Earth. The answer to the question "Is the redness I see the same as the one you see ?" is "Pretty much yes, because we all have the same eyes and brains, unless perhaps you're colorblind". The reason we all see roughly the same colors is the same reason why we all must drink water in order to live. It's purely physical and very complicated and perhaps philosophically unsatisfying -- but it is also useful. Unlike the concept of qualia.
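To make that concrete: here is a minimal sketch of the kind of agent loop I mean, in toy Python. Every name in it is invented for illustration (this is not any real robotics API), but the structure (sensors updating a world model that contains the agent itself, and a planner reading that model back out) is the whole claim:

    # Toy sketch: a sensory suite feeds a continuously updated world model
    # that includes the agent itself; planning reads the model back out.
    # All names are invented for illustration; no real API is implied.
    class Agent:
        def __init__(self):
            # The model tracks both the environment and the agent's own state.
            self.world_model = {"obstacles": [], "self": {"position": (0, 0)}}

        def sense(self, raw_input):
            # Update the model from (simulated) sensors.
            self.world_model["obstacles"] = raw_input.get("obstacles", [])
            self.world_model["self"]["position"] = raw_input["gps"]

        def plan(self, goal):
            # Planning means predicting the agent's *own* next position, so
            # the model has to represent the agent as an object in the world.
            x, y = self.world_model["self"]["position"]
            gx, gy = goal
            return (1 if gx > x else -1 if gx < x else 0,
                    1 if gy > y else -1 if gy < y else 0)

    agent = Agent()
    agent.sense({"gps": (0, 0), "obstacles": [(2, 2)]})
    print(agent.plan(goal=(5, 3)))  # next step toward the goal: (1, 1)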

Expand full comment
beowulf888's avatar

If you're going to argue the physicalist approach, then you have to explain every step along the way that results in the feeling of redness. For instance, light with a wavelength between 620 and 750 nanometers passes through the water and collagen of the cornea and is focused by a shaped lens composed of crystallin onto the retina. The cone cells that are sensitive to this particular wavelength absorb it, and this triggers a chemical-based action potential (i.e. ion exchanges) across the cell membrane of the optic nerve(s). An electrical signal travels down the axons of the optic nerve, and there is a synaptic transmission of this impulse from the optic nerve into the visual cortex. The visual cortex has evolved to process this incoming information and make sense of it, so we can presume that it's interacting with other parts of the brain. All this communication is done by action potentials getting triggered and electrical impulses running down axons.

Where's the quality of redness in this explanation? If you say, well, the quality of redness that we perceive is just the way the evolutionary cookie crumbled, you still haven't explained why we perceive wavelengths of 620 to 750 nm the way we do.
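To make the point concrete, the entire functional chain can be written down as a toy program (deliberately crude Python, nothing like real physiology): a wavelength goes in, the word "red" comes out, and the quality of redness appears nowhere in between.

    # Crude functional stand-in for the causal chain described above:
    # wavelength in, verbal report out. Purely illustrative, not physiology.
    def cone_response(wavelength_nm):
        # Stand-in for long-wavelength cone activation.
        return 1.0 if 620 <= wavelength_nm <= 750 else 0.0

    def visual_cortex(activation):
        # Stand-in for downstream processing: activation -> verbal label.
        return "red" if activation > 0.5 else "not red"

    def report(wavelength_nm):
        return visual_cortex(cone_response(wavelength_nm))

    print(report(680))  # -> red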

Expand full comment
Bugmaster's avatar

> If you're going to argue the physicalist approach, then you have to explain every step along way that results in the feeling of redness.

You wouldn't ask this about any other phenomenon besides the one you call "consciousness". For example, neither I nor even you can explain every single little step of the way the characters I am now typing eventually get to your browser. I agree that this makes the process "mysterious" in a very trivial way, but that's no reason to assume it's somehow nonphysical. We don't possess every single transitional fossil between humans and apes (nor could we ever), but this doesn't make the Creationists right. We have not mapped every single rock on Pluto, but we are still pretty sure Pluto is a rock (more or less). So why the special pleading ?

Expand full comment
Adrian Doan's avatar

The question is, why do you have the experience of typing those characters? The assumption is, there could be human-like agents that have the same capacity for planning and predicting but have no "experience" of being such an agent.

Expand full comment
Bugmaster's avatar

My point is that there not only "could be", but definitely are such agents. We know this. Dogs don't have the same personal experience that humans do; neither do spiders nor whales nor Waymo robotaxis. Neither do other humans, in fact; to use a crude example, if you are male then some of your physical experiences probably differ from those of your female friends (and vice versa). But so what ? Yes, this is interesting, but is it important ?

I realize that you explicitly specified "human-like" agents, but my point is that the distinction is not terribly meaningful. There's no such thing as a uniquely ineffable conscious experience that only humans get to have; it's just a spectrum (and a pretty wide one at that). By analogy, human bipedalism is technically quite unique (no other animal on earth has those exact walking mechanics) but also unremarkable (lots of animals can walk in lots of different ways, and so can robots).

Expand full comment
Adrian Doan's avatar

Most of that seems tangential. Your original comment said "the "hard problem of consciousness", qualia and all, is a philosophical red herring". Concerning consciousness, I don't have to insist that "only humans get to have" it, but the question remains whether there is something it is like to be a human. And if people are confused what that means, then the normal response is to describe a p-zombie: a human-appearing agent with all the responses of a human, and which can plan and make predictions (incorporating itself as a causal agent in those predictions), but which has no experience of *being* such an agent.

The point here is that this agent fulfils your requirements of being "capable of independent locomotion and medium-to-long-term planning" and does "contain some kind of a sensory suite that feeds information into a continuously updated internal predictive model of the world", but this is not accompanied by qualia.

I don't see the relevance of the analogy.

(Note: I think this position is actually confused, so I'm playing devil's advocate here)

Expand full comment
Bugmaster's avatar

I would argue that an agent that could "plan and make predictions (incorporating itself as a causal agent in those predictions)" would (by physical necessity) have to possess a sensory suite connected to a self-referential predictive model of the world. The operation of this model is what we refer to when we reference "conscious experiences". This applies to humans, dogs, spiders, robotaxis, and everything else, though of course to a different degree (a spider's world model does not need to be nearly as detailed as a human's or arguably a robotaxi's). And of course every agent's model is different (to some extent but never totally), and the models do cluster into phylogenetic trees, so you could indeed say that "there's something it is like to be a human" (or a male human vs. female human, or a colorblind human vs. normal-vision-human, or mammal vs. arthropod, depending on where you want to draw that arbitrary line).

But when you postulate a p-zombie that can plan and make predictions (etc.) despite lacking "consciousness", what you are in fact postulating is a p-zombie that can plan and make predictions without possessing the only mechanism we know of (and arguably the only mechanism *possible*) that allows it to plan and make predictions. I don't see how that would work.

Expand full comment
beowulf888's avatar

But I do ask this question of other phenomena.

For instance, I ask why the universe happens to have 26-30 fundamental constants whose values, if they were slightly different, would have made life impossible. I ask why a measurement from a classical object or observer can collapse a quantum wave function. And, yes, I wonder why my fingers can dance on the keyboard and effortlessly type these words without my mind consciously controlling their movements. So, I think it's a very profound question of why we perceive qualia the way we do, and what is the element in our consciousness that's doing the perceiving, and that exclaims, "I love that rose's shade of red!"

And I'm not necessarily arguing for a non-physicalist explanation for consciousness, but for a physicalist explanation to be convincing, it has to be complete. Saying, well, that it's just a bunch of electrochemical impulses in our brains doesn't explain it. Moreover, exclaiming, "Gee whiz! Look at how your visual cortex lights up on this PET scan when you see red!" explains nothing except that your visual cortex is stimulated by red. And claiming that it's been explained just sweeps the problem under the rug. "Pay no attention to the man behind the curtain!"

Expand full comment
Bugmaster's avatar

> And I'm not necessarily arguing for a non-physicalist explanation for consciousness, but for a physicalist explanation to be convincing, it has to be complete.

But this is exactly the problem I'd pointed out before: you accept (however provisionally) the physicalist explanations for things like gravity and quantum wave functions (and many other things) despite these explanations being demonstrably incomplete (we have no theory of quantum gravity as of yet). So why not consciousness ? Or perhaps I'm wrong, and you do *not* accept the physicalist explanation for e.g. gravity ?

Expand full comment
beowulf888's avatar

If a theory is somehow incomplete, that suggests that it could also be wrong in some way. Although everyone accepts that current quantum theory gives accurate experimental predictions (and therefore the theory is true), many physicists consider it to be incomplete. For instance, the measurement problem is really a pretty big deal.

As for gravity, there are two explanations for it: Einstein's explanation that it's due to curvature in spacetime, and the explanations that attempt to shoehorn gravity into the Standard Model by quantizing it, using an as-yet-undetected boson dubbed the graviton. Under Einstein's explanation, gravity would not be a force like electromagnetism or the strong and weak forces. The simple experiment of dropping your iPhone (into something soft, please!) while running an accelerometer app will show that its measured acceleration is zero during free fall. It's only when it's resting against a surface that it detects an acceleration. If you were to drop a magnetometer into a strong magnetic field, it would register the strength of the magnetic field all along its way. So the "force" of gravity behaves differently from other types of forces.

One can shrug one's shoulders and say, I'm happy with quantum theory and the Standard Model as they stand, and I'm happy with Einstein's GR. But that attitude doesn't push our frontier of knowledge outward against the force of complacency.

Expand full comment
Bugmaster's avatar

> If a theory is somehow incomplete, that suggests that it could also be wrong in some way.

True, but this applies, without exaggeration, to literally all of our current theories. And yet, as you'd mentioned, we are happy to use them to make very accurate predictions, and we're not in any rush to declare the underlying phenomena non-physical and/or ineffable, despite our ignorance of their exact mechanics. So why should consciousness be any different ?

Expand full comment
FLWAB's avatar

>The answer to the question "Is the redness I see the same as the one you see ?" is "Pretty much yes, because we all have the same eyes and brains, unless perhaps you're colorblind".

The trouble is that we have no way of demonstrating that fact. I cannot observe your experience, and you can't observe mine: which is why we can only find out if someone is colorblind if they tell us they are. We can't look at their brains and go "Ah, see, his experience of red is the same as his experience of green, he's red/green colorblind". We have to show him a bunch of pictures and ask him instead, because we can't observe his experience.

Expand full comment
Bugmaster's avatar

> which is why we can only find out if someone is colorblind if they tell us they are.

That is not true, as there are common diagnostic tests for colorblindness. They work on humans as well as e.g. lab mice (IIRC). Furthermore:

> We can't look at their brains and go "Ah, see, his experience of red is the same as his experience of green, he's red/green colorblind".

Actually we can: https://www.nature.com/articles/s41597-025-04511-0

Expand full comment
FLWAB's avatar

We do have common diagnostic tests for colorblindness: they consist of showing someone an image and asking them what it looks like. We have to ask because we can't observe their experience directly.

That nature article you link to does not say what you think it says.

For one thing, the article doesn't come to any conclusions. It's just about a dataset they generated by having people in an MRI while they did a task where they compared different colors to each other. At no point do they say "And we discovered how to compare people's internal experiences of color to each other!" They don't even say "When participants were looking at the same color they had the same brain activity in the same region". They just say "Here's our dataset, here's how we got it, feel free to use it for your own research."

The fact is we cannot directly compare people's experiences to each other, because the only experience we can observe is our own experience.

Expand full comment
Bugmaster's avatar

> We have to ask because we can't observe their experience directly.

Not really. I mean, yes, in diagnostic tests you have to ask, because the purpose is medical diagnosis and asking is the most efficient way to achieve that. But you could also place a safe in the room, tell the subject that if he can open it he can keep the $100 inside, and stick a post-it note on the safe that says "the combination is 7659843" in red-on-green font. I bet that you could very quickly detect which subjects are colorblind by using this method, without ever having to ask them about their subjective experience. The same kind of test works on nonhuman animals who cannot talk.
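You can even simulate the logic of that test in a few lines (toy Python with made-up RGB numbers, not real color science): model red-green colorblindness as collapsing the red and green channels, and the note's contrast simply disappears.

    # Toy simulation: red text on a green background is high-contrast for a
    # trichromat, but near zero once red and green are collapsed together.
    # The RGB values and the "collapse" rule are illustrative only.
    text_pixel = (200, 40, 0)    # red-ish ink
    ground_pixel = (40, 200, 0)  # green-ish background

    def normal_contrast(a, b):
        # A trichromat can use the difference in every channel.
        return sum(abs(x - y) for x, y in zip(a, b))

    def deutan_contrast(a, b):
        # Collapse red and green into a single channel, then compare.
        collapse = lambda p: (p[0] + p[1], p[2])
        return sum(abs(x - y) for x, y in zip(collapse(a), collapse(b)))

    print(normal_contrast(text_pixel, ground_pixel))  # 320: easy to read
    print(deutan_contrast(text_pixel, ground_pixel))  # 0: invisible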

> At no point do they say "And we discovered how to compare people's internal experiences of color to each other!"

Actually they did something like this; see Fig. 3 and the accompanying explanation in section 2-1-4.

That said, I'm starting to think that maybe your objection is more fundamental:

> The fact is we cannot directly compare people's experiences to each other, because the only experience we can observe is our own experience.

So even if an article comes out tomorrow identifying the exact structures in the brain that are responsible for color perception, you'd say something like, "so what, this is just brain activity, it's not subjective experience". And even if we somehow produced a fully functional model of the brain (which is admittedly unlikely to happen anytime soon) and could run it through the pathway of color perception from start to finish, you'd say "so what, this is just a model of the human brain, it's not subjective experience". If so, I don't see what you'd be basing your conclusion on other than some *a priori* assumption that consciousness is non-physical; if not, then what would you say ?

Expand full comment
FLWAB's avatar

Your test would work, but it wouldn't work by observing their experience of color. Just their behavior, since their behavior is observable and their experience is not observable.

>Actually they did something like this; see Fig. 3 and the accompanying explanation in section 2-1-4.

That figure does not deal with the fMRI data; it's comparing the answers that participants provided about color similarity. In other words, did the participants agree about which colors were the same and which were different, how much did they agree, etc.

>So even if an article comes out tomorrow identifying the exact structures in the brain that are responsible for color perception, you'd say something like, "so what, this is just brain activity, it's not subjective experience".

If you could use that brain activity to demonstrate that two people were having the same experience: that when they both looked at the same shade of red they both experienced it as the same shade of red, then I'd say that would count. But I'm not sure if it's possible to ever reach that because you'd need to be able to quantify their experiences in order to compare them. How could you prove that the same brain activity led to the same experience? I don't see how that would be possible: even if they both say that it is red, how do we know that one of them doesn't experience red the way I experience green?

Expand full comment
Bugmaster's avatar

> Your test would work, but it wouldn't work by observing their experience of color. Just their behavior, since their behavior is observable and their experience is not observable.

Right, so you are proposing that the test subjects (be they humans or mice) would be able to devise and execute moderately complex plans (e.g. running a maze or pushing buttons to open a safe) whose success hinges on their ability to perceive color -- without possessing the subjective experience of color. On the one hand, I will grant that perhaps this is in some way logically possible. On the other hand, what does this proposed "subjective experience of color" (as distinct from anything at all we could possibly ever observe) do for you, intellectually speaking ? I would claim that the answer is "nothing", unless you want to propose some sort of radical solipsism (so, especially then).

> That figure does not deal with the fMRI data, it's comparing the answers that participants provided about color similarity.

Ok, fair point; they mention the similarity of fMRI results in section 2.2.2, but their own description of the results is rather tepid. That said:

> If you could use that brain activity to demonstrate that two people were having the same experience: that when they both looked at the same shade of red they both experienced it as the same shade of red, then I'd say that would count.

You don't need brain activity for this, you just need color swatches. As it turns out, color perception varies across genetics as well as culture; IIRC Asian (especially Japanese) people can discern more shades of green/blue than we Westerners can, more so than would be accountable merely by genetics (though I could be misremembering). Nonetheless, [healthy] human color perception is fairly uniform across all cultures (plus or minus some fine shading variations).

> How could you prove that the same brain activity led to the same experience? I don't see how that would be possible: even if they both say that it is red, how do we know that one of them doesn't experience red the way I experience green?

This question is nonsensical to me. There's nothing special about the words "red" or "green". The reason we know that you and I both perceive red and green the same way (more or less) is because e.g. when presented with a full rainbow (actually a color space) of color swatches, and an object that emits a certain combination of light frequencies, we both point to (roughly) the same swatch. You could call it "red" or "красный" or "#ff0000"; and you can trace the exact biological pathways that make it possible for us to point to the same swatch; but even in the absence of any biological testing or indeed spoken language it would still be obvious that we both can correctly identify the same swatch -- and a person who is red-green colorblind could not do this.
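Here is a toy version of the swatch test (Python, with encodings made up purely for illustration). Even an observer whose internal encoding is systematically inverted still points at the same swatch, which is exactly why I say the "my red is your green" question has no behavioral content:

    # Toy swatch test: two observers with different internal color encodings
    # still pick the same swatch. The encodings are entirely made up.
    swatches = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}
    stimulus = (240, 30, 20)  # light coming off the object

    def observer(encode):
        # An observer picks the swatch whose encoding is nearest to the
        # encoding of the stimulus: a purely behavioral criterion.
        def pick(color):
            return min(swatches, key=lambda name: sum(
                (a - b) ** 2
                for a, b in zip(encode(swatches[name]), encode(color))))
        return pick

    alice = observer(lambda p: p)                        # "normal" encoding
    bob = observer(lambda p: tuple(255 - c for c in p))  # inverted spectrum
    print(alice(stimulus), bob(stimulus))  # -> red red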

Expand full comment
Adrian Doan's avatar

We demonstrate that fact by participating in activities and conversations where we distinguish red.

>I cannot observe your experience, and you can't observe mine.

I'm not sure what you could mean by this. Do you call something "red" because you observe your mind and see it currently contains redness? (echoes of the homunculus)

> We can't look at their brains and go "Ah, see, his experience of red is the same as his experience of green, he's red/green colorblind". We have to show him a bunch of pictures and ask him instead, because we can't observe his experience.

In some sense, he also cannot look at his brain and say "ah, I see, my experience of red is the same as my experience of green, I must be red/green colorblind". And, as the OP notes, there is a sense in which we can actually do this (and in the same way that he could!).

"We have to show him a bunch of pictures and ask him instead, because we can't observe his experience." How did he learn that he was red/green colorblind?

Expand full comment
FLWAB's avatar

I call something red because I experience it as red. I have access to my own experience, but I don't have access to yours: all I can observe about you is your behavior.

>"We have to show him a bunch of pictures and ask him instead, because we can't observe his experience." How did he learn that he was red/green colorblind?

Someone showed him a picture and said "There's a number hidden in the dots, can you tell me what that number is?" And he couldn't see any number at all, just dots, and he said so. Then he was told "You're red/green colorblind. There are two colors, red and green, which you have heard of since you were a kid but you experience them as the same color." And then probably the guy thinks "Huh, that explains why everybody kept saying one apple was green and one was red when they basically looked the same to me. And why people call one of the traffic lights red and one green when they're practically identical. I guess people must be experiencing something different than I do!" Still, he has to take the other person on faith, since they can't give him an experience of red to compare his current experience to.

Expand full comment
Adrian Doan's avatar

Do you mean he has to take them on faith, because there could be a huge conspiracy aimed at convincing this guy everyone else can distinguish two colors that were indistinguishable? In that sense, I would agree, but say it is a weird way to use the word "faith", as if you were to say in ordinary life, "I'm not worried walking through doors because I have faith a hole in the earth won't open up on the other side".

Expand full comment
FLWAB's avatar

>Do you mean he has to take them on faith, because there could be a huge conspiracy aimed at convincing this guy everyone else can distinguish two colors that were indistinguishable?

Essentially yes, but more specifically he has to take it on faith because he can't experience the color red. People can tell him about red, but he can never observe red himself. There are things you can prove for yourself, everything else you have to take on faith.

Expand full comment
0xcauliflower's avatar

God bless you, FLWAB. Keep on fighting the good fight.

Expand full comment
Bugmaster's avatar

> Still, he has to take the other person on faith, since they can't give him an experience of red to compare his current experience to.

No, he does not -- depending on what you mean by "faith". There are all kinds of controlled experiments the colorblind person could stage to make the normal-sighted person demonstrate his alleged perceptive superpower. After the normal-sighted person does so, the colorblind guy would be justified in accepting the existence of the superpower (which the rest of us simply call "normal vision"), based on *evidence*. This is very different from the scenario where the normal-sighted person says "I can perceive things that you cannot" and the colorblind guy just says, "cool, I trust you bro".

Expand full comment
The Ancient Geek's avatar

So Mary would know what red looks like?

Expand full comment
beowulf888's avatar

> But these still fail to explain how and why we so obviously experience consciousness, not just in the sense of there being a mysterious redness of red, but in the sense where there’s “someone” “there” to appreciate it.

Well said. For some reason, I was under the impression that you took Dennett's eliminative materialist view of consciousness. Obviously not! I'll re-up my priors. ;-)

Expand full comment
AdamB's avatar
2d · Edited

I am concerned that I might in fact be a p-zombie. I find nothing mysterious whatsoever about the redness of red. I have never been able to understand what people mean when they say "the hard problem of consciousness". Apart from that and the word "qualia", I think I understand all the words and clauses in this post, but unlike most of Scott's writing, I can't really figure out what he's trying to say.

I have no desire to "discuss at length the fact that [I] possess" "qualia", and I have no evidence that my "internal narrative[] also represent[s]" this "fact".

(The stuff about the rose strikes me as weirdly focused on pixel grids and edges/features. Wouldn't it be simpler to just shine some diffuse red light into the eye and ask what color it is? Does this somehow relate to whatever it is about redness that non-p-zombies find so mysterious?)

Is there an experiment I can do to settle the question of whether I am a p-zombie or not? Does anyone have opinions about how likely it is that I am in fact a p-zombie? Would it be possible even in principle for someone to convince me that I'm not a p-zombie? Would anyone here be willing to try?

(Edit: after reading https://en.wikipedia.org/wiki/Philosophical_zombie I guess what I am is just a "physicalist".)

Expand full comment
Bugmaster's avatar

I agree with you, except I'd go one step further and claim that the "hard problem of consciousness" is just mental confusion. The reason you (and I) fail to understand it is because it makes no sense. It's basically a more sophisticated version of vitalism, and still persists to the modern day primarily because philosophers need jobs.

Expand full comment
Skull's avatar

Look for the looker. When you realize, after many hours of trying, that you have no idea what you're even looking for, then you'll understand what the hard problem is.

Expand full comment
Skull's avatar

It feels like you when "you" are looking at something. But we find no "you" when we look for "you." We don't even know what to look for when we look for "you." That's the hard problem. You aren't a p-zombie. You just haven't looked hard enough for the looker. When you actually look for who's looking, you'll see you don't find anything. Just more consciousness.

Expand full comment
The Ancient Geek's avatar

Can you give a full physical explanation of the redness of red?

Expand full comment
AdamB's avatar

The one in the post seems fine to me. I don't think anyone can give a physical description of anything that's 100% "full" in the sense of leaving no unanswered questions, so I guess in that sense the redness of red is subject to the same baseline "mysteriousness" of everything in the universe. I assumed that this wasn't what was meant though.

Expand full comment
Jack's avatar

lol A+ footnote

Expand full comment
Ralph's avatar
2d · Edited

If this discussion interests you, I would heavily recommend reading the later works by the philosopher Ludwig Wittgenstein (anything except the Tractatus). He doesn't write about consciousness or p-zombies per se, but he writes about how language works at a fundamental level. It's about exposing the limits of language, so it's more "statements dancing around an inexpressible point" and not a straightforward essay.

I think that language is fundamentally "external / social" and not "internal". Language cannot, even in theory, refer to "purely internal" facts. That's not to say that they don't exist, but just to say that they cannot be communicated. Words that seem to simply "refer to completely internal experiences" will mostly, upon further reflection, be recognized as serving social functions and be fully + accurately described in those terms.

Think of what it means for something to be a quale. How did you learn the word? I assume it was probably via phrases like "the red-ness of the red" (a tautology) or "the stinky-ness of the fart" (a socially established fact, which you are practically expected to affirm even if you can't perceive it [remember the story about the anosmic person making fun of their sister for "having stinky feet"? They just knew that it was a thing you do, even if they couldn't smell]). Think about it: how would you teach somebody else to correctly use the word "qualia"? Additionally, if somebody claims they are experiencing specific "qualia", how would you actually demonstrate that they don't?

I think, functionally speaking, the word "qualia" is a social marker that a certain claim is exempt from the normal process of justification. To say that a statement involves "qualia" is to tag it as "the kind of thing that we have decided to exempt from reason-giving". It's defined and taught via tautology (which is necessarily and always true) and socially accepted facts (which, practically speaking, don't require justification). Why can't the colorblind person see the green number in the red dots? Qualia! (They just can't, stop asking). Why don't I like the taste of anchovies? Qualia! (Stop asking, I just don't)

Language exists in the physical world, and is of the physical world. It's a collection of sound waves and marks on paper, plus a set of conditioned responses in certain mammals. Language exists because it contributes (in an evolutionary sense) to the fitness of its users, and it is selected upon in the same way a gallbladder would be. A language is something like "an organ of a culture / community"; it's functional and it serves purposes. It's not a "transcendent map of the universe", it's just a specific thing (albeit a complex one) that exists in this universe. The word "qualia" should be analyzed like a certain cell in an organ (how it relates to the whole language and culture in which it is embedded), and not like a wormhole from the physical world to the mental one (that perspective causes lots of unnecessary confusion and potentially some distress).

Expand full comment
JerL's avatar

This is a great point about the social nature of language, and I think indicates why the LLM example is so hard to get clarity on: there's obviously a social use for terms about internal states in a world where at least some people have them; so p-zombies who live among normal people will have reason to talk about qualia in the same way as normal people do, even if p-zombies who only ever interact with other p-zombies won't. But obviously, it is very difficult to infer things about the second case when we do in fact live in a world where people already have and talk about qualia.

To go back to the anosmia example: obviously in a world where everyone else refers to stinky feet, an anosmic will have reason to make that reference too--but presumably a society of anosmics would never invent the word "stinky", or at least, would use it in a very different way.

Expand full comment
Ralph's avatar

I think that even a world completely made of p-zombies would still develop a word that is functionally identical to the word "Qualia".

Imagine there's a tribe of p-zombies with a language. The culture of the tribe is such that you are expected to be able to justify things you say if you expect to motivate other people. P-zombies would come up with this; it makes language more useful in a Darwinian sense, so it would happen (in the same way that the molecules in a gallbladder would evolve to be regulated by other molecules in a certain way).

A certain tribesman observes a volcano explosion, and lava is slowly creeping towards the village.

They head back and say, "Run! Lava is coming!"

They're interrogated:

Q: "how do you know?"

A: "I heard a volcano explode!"

Q: "How do you know it was a volcano, and not a loud bird or something?"

A: "I saw lava coming out and creeping towards the village!"

Q: "How do you know it was lava and not some other liquid?"

A: "It was red and hot"

So at this point, just strictly following the formal pattern, you should be able to ask

Q: "how did you know it was red? How do you know it was hot?"

But clearly, it's not useful to continue past this point. The justification process has terminated here. It makes sense to have some kind of word to describe the kinds of claims where it doesn't make sense to keep asking for justification.

If this tribe of p-zombies advances to the point of possessing philosophers, who just think about and formalize and abstract things, It makes sense that the philosophers would look at this quality of certain statements. They might want to come up with a specific term to describe these statements. I'm claiming that in this context, the word they come up with would behave exactly the same as the word "Qualia". The word serves a social function, which is why it continues to exist.
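A toy sketch of what I mean (Python, purely illustrative): represent the tribe's claims as a chain of justifications that bottoms out in claims the community has agreed not to interrogate further. The terminating tag is doing exactly the work "qualia" does.

    # Toy model: a justification chain that terminates in claims exempted
    # from further reason-giving. Illustrative, not serious epistemology.
    claims = {
        "run, lava is coming": "I heard a volcano explode",
        "I heard a volcano explode": "I saw lava creeping toward the village",
        "I saw lava creeping toward the village": "it was red and hot",
        "it was red and hot": None,  # justification terminates: "qualia"
    }

    def interrogate(claim):
        while claims.get(claim) is not None:
            claim = claims[claim]
            print("...because", claim)
        print("(no further justification expected: 'qualia')")

    interrogate("run, lava is coming")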

Again, I'm not saying that there is no such thing as a subjective conscious experience. I'm just saying that the thing that you have in mind is fundamentally not accessible through language, because language is exclusively external and social

Expand full comment
JerL's avatar

I'm not certain they'd use those terms identically: it's much harder for me to imagine them asking questions like, "could my blue be your red" if there were no internal experience--the point is that even if it's not accessible through language, it can still influence the kinds of linguistic utterances we make. It might be a conceptual mistake to ask those questions, but I think the internal experience is what prompts us to make that mistake.

To possibly abuse Wittgenstein's example, if we all had empty boxes, we probably wouldn't think to ask each other, "is my beetle the same as your beetle?"

Expand full comment
Ralph's avatar

Yeah, I see your point.

They might have a word that has the same "functional use", but they probably wouldn't ascribe as much urgency or importance to it. The specific confusions that come up are easier to explain if every box does have something in it.

I just have this strong intuition that a world of zombies and a world of minded beings would be physically indistinguishable. While I agree that "the boxes have beetles" is a very parsimonious explanation, it seems possible to me that the urge to consider how the experiences of others might be different is just a way of analyzing / modeling intersubjective behavior.

When you're a child, you notice that some people like different things than you do. Why do they like different things? Why do you like the red car, but I like the green car?

When you discover that people actually do differ in their perceptive structure (ex: colorblindness or anosmia) you notice that it explains aspects of their behavior (they like different patterns than you, they don't mind being in an outhouse, etc ).

You also notice that there are some behavioral differences that you can't exactly account for (this friend with no specific perception impairments likes painting X better than painting Y, but your preferences are the other way around). When you ask them why they like painting X, they explain it with reference to Qualia ("I like it because it looks like my beetle!"). Imagine two people arguing if the same food dish tastes good, for example. At some point, they still don't understand each other but they can't explain any more.

It seems like the functionally useful "alternate mind models" (what if you can't smell as well as me? What if you can't tell the difference between red and green?) and the functionally useless ones (what if your blue is my red?) can't be obviously distinguished on their face without some analysis.

I think that you wouldn't get the same concern with "what if my blue is your red" If we lived in a world in which aesthetic judgments, or preferences in general, were much more uniform.

"Beatles are in the boxes" is a good explanation, and maybe my reasoning is a bit motivated, but I think "claims about Beatles are socially acceptable ways to justify behavioral difference" is another one.

Expand full comment
JerL's avatar

I think that all sounds right, except I just want to insist that it seems very likely to me that "claims about beetles are socially acceptable..." is itself a downstream consequence of the existence of qualia.

Like, I am pretty sure I had the "what if my red is your blue" thought as a child, just based on introspection, and with no (conscious) sense that I was implicitly thinking about value judgements or whatever.

It's also worth noting that we do make aesthetic judgements about things that we only dubiously have qualia of, at least in a direct way: "this story had a satisfying ending" is also irresolvable, rooted in a general feeling of satisfaction that is ultimately completely personal, but we still don't talk about "satisfaction" the same way we talk about "red" -- personally I am much more forceful about the idea that "red" is somehow "directly perceptible" to me, that there is "something that it is like to experience" it, etc., even though "this story is satisfying" is actually closer to the aesthetic disagreements I might have with others.

Expand full comment
Ali's avatar
2d · Edited

Thanks for this post Scott.

I resolve it as:

- An average human would absolutely say “I don’t know why, I just see there’s a rose in the picture”.

- But *I* would answer “I don’t know why, I just see there’s a rose in the picture -And- I insist there is a mysterious Red and Green qualia about what I’m seeing”

- I think the p-zombie would interpret the bitmap within 100ms, if that bitmap was a correct representation for how they normally receive visual data, and they have been evolved to interpret the data “well enough” and “confidently enough”, at 100ms

- The p-zombie absolutely could answer like an average human “I don’t know why, I just see there’s a rose in the representation”. They might add “And my confidence channel is really confident about redness and greenness being in the data representation at the 100ms point, by the way”. That second sentence is differently phrased from my second sentence, for what it’s worth.

I repeat that the only data I attest to is that *I* insist there is a mysterious Red and Green qualia about what I’m seeing. But I quickly follow this up with “I consider it a more reasonable belief that ‘all humans have this’ instead of ‘I am the only human that has this’”. I might extend this belief of qualia to other beings as far as I’m confident about the magic being about:

- my biological brain stem, or

- my biological neurons, or

- harnessing of consciousness-relevant physics, or

- structured information processing evolved within certain conditions, or

- structured information processing in general (but it’s anyone’s guess what the specific mapping is from the information to the qualia in a given structured information processing system)

Expand full comment
Signer's avatar

The Hard Problem is solved by panpsychism: "consciousness" in the Hard-Problem sense is existence (https://www.lesswrong.com/posts/KbRnXMoyBGrEu9kou/consciousness-actually-explained-ec-theory-1, https://consc.net/papers/panpsychism.pdf, https://consc.net/papers/combination.pdf).

If you replace "consciousness" with "existence" in the conceivability argument, you get "p-zombies don't exist". So the conceivability argument is either metaphysically or logically wrong, depending on whether you think it's logically possible to conceive of things that don't exist. And physical equations say that we live in a non-zombie world in the part where they say that they describe reality.

Expand full comment
Phil H's avatar

Panpsychism does resolve the hard problem, but in a spectacularly unhelpful way. “Water spigots are conscious” is too weird.

We can do better - not a lot, but a bit. If we take seriously the idea of “self-awareness”, we can develop an idea of consciousness that is closer to normal usage. 1) To be any kind of mind you must have the ability to sense/perceive/be aware of some external state. 2) To be a conscious mind, you must have the ability to sense your own internal states.

This gets us beyond the “rocks are conscious” weirdness of panpsychism, though it probably still implies that bacteria are conscious, which remains a problem.

Expand full comment
0xcauliflower's avatar

Besides bacteria you still have the problem of emergence: why is it LIKE something to sense some external state? And is a motion detector, for instance, conscious?

I think a little better is a kind of panpsychism where everything has a kind of brute capacity of consciousness which is only awakened to rich and lively inner life by complex inputs and outputs.

This doesn’t answer the question”why” problem still, unfortunately.

Expand full comment
Phil H's avatar

“why is it LIKE something”

No, that’s precisely the problem that my model answers. It’s like something because you perceive yourself sensing that external state.

Expand full comment
Signer's avatar

"Brute capacity" and "rich and lively inner life" are just high level subjective abstractions over different forms of consciousness. Panpsychism doesn't stop you from valuing some forms of consciousness (rich and lively) and don't care about others (rocks). The point is that there are no mysterious fundamental differences between them.

Panpsychism answers "why is it like something to be me" by "because you exist" and "why seeing red is like seeing red" by "<description of neural activity that allows you to have relatively distinct concepts of colors>".

Expand full comment
JerL's avatar

But doesn't it just push the problem down a level? Like, it seems unsatisfying and non-explanatory when a normal physicalist says, "it's just a brute, irreducible fact that certain physical systems have a conscious experience associated with them"--but pan-psychism makes the exact same claim, just without the restriction to "certain"--it makes the reduction of consciousness to fundamental physics easier, but just by pushing all the stipulated-brute-factness down to the lowest level.

I agree I find this a *little* more plausible, since if we're bound to accept irreducible facts (and we probably are) about the world, having them only occur at the lowest level feels better to me; but I'm not sure this is really principled.

Imagine I assert the theory, "everything that can be described by physics above the Planck size is conscious"--in a world where we never need to invent quantum gravity, this just *is* pan-psychism--there is no more "fundamental" layer beneath the layer of fields and such, so that is the most fundamental stuff, so if it's conscious, everything is. But if we need a theory of quantum gravity, then it's just a weird type of physicalism: consciousness is something that emerges from non-conscious components in a certain way. But the theory makes the same assertions about what objects are conscious! If you find it weird to say, "quark fields are conscious, as a brute fact" if you know there's something "lower down" than a quark field, why does it become more explanatory if you don't know about anything below quarks?

Expand full comment
Signer's avatar

The claim is not "it's just a brute, irreducible fact that physical systems have a conscious experience associated with them". It's "when you talk about conscious experience you talk about physical systems". The brute irreducible fact is that the universe exists. You don't assert new facts, you just discover new (to you) things about existence.

The problem with "everything that can be described by physics above the Planck size is conscious" is that you can't be sure you are above-quark, but you can be sure you exist - existence is special because it's the only non-epiphenomenal property you can be certain about, because cogito ergo sum.

Relatedly, the causal influence is also... worse? with above-quarkness. You kinda do say you are conscious because you are above quarks, but a below-quark simulation of you would say the same things.

And existence still causally influences consciousness anyway - non-existent above-quark things are still not conscious.

Non-strongly-emergent above-quarkness probably also doesn't satisfy other desired aspects of consciousness? The best I have now is that substitution into the conceivability argument works differently: "imagine zombie-world that is physically like ours, but everything is below-quark" is obviously logically impossible, but imagining non-existent worlds is maybe allowed.

More generally, panpsychism's claim is that you *don't need* anything besides "consciousness is existence" and physics to solve consciousness-related philosophical problems. Even if "consciousness is above-quarkness" can also work, you are still left with no philosophical problems and no new practical consequences.

Expand full comment
JerL's avatar

I don't think I disagree with any of that, I'm just pointing out that the sense in which panpsychism "explains" consciousness is a strange one: I agree that "everything that exists is conscious" is better than "[list of things] is conscious" even if, to the best of our knowledge, [list of things] comprises everything that exists. And I agree that substitutability is a key point here, but I think it reinforces my point: every time you learn something new about what exists, the actual content of the theory "everything that exists is conscious" changes, possibly dramatically. It's just very strange that the way in which the theory offers an explanation is completely independent of what the terms in the theory actually refer to! Most explanations don't have that quality!

That's the sense in which it feels like a "brute fact": it's asserted as true *independently of any actual theory of the world*. I agree that asserting a brute fact like that about a non-fundamental level feels worse, but not because it's *less explanatory*; rather, because it's *more arbitrary*.

But there may be brute fact assertions about certain physical systems being conscious that strike me as even less arbitrary than that, if there's something about those systems that seems to exhibit other properties we associate with consciousness.

Expand full comment
The Ancient Geek's avatar

Panpsychism is not a reduction to physics, because a reduction to physics would give you the ability to infer consciousness from physics. Instead, it is the assertion of a non-physical fact.

Expand full comment
Signer's avatar

You do get the ability to infer consciousness from physics - if physics says something exists, then it's conscious. That works because, as Chalmers puts it, panpsychism is broadly physicalist - it's not like (all) physicalists would object to the notion of an equation describing things that exist.

Expand full comment
Signer's avatar

Helpfulness doesn't matter as much as correctness - it's like saying "relativity is bad, because it doesn't allow warp drives".

I don't mind calling all mysterious parts of consciousness "existence" and leaving "consciousness" for ethically significant mind processes (with self-awareness or whatever). But the point is to stop saying "oh noes, consciousness is mysterious, we haven't even started to understand it".

Also, I don't really get why people like self-awareness so much - what's the evidence that humans invoke self-awareness when they just see a white wall without any internal dialog? And it can't just be about having more than one layer - rocks also have layers.

Expand full comment
Phil H's avatar

“Helpfulness doesn’t matter as much as correctness”

Sure. You’re right, and what I should have said was: panpsychism solves that problem at the expense of being incorrect.

Because it’s not true to say that water faucets are conscious; nor is it true to say that existence is the same as consciousness or a form of consciousness. Those words mean different things. If your theory implies that they’re the same, your theory has a lot of explaining to do.

“I don't really get why people like self-awareness so much”

I like it because it’s the answer to the question “why does X feel like something?” That’s the question most people who are interested in consciousness are grappling with, and it has an answer. The answer is: because you can observe your own internal states, so a single external perception (a quale) can set up multiple internal reactions. That’s what that feeling is. And rocks don’t have that.

Expand full comment
Signer's avatar

> Those words mean different things.

Like I said, I'm fine with changing words. But proponents of the Hard Problem would say that "consciousness" and "self-awareness" also mean different things - you can conceive of a zombie-world with self-aware but not conscious people, so self-awareness doesn't explain why we don't live in a zombie-world. And you can't be absolutely sure you are self-aware. So the point is that "existence" fits "consciousness" better than other properties, specifically in the context of the Hard Problem. And consciousness in the Hard Problem is not the same thing as consciousness in, for example, ethics - we already have the Easy/Hard Problem distinction.

So, what are your actual requirements for a valid theory? That consciousness means exactly the same thing in both the Easy and Hard Problem contexts?

Or do you just not care about the Hard Problem? Then I don't think you should object if a solution to it were found, even if it turns out it was about existence instead of consciousness.

> your theory has a lot of explaining to do

I don't see why you would need a lot - it's not a complicated argument: existence is the only non-epiphenomenal, physical, self-evident property; it fits into the conceivability argument; and it doesn't introduce new physics, so it doesn't prevent you from treating self-aware systems differently, if you want. But if you need more, I gave the links.

> I like it because it’s the answer to the question “why does X feel like something?”

Why does staring into a white wall feel like something? What part of the experience of staring into a white wall can't you implement without your mental processes realizing you are observing your internal state? And if all you need is just a connection from your inner neurons to your outer neurons to your eyes, then rocks also have a connection from inner atoms to outer atoms - why doesn't outer atoms influencing inner atoms count as inner atoms observing outer atoms?

The point is that narrowly defined self-awareness is not obviously necessary for the mental processes in humans that are intuitively labeled as conscious, but broadly defined self-awareness makes 20-line Python programs (and arguably rocks) conscious - it also has a lot of explaining to do, as you say.
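
For concreteness, here's the kind of 20-line Python program I mean (a toy sketch of my own, nothing canonical): an outer layer registers stimuli, an inner layer observes and reports on the outer one, which is all the broad definition seems to require.

    # A toy "broadly self-aware" program: it observes its own internal
    # state and reports on that observation. Nothing here plausibly feels.
    class Mind:
        def __init__(self):
            self.inner_state = {"percept": None, "reaction": None}
        def perceive(self, stimulus):
            # "Outer" layer: register an external stimulus.
            self.inner_state["percept"] = stimulus
            self.inner_state["reaction"] = f"registered {stimulus!r}"
        def introspect(self):
            # "Inner" layer observing the outer layer:
            # self-awareness, broadly defined.
            return f"I notice that I just {self.inner_state['reaction']}."
    mind = Mind()
    mind.perceive("a white wall")
    print(mind.introspect())  # a report about its own internal state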

Expand full comment
Phil H's avatar

I think the distinctions you’re making there are too fine and mixed up. Look, here’s one of the ways Chalmers formulates the hard problem:

“Why should physical processing give rise to a rich inner life at all?”

The ability to observe yourself answers this question. It gives rise to a rich inner life because a single external stimulus can create multiple perceptions: perceptions of the external stimulus (qualia) plus perceptions of your own inner states. That’s what richness is.

Expand full comment
Signer's avatar

Chalmers wouldn't say that the ability to observe yourself answers this question. He would ask why multiple perceptions would feel like something at all - why wouldn't they feel the way zombies with self-awareness would feel: like nothing?

And I still don't get what's supposed to be rich about the experience of staring into a white wall.

Expand full comment
The Ancient Geek's avatar

>Helpfulness doesn't matter as much as correctness

But there's no direct way of establishing correctness, so we are left with proxies like simplicity, consistency... and usefulness.

Expand full comment
The Ancient Geek's avatar

The hard problem can be solved by abandoning physicalism, and there is more than one way to do so. Panpsychism is not a unique solution.

Expand full comment
Signer's avatar

It's the uniquely best broadly physicalist, causally closed, non-epiphenomenal solution that preserves the most important things about consciousness, and it wins on simplicity over equivalent solutions. What solution do you think is better?

Expand full comment
0xcauliflower's avatar

What would these p-zombies say when they talk about dreams?

Expand full comment
darwin's avatar

>Suppose the observer shows the p-zombie a picture of a rose, and the p-zombie describes it as red. If the observer asks the p-zombie to recount how their reportable mind came to know that it was red, what might they answer?

I feel like this is sort of conflating the two types of P-zombies: the traditional P-zombie who is the only P-zombie in a 'normal' society (P1), and the here-proposed P-zombie in a society where only P-zombies have ever existed (P2).

Of course P1 would engage with the terminology and discourse of its qualia-feeling society and report that it has qualia too; that's a premise of the thought experiment.

But why would 'how did your reportable mind come to know that it was red' be a common everyday question in P2 society, such that it needs a simple few-word answer?

I can imagine that maybe P2 society develops neuroscientists who care about this question (I don't think that's a given, actually - neuroscience grows out of psychology, which grows out of philosophizing about mental states, and why would their society do that in the first place? - but I can imagine it). But I don't see why the average P2 laborer working at a box factory spends enough time asking 'how do you know that is red?' that they need to make up compact words like 'qualia' to account for it.

See, in the original P1 thought experiment, it's a premise that the P-zombie acts exactly like everyone else, and it's plausible because humans mirror the behaviors of those around them. But here you are abandoning that premise, along with the possibility of mirroring non-P-zombies that would make the behavior likely to occur anyway, and I think at that point there's actually good reason to think P-zombie society would evolve pretty differently.

Expand full comment
JerL's avatar

Very much agree with this. I think it would be interesting to find examples where actually existing humans don't have snappy answers to questions of the form "how did your reportable mind come to know that [X]" for various properties X, and see what people's intuitions are on whether or not we have qualia about X. Someone above gave an example sort of like this, of 5-d space. I also think subitizing might be a partial example here: I can sometimes "see the fiveness of five objects", but I don't think of this as me seeing "the qualia of something being fivefold".

Expand full comment
Quiop's avatar
2d · Edited

Would the following serve as a concise statement of your position?

"If any entity (i) believes it possesses an internal representation of a red rose, and (ii) lacks detailed knowledge of the structural basis for its own representational capacities, then such an entity will have a tendency to describe those capacities in terms of 'qualia.'"

Expand full comment
Jay's avatar

I think the challenge these discussions encounter stems from a form of trying to have your cake and eat it too.

The flaw in the P-Zombie thought experiment is the proposition that there could exist a creature where everything about it is exactly the same but it wouldn’t have qualia - meaning all of the atoms and molecules and cells and neurons are identical to a qualia-possessing human. But if this were the case, my argument is that it would be conscious. Such a P-Zombie could not exist in this universe. You want to remove consciousness/qualia but keep everything else, and you can’t; but this is where all the other arguments in this article and others derive from, and why, at least in this case, we arrive at zombies talking about qualia. If you want to remove qualia, you have to actually change something about the zombies or about the universe they are in.

You can try to point this out by applying the same thing to other aspects. How about a blind zombie, but everything about it is the exact same? They still have eyes, the optic nerve is intact, their brain’s visual processing center works fine - everything is exactly the same as their visually capable counterpart, except they can’t see. I can create an absurd number of these examples. If you want to argue that you cloned me down to the atom, and every atom, cell, and molecule in my clone works perfectly identically to mine, but my clone is somehow blind, then you have to accept that something has changed. Consciousness is no different.

The PZ thought experiment never needed to use zombies, especially today. A much better version would be robots that from the outside look human and talk like humans, but given how they were designed and how they are made, it is not hard to imagine that they are emulating and may or may not “know” or be able to explain why they say the things they do. But that’s a different discussion.

Expand full comment
FLWAB's avatar

>The flaw in the P-Zombie thought experiment is the proposition that there would exist a creature where everything about it is exactly the same but it wouldn’t have qualia. Meaning all of the atoms and molecules and cells and neurons are identical to a qualia possessing human. But if this was the case my argument is that it would be conscious.

The whole point of the P-Zombie thought experiment is to say "But why though?" Like, with the blind zombie experiment we can actually answer "Why?" if we ask why they would be able to see if all that stuff was intact. But why should an exact copy of you be conscious? Because you are? But we don't know why you're conscious, so we don't know why the P-Zombie should be either.

Expand full comment
Jay's avatar

I might be misunderstanding you but the blind example isn’t explained if everything is identical.

And the experiment intending to say “but why though?” is my whole point. It fails to give an answer because you can’t change nothing and get a different result; you need a thought experiment where you change something (which is why I offered robots). The p-zombie fails because it doesn’t actually lead to any conclusions about what is different, especially when we are stipulating that nothing is different.

Its failure, in my view, points to the idea that consciousness is inseparable from the physical world, physics, and matter - be it panpsychism or whatever the solution might be.

Expand full comment
FLWAB's avatar

I understand that the blindness in the blind example isn't explained if everything is identical, which is the point. We can point to the optic nerves and whathaveyou and say "If you have these cells, arranged in these organs, in a living creature, then sight is going to work, because when photon A hits receptor B it creates electrical potential C...", and if we understand the entire system then we understand that it would be a contradiction in terms to have the same system but a different result.

Yet we can't do that for consciousness. There is no logical reason that electrons moving through neurons should produce experiences. You can't look at an MRI and say "Ah, see, that's the spot where the action potential activates an experience". All we can do is say that when activity happens in certain parts of the brain, most people report having certain experiences. But *why* should changes in voltage cause an experience? We can see why a photon striking a receptor cell in the eye must create a certain reaction, but why should that reaction result in the experience of seeing something? Is experience even necessary? And the P-Zombie thought experiment says "No, experience is not necessary for the brain to do what it does and the organism to act the way it does."

Imagine that someone theorized that when a photon hits a receptor cell an action potential is created that travels down the optic nerve *and* in a locked room the walls change color from red to blue. Someone could reasonably look at that theory and say "Why include the locked room bit, it is not at all necessary to explain what happens in the nerve cells." And they would be right! By a similar token, if we say "The signal travels down the optic nerve into the visual processing center and causes these physical reactions, *and* an experience of vision is produced" someone might reasonably reply "Why include the experience bit? It is not at all necessary to explain what happens in the nerve cells, or how the organism behaves." It's as superfluous as a locked room changing color. Yet we find ourselves in the metaphorical position of living in the locked room, and noticing that the walls *do* change color. We do have an experience of vision, even though that experience is superfluous to understanding how the brain processes vision. You could imagine someone not having an experience of vision while being physically identical to yourself, in the same way you can imagine someone having a visual system that works without a locked room changing colors. The only reason we would ever posit the hypothesis that experience exists is because we can directly observe it.

Expand full comment
Oleg S.'s avatar

Hi Scott,

I think we have learned a lot recently about visual qualia by working with diffusion models. I think they are much closer to how humans actually remember information than your bitmap analogy. Remembering an image = storing image embeddings + storing image metadata (the prompt). I'm not sure how relevant this is to p-zombie arguments, but to me qualia are "irreducible internal representations of an image / color that no one outside has direct access to", which is exactly what image embeddings are.
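
To make the analogy concrete, here is a toy sketch (my own illustration, assuming the open_clip package; remember() and "rose.jpg" are invented for the example) of storing a memory as embedding-plus-prompt rather than as a bitmap:

    # Remembering = compact embedding + text metadata, not pixel storage.
    import torch
    import open_clip
    from PIL import Image

    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-B-32", pretrained="openai")

    def remember(path: str, prompt: str) -> dict:
        """Store an image as an embedding plus its metadata (prompt)."""
        image = preprocess(Image.open(path)).unsqueeze(0)
        with torch.no_grad():
            embedding = model.encode_image(image)  # a 512-d vector, not a bitmap
        return {"embedding": embedding, "prompt": prompt}

    memory = remember("rose.jpg", "a red rose")  # "rose.jpg" is hypothetical

No one outside has direct access to the embedding's coordinates - they only ever see what the system does with them - which is the sense in which it plays the role I'm calling qualia.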

Expand full comment
David Spies's avatar

I think you and Eliezer agree on everything?

You both agree Chalmers is dumb and the idea of qualia as being this extra-physical special thing is dumb.

Eliezer goes on to say, "But if I have to define qualia as a real thing in category-space given no extra-physicality, I'll define it like _this_ and conclude that we have it".

You go on to say "But if I have to define qualia as a real thing in category-space given no extra-physicality, I'll define it like _that_ and conclude that we don't have it".

Expand full comment
Adrian Doan's avatar

It's not clear to me this argument is coherent. You start by assuming a "race of p-zombies who are exactly like humans, except [...] they don’t have conscious experience." Aren't you just assuming then that "qualia are in some sense extra-physical"?

Expand full comment
Padraig's avatar

Does the p-Zombie know it's a p-Zombie? It seems totally consistent that the p-Zombie behaves like any other human, including answering questions *as if* it were conscious. ChatGPT is already capable of this sort of response. (This is basically the plot of Blade Runner?)

Expand full comment
SorenJ's avatar

So the part here that you believe is potentially novel (to your knowledge) is the extra piece where you say, "we can't assume that P-zombies would talk about qualia -- let's try to derive the fact that they do"?

My first instinct here is that this is easy to do: all observable external behavior of humans is the result of the laws of physics being run forward in time. The currently known laws of physics are sufficient to explain all human behavior, and the current laws of physics make no reference to consciousness.

Expand full comment
TakeAThirdOption's avatar

>I’m not even sure which direction to update on this. If you don’t need consciousness to claim to have qualia, this is good news for epiphenomenalism and other positions where consciousness doesn’t interact with the physical world (and therefore cannot cause our claims that we have qualia).

I find this wording to indicate confusion:

"consciousness doesn’t interact with the physical world"

It’s not about whether that is true or not, it’s about whether it makes sense or not. Would anyone say "hardness doesn’t interact with the physical world"?

Why are so many people talking about consciousness as if it were a thing? It’s a property of a thing. Humans *are* conscious (of something (during certain times)), and saying they possess consciousness (of something, during certain times) is just a different way to say that; it does not mean that they possess it like they possess lungs.

And so the question of whether "consciousness interacts with the physical world" is really "does a system that is conscious act differently than it would if it weren't conscious -- and in what other properties would it differ if it weren't conscious?"

That's a question that at least makes sense.

Expand full comment
Edwin's avatar

Feels like if a p-zombie is able to reason about experiences, rather than just answering “because it is red”, then it does have qualia and isn’t a zombie.

Expand full comment
tomdhunt's avatar

I think the answer to this dilemma is that "entity that behaves exactly like a human but doesn't have qualia" is simply a contradiction, and having qualia is in fact causally necessary to humans behaving as they do.

Expand full comment
The Ancient Geek's avatar

So, are qualia physical?

Expand full comment
Ryan W.'s avatar

1. I find the explanation of the theory of a schizophrenic mind very interesting. I haven't heard this before.

2. This is orthogonal to your point, but... I could draw a rose faithfully if I were looking at it. But I could not replicate it from memory. If we envision several layers of neurons where the bottom is 'every cone and rod that is firing' - basically raw pixelated data - and the top level is the statement 'this is a red rose,' with little additional detail, my memories seem confined to the top layer of neurons. It feels like I can see edges and details in my mind's eye, but when I try to 'focus in' on those details it's like getting a Google Maps error saying 'information not available.' I have to deliberately work to include those details as their own 'top-level neural net memory' if I want them.

3. Also orthogonal to your point: I don't think that the cone-pigment complex is called rhodopsin. That's the rod pigment complex. The cone pigment complex might be called photopsin?

Expand full comment
Tanner's avatar

Yes, it should be the cone opsins (specifically the L-cone [and possibly the S-cone] opsin activity in comparison to the M-cone opsin activity). We get color information by comparing the relative activity levels of the different cones. Rhodopsin is (as the name implies) for rods, which are primarily used for low-light, scotopic vision and provide brightness information (though they can mildly influence hue perception under mesopic lighting conditions, while still depending on cone-signal-dominated pathways).
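
If it helps, here is a toy illustration of that comparison idea (the function and numbers are entirely invented; real retinal opponent coding is much messier):

    # Opponent coding sketch: hue falls out of *differences* between cone
    # activations, not out of any single cone's activity level.
    def opponent_signals(L, M, S):
        red_green = L - M              # L vs. M carries red/green information
        blue_yellow = S - (L + M) / 2  # S vs. L+M carries blue/yellow information
        luminance = L + M              # photopic brightness is cone-dominated
        return red_green, blue_yellow, luminance

    print(opponent_signals(L=0.9, M=0.3, S=0.1))  # a strongly "red" stimulus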

Expand full comment
Noah Martin's avatar

You may be interested in the Meta-Problem of Consciousness, which you seem to be creeping towards; check out this paper by Chalmers: https://philpapers.org/archive/chatmo-32.pdf

The gist of the issue is that a p-zombie would report on consciousness exactly like we would, even to the point that a p-Chalmers would start discussing the problems of consciousness. But if the physical brain is causally complete--as even most dualists believe these days--then this part of human psychology can be explained in purely physical, scientific terms: the reason we report consciousness is an "easy" problem and does not involve hard consciousness at all, but rather facts about information processing, like what you thought of here. Because we report consciousness for purely physical reasons, it would be a **massive coincidence** if our reports happened to be accurate about the existence of non-physical consciousness; consciousness exerts no influence over the reports, so it's crazy that they could end up accurately representing it - just as it would be a massive coincidence for a medieval peasant to perfectly describe the antipodes without ever having had any contact with or information about the place.

From this, Chalmers formulates an illusionist (Daniel Dennett's view) debunking argument:

1. There is an explanation of our phenomenal intuitions that is independent of consciousness.

2. If there is an explanation of our phenomenal intuitions that is independent of consciousness, and our phenomenal intuitions are correct, their correctness is a coincidence.

3. The correctness of phenomenal intuitions is not a coincidence.

_________________

4. Our phenomenal intuitions are not correct
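
For readers who like the inference spelled out: the argument is a simple modus tollens over premises 1-3. A minimal formal rendering (my own, in Lean, with the propositions left abstract):

    -- From (1) Independent, (2) Independent ∧ Correct → Coincidence,
    -- and (3) ¬Coincidence, conclude (4) ¬Correct.
    example (Independent Correct Coincidence : Prop)
        (h1 : Independent)
        (h2 : Independent ∧ Correct → Coincidence)
        (h3 : ¬Coincidence) : ¬Correct :=
      fun hc => h3 (h2 ⟨h1, hc⟩)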

Of course, other people say that (1) our reports do accurately line up with consciousness, (2) this would be an impossible coincidence, (3) thus it's not a coincidence: (4) it proves that God must have lined things up. This is roughly the basis of the "Psychophysical harmony" argument that Bentham's Bulldog is such a fan of.

Expand full comment
0xcauliflower's avatar

Thanks for this great comment.

Expand full comment
Kai Teorn's avatar

"But these still fail to explain how and why we so obviously experience consciousness, not just in the sense of there being a mysterious redness of red, but in the sense where there’s “someone” “there” to appreciate it."

I think a similar technique would work for this as the one you just used for qualia. E.g.: “My reportable mind perceives a something-something that conveys an irreducible sudden intuitive sense of there being a someone to appreciate it.” Or something :)

To me, though, it is more interesting how self-deprecating you felt the need to be in your footnote. I don't think qualia or consciousness will ever be "solved" in the way e.g. Fermat's theorem is solved, but the social aspect of this whole thing is already illuminating.

Expand full comment
Shai Natapov's avatar

>I have trouble thinking of this packet as anything other than already organized in a 2D grid.

I believe this has been studied empirically and proven wrong. When people are exposed to an image of a rose for a few milliseconds, they don't immediately perceive the entire image. Each 'packet' contains very little information - for example, the color of the rose (but not any background colors), or the shape of one of the petals. Predictive processing creates the illusion that we are perceiving the entire image (all the petals, multiple colors) at once.

I learned this in a book on consciousness by a scientist who studied vision but I can't remember the name...

Expand full comment
dionisos's avatar

> But it doesn’t fully defuse the intuitive inelegance of these positions, where it’s a baffling coincidence that we both claim to have qualia, and actually have them.

There is a way to explain it, which I think is quite convincing: meaning and qualia are interconnected.

What the p-zombie says doesn’t have meaning for the p-zombie, because it has no mind; there is just structural correlation between its words and its model of the world.

Add qualia to the mix, and now you have the same internal structure, but the words are associated with their meaning in the mind.

So there is no coincidence: the correlation between qualia and our expressions about them is caused by the fact that it is qualia which give meaning to those expressions.

Expand full comment
ingx24's avatar

I think the correct move here - and what someone like Chalmers would say, at least starting in the mid 2000s, as opposed to the 90s - is that the p-zombie isn't actually "representing" or "using an internal coding scheme" in the same way that we do. In the case of the p-zombie, those are convenient shorthand for whatever physical activities are going on inside the zombie, but it's entirely possible to tell the story using purely physical language without making any reference to cognitive terminology. This is roughly the idea behind a distinction Chalmers makes in his paper "Consciousness and its Place in Nature", under the "Type-A Materialism" section (https://consc.net/papers/nature.pdf): the distinction between *functional* representation (which is just a matter of displaying the correct behavior) and *phenomenal* representation (which involves conscious experience of the object being represented). Chalmers's early work drew the line of non-physicality at the purely phenomenal (where cognitive/representational properties were still physical), which led to a lot of paradoxes like what you're describing; by the mid-2000s his position evolved to where he now considers the cognitive and the phenomenal to be a single package so to speak. (See "The Representational Character of Experience", https://consc.net/papers/representation.pdf)

On top of this, the idea of a zombie is taking the physicalist's commitment to physical causal closure as a premise: *if* you accept causal closure, *then* the epiphenomenalist idea of a zombie is what you end up with. But this leads to the strange idea that our experiences *just happen* to line up (by psychophysical law) with the correct environmental stimuli and behavior, which contributes to the kinds of paradoxes described in this post. Chalmers himself has moved away from epiphenomenalism for that reason (according to a 2017 Reddit AMA that I don't have the energy to look for at the moment). If you instead accept something like Russellian panpsychism or interactionist dualism, you have a more sensible picture where the p-zombie has "causal gaps" in its functioning that would normally be filled by phenomenal properties, and so the phenomenal feel of redness ends up properly being causally responsible for our reports of phenomenal redness in the real world (whereas in the zombie world, the reports of phenomenal redness are caused by something else, or nothing at all).

Hopefully this clears things up; this has been something I've specialized in (non-academically) for the better part of 13 years now.

Expand full comment
0xcauliflower's avatar

Thank you for this comment! There are a lot of baffled confidently uninformedly wrong comments here. Always is with this topic!

Expand full comment
ingx24's avatar
2d · Edited

I genuinely think at this point that the zombie thought experiment does more harm than good - people get so hung up on the specifics of the thought experiment that they miss the underlying point it's trying to make, which is actually an extremely simple and obvious point - one that, according to some famous studies, the majority of children are able to understand, and that materialists have likely managed to train themselves out of. That point being: the structures and qualities associated with consciousness simply don't show up in a physical account of the world. They're not something you could, even in principle, see or touch.

Expand full comment
TakeAThirdOption's avatar

> The structures and qualities associated with consciousness simply don't show up in a physical account of the world.

That's like saying that in a description of the mass distribution in a certain space the distribution of electric charge doesn't show up.

Of course it doesn't, if you choose not to talk about it.

But both are connected. Just like consciousness and mass. Or do you believe that where there is no mass there is someone conscious of anything?

Only things with mass are conscious. Or alive, or hard, or many other ways of being.

Expand full comment
ingx24's avatar

I mean... yeah - mass and charge are distinct properties from each other, just like physical properties and mental/phenomenal properties are distinct from each other. It may very well be that only objects with mass are conscious, but consciousness still remains a distinct property nonetheless. That's all the zombie argument has ever purported to establish - that consciousness is a further property beyond physical properties.

Expand full comment
Daeg's avatar
2d · Edited

This seems even weaker than the zombie argument in that it’s really question-begging. How do we know that the structures and qualities associated with consciousness don’t show up in a (adequate, complete) physical account of the world? Beyond just stipulating that they don’t, what I think the zombie argument purports to add is “because we can imagine one without the other”. But that just seems like bad evidence to me — we can imagine a lot of accounts of how the world works that turn out to be wrong.

Expand full comment
ingx24's avatar

It's definitely possible that any "adequate, complete" physical account of the world would include the structures and qualities associated with consciousness, but I don't think such an account would count as "materialist" or "physicalist" in the traditional sense - you'd still have *additional* structures and qualities compared to what we have now, and the average materialist would not want to admit those. You could call it materialist if you want, but at that point the argument is purely semantic. Chalmers goes over this in his paper "Consciousness and its Place in Nature" (https://consc.net/papers/nature.pdf) under the "Type-C materialism" section.

The zombie argument was never meant to be stand-alone "evidence", all by itself. Of course such a conceivability argument needs to be supplemented with reasons why conceivability is a guide to possibility in this particular circumstance (while conceivability=possibility fails in other cases). Chalmers has spent a huge part of his career working out why conceivability of zombies entails their metaphysical possibility. The paper I linked above goes over a simple version of this argument (under "The Two-Dimensional Argument Against Type-B Materialism").

Expand full comment
Daeg's avatar

I haven’t read all of the relevant literature, but I actually think I’ve read that particular paper before, though it’s possible I didn’t understand the crux of the argument against my objection. Could you please give a flavor of why conceivability entails possibility in the case of p-zombies even if it doesn’t in other cases?

A few desiderata: 1. I really don’t think it matters what the average materialist would or wouldn’t admit. 100 years ago, no materialist knew how anything could be both a wave and a particle, yet it looks like something is, so materialists would have been right to hold out hope for a materialist explanation of the properties of light. Every materialist now agrees they don’t know *how* to incorporate consciousness into a physical theory, so arguing “well, you couldn’t possibly!” is just going to reduce to dueling intuitions about what materialism can make good on. What bugs me is that Chalmers acts like he is giving arguments for why his intuition is better even if you don’t share it, but every time I poke at an argument, it just reduces to the intuition itself.

2. I really really don’t think “metaphysical possibility” is the relevant standard of possibility because the question of whether consciousness can be decoupled from physics is a question about our actual world and how it actually works. If they can be decoupled in some other metaphysically possible world, but not in ours (because there’s some natural-law-like relation between them in our world), then as far as I think most participants in, and observers of the debate are concerned, that would amount to materialists being right and Chalmers being wrong. As far as I can tell, arguments about what’s “metaphysically possible” a) amount to intuition duels, and b) seem evasive with respect to the actual mechanistic question about how consciousness and brains actually work in the real world.

Expand full comment
ingx24's avatar

I'm going to tackle these in reverse order, because I think there's some very very serious misunderstandings going on here.

2. Metaphysical possibility, quite literally, *is* the standard being used by *every participant in this debate*. Your view here - where consciousness can be decoupled from physics in some other metaphysically possible world, but not this world because of the natural-law-like relation - is literally, 100% Chalmers's view in his 1996 book. The *entire argument* between Chalmers and his critics amounts to disputes about metaphysical possibility, which is what the question of materialism vs. dualism is *about*. Metaphysical possibility is a well-known, well-established standard in philosophical debates like this - the counterfactual of other possible worlds is meant to give insight into "deeper" features of our world that go beyond just natural laws. These counterfactuals are relevant here because the debate is between people like Chalmers who think consciousness is *distinct* from physical properties (but still in a law-like relation with it), vs. people who think consciousness *just is* a physical property (materialists). The counterfactual of the zombie world is meant to show that consciousness is *distinct* from a physical property, because if it were identical to a physical property, the zombie world would not be metaphysically possible.

1. I think you're seriously misunderstanding the standard definition of materialism here, and your example of light is the exact sort of analogy Chalmers has given for his view of consciousness. His entire framework is that consciousness will need to be an additional fundamental property, alongside things like mass and charge. If you want to call that "materialism", that's fine, but at that point it just becomes a verbal dispute (and to be fair, defining what counts as materialism/physicalism is itself a tough issue in philosophy). The version of "materialism" that Chalmers is arguing against is the current form where all that exists is what he calls "structure and dynamics" - structures of objects in space, and laws specifying how their position/speed/etc changes over time. If you want to add more structures, qualities, and so on (the ones associated with consciousness) to this picture and still call it "materialism", that's fine, but at that point your only disagreement with Chalmers is terminological.

Now onto your first paragraph: The argument for why conceivability implies metaphysical possibility for zombies is somewhat complex and technical, but the fundamental idea is that, in *every other case* of conceivability failing to entail metaphysical possibility, the problem can be traced to one or both of the concepts involved not being fully transparent. For example, it is commonly held that "Water is not H2O" is conceivable but not metaphysically possible, because water is necessarily H2O in all possible worlds. Upon inspection, what's going on here is that the concept "water" is not transparent - the concept "water" just refers to whatever causes our water experiences (clear, tasteless, odorless, common in lakes and oceans, etc). Something can look like water without actually being water; and so when we conceive of a world where water is not H2O, what we're *really* conceiving is a world where something other than water fills the lakes and oceans. But in the case of consciousness, almost uniquely, the concept is fully transparent - anything that "feels" like consciousness is, by definition, consciousness. So in this case, unlike others, we can genuinely infer metaphysical possibility from conceivability... unless our *physical* concepts are not transparent, in which case it could be that physical entities have something related to consciousness beyond appearances (which leads to panpsychism, which some people call a version of materialism - again, this is a terminological dispute).

Expand full comment
The Ancient Geek's avatar

Do physics equations make you feel qualia?

Expand full comment
Kenny Easwaran's avatar

This sounds a lot like how I would interpret Dennett on “the intentional stance” - it’s useful to talk about people having qualia even if qualia don’t really exist.

Also, I thought that in The Conscious Mind, Chalmers says that p-zombies are outwardly indistinguishable from conscious people, and since conscious people talk about their qualia, p-zombies must as well.

I have to admit, I haven’t actually read either of these books myself (I’m not exactly that kind of philosopher, though I’m getting more adjacent) but I seem to recall something like the latter point from John Perry’s summary of The Conscious Mind, when I first learned of p-zombies from him nearly 30 years ago.

Expand full comment
Connor Bowers's avatar

It's easy to confuse the mental process of recognizing with consciousness because they are so linked. In Stephen Grossberg's book Conscious Mind, Resonant Brain, he proposes that resonance is both what conditions experience and what our mind uses to recognize things. Ogi Ogas has good summaries of the theory on his Substack. According to the theory, when we see a rose, the visual 'what' module (the brain structure for visual recognition) remembers or creates a mental object that resonates with our mental image. Only the resonance is experienced, with the patterns encoded determining the experience. That's why we don't experience our mind searching memory or processing the raw visual data: those unconscious processes don't use resonance. Interoception is also a kind of resonance/recognition, so we can experience and communicate knowing our mental objects. We can recognize that we're recognizing and seeing a red rose. But just because resonance is critical for consciousness and recognition in our minds doesn't mean consciousness and recognition are the same thing. We can have strange non-representational experiences that aren't based on recognition, like states of deep meditation. Qualia Computing is a great blog for examples of strange states of consciousness and speculation about the resonant phenomena behind them. Conversely, recognition doesn't need to use resonance, like how LLMs do it. They even have very limited interoception.
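
As a cartoon of the resonance idea (entirely my own sketch with invented numbers, not Grossberg's actual ART equations): a percept is "seen" only when bottom-up input matches a top-down expectation above a vigilance threshold.

    # Resonance cartoon: recognition happens when bottom-up input and
    # top-down expectation mutually reinforce above a "vigilance" threshold.
    import numpy as np

    def resonates(bottom_up, top_down, vigilance=0.8):
        match = np.dot(bottom_up, top_down) / (
            np.linalg.norm(bottom_up) * np.linalg.norm(top_down))
        return match >= vigilance

    rose_input = np.array([0.9, 0.1, 0.8])        # invented feature vector
    rose_expectation = np.array([1.0, 0.0, 0.7])  # remembered mental object
    print(resonates(rose_input, rose_expectation))  # True: resonance, "seen"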

If you imagine a p-zombie with a resonance-based mind like ours but no consciousness, would it be able to articulate what its experience is like? I think if Grossberg is correct, the answer is: kind of. It would have mental images and objects coupled by resonance, and be able to recognize and communicate about them. But would it come up with the idea of consciousness? Would it ponder the mystery of the redness of red? Would it seek answers to the mystery of being? No. What phenomena would compel it to?

Clearly, consciousness is playing an active role in our minds and reality. Resonance is a promising theory for what conditions consciousness. But how does consciousness influence us? How much does the function of our mind depend on consciousness? I don't know. Promising theories in this area are even sparser.

It's hard to do philosophy of consciousness when we have so little technical understanding of how consciousness works. But we can start with understanding how the mind works (and perhaps how it integrates with consciousness) to avoid mistaking mental phenomena for consciousness. Recognition may be tightly integrated with qualia in our minds, but that doesn't mean any being with recognition is indistinguishable from a conscious being.

Expand full comment
Stephen Pimentel's avatar

> ... even if we start by assuming they don’t have it.

I have no quarrel with the argument following this initial assumption. But it's important to note that this assumption may posit something that is physically impossible.

For example, suppose some (future, refined) version of IIT is correct, and conscious qualia are invariably associated with particular degrees of integrated information processing; then the assumption would embody an impossible condition, given the (not yet understood) laws of physics.

In logic, from a contradiction, any conclusion can follow. Likewise, if you assume a condition that is (unwittingly) contrary to (not yet understood) laws of physics, then subsequent reasoning will not actually demonstrate anything about the world.

Expand full comment
Anna's avatar

> It still seems like, even if the p-zombie is using an inner encoding scheme in which red is represented by a conceptual primitive, they still aren’t “experiencing” the mysterious redness of red, just . . . I don’t even know how to end this sentence.

This is the funniest sentence, from someone who I presume actually means it. (No insult intended...I'm just genuinely amused by the disjunction.) Your p-zombie has an *inner* part to it which contains *conceptual* elements, but somehow lacks inner experiences?

I think you are caught up on the difference between what an algorithm feels like from the inside and how it looks on the outside. (Remember that part of the Sequences? https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb/p/yA4gF5KrboK2m2Xu7)

Our brains run the algorithm "turn this complex picture into the reddest of red roses". If we look at the brain from the outside, we can see all sorts of things about the algorithm—rhodopsin photoreceptors, edge-calculating neurons, Broca's area, the evolutionary usefulness of data compression, etc. If we experience the algorithm from the inside, of course, all that happens is we see a red rose. That's the whole point of the algorithm.

Because the algo-from-inside looks so different from the algo-from-outside, it's pretty easy to look at the algo-from-outside and think that it might be present without producing the algo-from-inside. But that's not possible; either the algorithm is present, or it's not.

I'm not necessarily saying that p-zombies aren't possible. If someone coded a video game with a two-lines-of-code NPC who ran around saying "I'm conscious!" all the time, and nothing else, I think that's pretty close to a p-zombie. But that's a far cry from hypothetical beings who are running the same algorithms as us yet somehow lack qualia.
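
Literally something like this (Python; my own two lines):

    while True:
        print("I'm conscious!")  # claims consciousness; runs no algorithm like ours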

Expand full comment
AnthonyCV's avatar

I agree that this is what p-zombies would do, but IIRC that's not the source of EY's improbability claim. The source, I think, was that the process which generated humans - evolution - would not have had reason to select for humans that talk about qualia in a world where none possessed qualia. I suppose theoretically evolution could have generated humans with qualia and then discarded them in favor of p-zombies, if such a shift were sufficiently fast and easy? But this would seem to require some measurable difference between the two, which is against at least the spirit of the claim of p-zombiehood.

Expand full comment
MicaiahC's avatar

Huh! Registering that I'm surprised that the footnote actually worked well enough to stop the type of smarmy "this is just flurghschneider when he wrote Rock me Amadeus on Betelgeuse zeta" response. Other than, I presume, the deleted comment.

Expand full comment
grumboid's avatar

I guess my perspective is that the whole p-zombie thing is circular reasoning.

The argument (summarized by Claude) seems to be: "if it's conceivable that something could be just like a human but lack consciousness, then consciousness cannot be purely physical."

But the precondition for "something could be just like a human but lack consciousness" is that consciousness isn't purely physical.

Isn't this like saying: "if it's _conceivable_ that bloxors are greeblic, then bloxors must be greeblic"?

Expand full comment
Taleuntum's avatar

As I've read your comment I picked up a book on my desk, held it above my desk and then, I let it fall. After a fraction of a second, its fall was stopped by my desk.

Now one could imagine an alternate world (with its own full time dimension) that is exactly like ours in every respect except my desk does not exist. In this world, my book stopped in the air after falling exactly as long as in our world. This is conceivable.

Did this assume that my desk is not purely physical? If not, why did the zombie argument assume that consciousness is not purely physical?

In other words, what is conceivable is a very low bar, you don't even have to follow physical laws. To decide what's conceivable, you don't start thinking about preconditions and stuff, you simply attempt to make a specific mental move, the action of conceiving of that given thing in question. If you can do the conceiving successfully, then that given thing IS conceivable. (For a negative example: A triangle with four vertices is not conceivable.)

In particular, I think it is pretty clear that a "human completely physically identical to us but lacking phenomenal experience" is conceivable to the overwhelming majority of people. The way I'd attack the zombie argument isn't here. Instead, I think that it's not true that just because something is conceivable, it's metaphysically possible: we could just be making some kind of mistake when we do that mental move.

Expand full comment
InterestedReader's avatar

The form of the argument is valid, but I just totally do not buy the example, and would therefore argue that in fact this argument supports Yudkowsky.

Your examples show that these hypothetical beings would talk about higher-level concepts like red, which in itself is not a quale. Saying "I see red" has as much to do with qualia as saying "I see this object as a table"; "I experience red as a slightly happy color" is qualia, but I don't see any reason to assume the people without qualia in your post would say this (or make even more qualia-laden statements like "I am happy").

Expand full comment
Tome's avatar

What I always find most fascinating is the ~10% of commentators on such topics who adamantly reject that they experience qualia or subjective experience of any kind. My default assumption is that they're confused / trolling / haven't thought hard enough about the problem / take conscious experience so for granted that they fail to imagine what its absence would mean.

But maybe there really exists a class of people who behave almost exactly like me, except they genuinely don't have subjective experience, and will express that if questioned on the matter?

My last ditch effort to convince them that they are conscious would be as follows. I propose a new law that anyone holding this position coherently under sustained questioning can be branded an "outmoral" and e.g. tortured for fun because their (absence of) experiences carry no moral weight. If their only objection is some practical one about how such a system would be inefficient for society, OK then I believe them.

Expand full comment
Donald's avatar

Our utility functions are allowed to care about any part of reality we like.

I don't think "subjective experience of qualia" is that well defined. But if it is, and if some people lack it, I can care about those people anyway. My utility function can be anything I like.

Expand full comment
Tome's avatar

I care about some second order effects: they might make an unpleasant noise, other people might get the wrong idea, etc. But I certainly wouldn't care about subjective-experience-lacking people for their own sake.

And the actual point is that *they* shouldn't care about themselves for their own sake. "I object to your law because I find the sensation of pain very unpleasant" shouldn't be something someone who really holds this position would say.

My prediction is that people making this claim would in fact object strongly to my plan, and use a lot of language that sure seems to contradict their earlier assertions.

Expand full comment
TakeAThirdOption's avatar

This whole p-zombie idea is sooo stupid.

It's like saying, imagine there is a house that is exactly like yours except it has no roof.

Bullshit. There are not, and never will be, two houses in the whole universe that are identical except for one having no roof.

For some mysterious reason people tend to forget, though they know it in their bones, that the properties of a thing are connected.

The house without a roof will also have a different weight, and different air inside, and much more differences on top of those, because, well, it has no roof.

For that simple reason alone, I would never dream of a person being identical to another one except for never being conscious of anything.

Expand full comment
FLWAB's avatar

How much does consciousness weigh, such that a non-conscious version of you would weigh less? What about its body would have to be different to be non-conscious? We can conceive of how a house would have to be different from the same design but missing a roof, what is different about the same design of person but missing the consciousness?

Expand full comment
TakeAThirdOption's avatar

Well, for example, a human missing a brain would go a long way toward never being conscious of anything.

There are many hints that a lot of what a human brain is doing is its owner being conscious of something.

Expand full comment
FLWAB's avatar

Does everything that has a brain have consciousness? Do you stop having a brain when you're unconscious?

Expand full comment
TakeAThirdOption's avatar

> Does everything that has a brain have consciousness?

Of course, just like everything that has a motor is hotter than x Kelvin.

> Do you stop having a brain when you're unconscious?

Of course, just like a car that isn't moving stops having a motor.

Didn't you know?

Expand full comment
Ape in the coat's avatar

> Start by imagining a race of p-zombies who are exactly like humans, except for two things. First, they don’t have conscious experience. Second, they don’t necessarily report having conscious experience; if we want to claim that they do, we’ll have to derive this fact from first principles.

By exactly like humans do you mean that they do the same things for the exact same reasons and all the physics works completely the same like in the original zombie-world mind experiment? Or that they perform the same behavior as humans but for some different reasons? Like what Eliezer calls zombie-master? Say they are very good LLM-agents?

In the first case it's not clear whether the thing is conceivable, so the whole thought experiment ends before it even started. In the second case everything depends on the specifics of the algorithms that control our zombies. They can both report and not report qualia. The part of the algorithm that is responsible for producing sounds just gets its text from an LLM that has received its prompt based on the output of image recognition software, which does indeed work with a bitmap representation, for example.

Expand full comment
JerL's avatar

"If they can lack certain forms of qualia without anyone noticing..."

But they did notice! When prompted to write about smell, they behaved differently than someone who had qualia of smell! It's true that it took a specific, targeted question to elicit that, but it was still easy for this person to compare smell (for which they had no qualia) to the other senses (for which they did) and notice something different! This person isn't a p-zombie but for smell, because they are explicitly saying they differ from others on this axis!

I think a more interesting question is: how important is the comparison to other senses? It seems this person only reflected on their lack of smell by contrast to taste, sight, etc.. That is, the right question to ask a partial p-zombie is to elicit a comparison to other senses for which they *do* have qualia. If you can't get a p-zombie to realize their lack of qualia by comparing to other senses, does that mean there is no question that could elicit the distinction? Or would it just have to be a more subtle question?

Expand full comment
SMK's avatar

Frustratingly, I feel like I just don't understand this argument.

Consider the following passage:

"But the visual field is a bitmap thousands of times bigger than this, and the p-zombie can interpret it within 100 ms. So the pixels must be presented not as RGB color coordinates, but in some kind of rich color language that produces an immediate experience of color without requiring any further thought or processing."

Why isn't this just begging the question, when it uses the word "experience"? Yes, of course if the p-zombie has "experiences" then it will talk about qualia, but it will also *have* qualia (no)?

But why wouldn't it just talk in terms of "knowledge" instead of "experience"? Consider:

"But the visual field is a bitmap thousands of times bigger than this, and the p-zombie can interpret it within 100 ms. So the pixels must be presented not as RGB color coordinates, but in some kind of rich color language that produces an immediate knowledge of color without requiring any further thought or processing."

Now, you could say this just pushes the question back -- what is "knowledge" "like"? But it's not "like" anything. It's a state in the zombie's information-processing unit that causes it to utter certain true statements. It seems to me perfectly plausible that the zombie would think and respond in such terms, and would simply be confused by efforts to probe into what it was "like". And that it would be correct -- there is a highly compressed small number of bits representing "red" into which its visual cortex very quickly processes the incoming visual packet, without ever causing awareness along the way.

The same confusion rearises in the next paragraph:

"If the p-zombie says this - “My reportable mind receives the color information as a 2D grid in which each pixel conveys a irreducible sudden intuitive sense of being the correct color” - then what’s the difference between that claim versus “I experience the mysterious redness of red”?"

Again, yes -- if the zombie talks about "intuitive sense", then that may sound like qualia talk. But what if it said "immediate knowledge" or even "intuitive knowledge"? I don't see why it would have to be.

My impression of the argument, then, is that it trades mostly on the fact that *in humans*, these two things always go together, so that it is equally natural to use words that do (or, respectively don't) have qualia-laden connotations; and then, using the more qualia-laden form to describe the zombies' "experience" (or description of their mental processes), concludes that they have qualia too.

But I think it's a sophisticated kind of equivocation. When trying to make these conceptual distinctions, we must rigidly distinguish between the conceptually-distinguishable-even-if-always-in-our-experience-connected, and I feel* that this post does not do that.

*see?

Expand full comment
SMK's avatar
2d · Edited

Here's another (and separate) argument for why I think the post is wrong. Wouldn't it imply that it is impossible for blindsight patients to describe their experiences?

Expand full comment
B Civil's avatar

> If the p-zombie says this - “My reportable mind receives the color information as a 2D grid in which each pixel conveys a irreducible sudden intuitive sense of being the correct color” - then what’s the difference between that claim versus “I experience the mysterious redness of red”?

The difference to me lies in the word “experience”; experience is not the same as perception and deduction. It is a different token of expression, if you will. It implies a different sort of relationship.

The word red is also just a token of exchange in an information technology. It is the word we use to make distinctions between different wavelengths of light that our body is capable of distinguishing between.

As for the rest of it, let me pose this as a thought experiment. We live in an n-dimensional world and every human being is a separate dimension. It is a dimension that others have no access to beyond what that person shares from their dimension. This doesn’t make it metaphysical. It just means accepting the idea that we are bounded information systems, each of us, inaccessible to others except through comparison and negotiation. There are still physical processes involved. On a biological level, we have a lot in common. Without a watching rational mind, we would not question this. It would be like how a tiger knows another tiger. Qualia are purely an artifact of our biological existence. As in my example of each of us existing in our own unique dimension, they are both profoundly common and profoundly distinct at the same time.

Expand full comment
Ming's avatar
2dEdited

This is a very interesting argument and I want to be hesitant before I dismiss it! That being said, I'm not sure it actually changes much. You can imagine having the p-zombie dunk their hand in hot vs cold water and having them say "this is hot" or "this is cold" without saying "the average molecular speed of this water is faster" - it, like everything else, is just data that needs some name for the sake of communication. At the very least, I don't think there's anything particularly special about color - you could ask a p-zombie how they perceive anything and at some point they'd just have to say that they do. I don't think "it looks red" and "it gives me red data/it is red" are very different answers. I don't necessarily buy that just because the p-zombie would say "it seems red" they would also say things like "it gives me an internal experience of redness". I think we're more used to blurring that line with color than with other forms of input, which is part of why it seems surprising.

All that said, I don't know!

Expand full comment
mimi's avatar
2dEdited

> Start by imagining a race of p-zombies who are exactly like humans, except for two things. First, they don’t have conscious experience. Second, they don’t necessarily report having conscious experience; if we want to claim that they do, we’ll have to derive this fact from first principles.

> These p-zombies talk to each other (like humans do),

Would they? Why do you think so? I used to love thought experiments, but by now, each time I read a new thought experiment, I kind of fail at the first step. I know this is boring and boorish of me, but consider the position I'm in.

I can't just imagine the new world and then take one part away, just like I can't imagine a universe "just like ours but with no gravity": planets wouldn't exist then!

As with the planets, so with talking amongst each other. Perhaps our intricate language (relative to other animals) is only this intricate because we have conscious experience. Or perhaps we have conscious experience because we have that intricate language. You cannot state "everything is as before except for A" and then go "as before, B is true" without also showing A and B are not logically connected.

Expand full comment
Donald's avatar

You say that these beings don't have consciousness.

But, these beings are talking about consciousness. And the definition of the word they are using is such that they are conscious by their own definition.

So. Let's define C_1 = has this reportable/nonreportable mind structure. (The definition the beings are using)

And define C_2 = has this reportable/nonreportable mind structure, and is also wearing a blue hat. (An example of a definition that you might be using in the definition of the problem)

We can now look into the universe from outside, see a lack of blue hats and say "This is a universe full of C_2 zombies".

The beings inside the universe say "I have C_1 consciousness".

Both are correct. The beings in this world have C_1, but not C_2. There is nothing for an external observer who knew the position of all atoms to be uncertain about here.

Expand full comment
Argentus's avatar

You schmuck. Schmoe & Schmendrick don't even discuss consciousness.

Expand full comment
Some Guy's avatar

I just write all this down to “mechanistic processes must feel like something on the inside.”

Expand full comment
NegatingSilence's avatar

You know about qualia from your first-person experience. P-zombies are a thought experiment to get you to notice that no physical observations contain or can confirm experiences (which is the "non-physical" part). Suspecting that other beings have it because they talk about it doesn't make it physical.

Almost all the commentary I see on this begs the question. Yes, if you assume a metaphysical conclusion you will reach that conclusion.

Expand full comment
NegatingSilence's avatar

The actual possibility of p zombies is secondary. It is about the logical possibility of them. It's to raise your awareness that you cannot have direct evidence of others' experiences, you can only infer. This is a profound mystery of being a person.

If gravity were absent it would change something. If consciousness in others were absent you would make the same physical observations.

Expand full comment
James of Seattle's avatar

[speaking as a computational functionalist with no credentials] Maybe the fundamental basis of consciousness is pattern recognition. Pattern recognition can be recursive: some mechanisms recognize “edges”, others “texture”, others “color”, and these recognitions can be presented together to a higher-order set of recognizers, which then recognize “petals” and “stems” and the specific (recognized) colors associated with each. Those recognitions could be presented to a still higher-order set of recognizers, which might include one for “rose” with the associated color “red”, and that recognition could be presented to another set of recognizers (and so on, up to …) a set of recognizers for choosing what words to say, such as “red roses. I hate red roses”, which might get presented to recognizers for actions which would be bad, such as saying that when being given a gift of red roses. (A toy sketch of this stacking follows below.)

*

[just maybe]
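To make the recursion concrete, here is a toy sketch of stacked recognizers (the stages, names, and rules are invented for illustration, not anything James specified):

```python
# A toy cascade of recognizers: each stage fires when all of its required
# lower-level recognitions are present.
def recognize(rules, features):
    """Return every pattern whose required parts are all present in `features`."""
    return {name for name, parts in rules.items() if parts <= features}

low = {"edges", "texture", "red"}                         # first-stage recognitions
mid = recognize({"petal": {"edges", "texture", "red"},
                 "stem": {"edges", "texture"}}, low)      # {"petal", "stem"}
high = recognize({"rose": {"petal", "stem"}}, mid | low)  # {"rose"}
words = recognize({"say 'red rose'": {"rose", "red"}}, high | low)
print(words)  # {"say 'red rose'"}
```

Each stage only consumes the outputs of earlier stages, which is all the recursion requires.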

Expand full comment
Doug Mounce's avatar

Mr. Rogers used to sing a little song about how you are all one piece. J. J. Gibson's famous experiments on perception illustrate this in how your sense organs participate in the data collection rather than simply transmitting information for intelligent interpretation. The unique thing about the brain, if I think about it as a sense organ, is that it notices all its evidence through other organs (almost!). Brains also seem to directly experience the experience of experiencing!

Expand full comment
HahTse's avatar

I'm sorry, I just cannot take seriously any argument that starts with "imagine something exactly like humans, but without the thing that is essential to being a human".

P-zombies are idiotic. Qualia are a delusion. They are magical thinking, desperately trying to reinject mysticism into science.

Expand full comment
The Ancient Geek's avatar

If there are no qualia, we are all p-zombies.

Expand full comment
HahTse's avatar

No.

We are conscious.

We do have unique experiences.

But it's because each of our brains is subtly different, because brains are hugely complex, analog, messy, sometimes broken, biological machines.

Chalmers’s arguments (“I can conceive of it, and thus it must be so”) are a version of very tired, very old theological arguments that are now regarded as fundamentally stupid even by theologians. E.g. the “God is imagined as a perfect being, perfection must include existence (or it isn't perfection), thus God must exist” argument.

The whole debate is born out of a desperate need to somehow be special, to have “something ineffable” going on. It's magical thinking.

There is no super-natural. If something has an effect on the natural world it can be measured, recorded and finally experimented on. If it doesn't have an effect on the natural world it doesn't exist (or it is, at least, completely irrelevant to it and might as well not exist at all and should thus - per Occam's Razor - be disregarded).

And to get back to Yud’s arguments: if a super intelligent alien race were to observe us and hear us talk about how we possess “qualia”, it would label us as religious nutjobs and not invite us for Sunday brunches.

Expand full comment
TheBlackReiter's avatar

It would seem to me that p-zombies, not having qualia, would not be able to discover qualia, and would be stuck at whatever 'philosophy of qualia' level was introduced to them from the outside. They would still be able to say "I see red" but not be able to 'picture' redness without pointing at a red object entirely external to themselves.

Expand full comment
Gunflint's avatar

… and because the lattice of coincidence is always operational, my library app now lets me know that “Mind and Cosmos” just became available.

“It may be frustrating to acknowledge, but we are simply at the point in the history of human thought at which we find ourselves, and our successors will make discoveries and develop forms of understanding of which we have not dreamt.”

Expand full comment
Donald's avatar

Imagine a world where calculators exist, but there is no such thing as multiplication.

If you press the keys "2 X 3" then it fires this transistor which [ long description of how calculator works] that produces the number 6 on the screen.

Imagine a world where there was no life. No elan vital. No mysterious essence of living-ness. There were just some strands of DNA that split down the middle.

What you have is a plausible description of a mechanism that the human brain might use. This mechanical description doesn't contain the mysterious essence of consciousness. Either consciousness doesn't exist. Or "consciousness" refers to some non-mysterious mechanism. In the latter case, this is a fairly good guess at what that mechanism might be.
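A toy sketch of that calculator (my construction, not Donald's): the machine produces 6 from the keys "2 x 3" without any multiplication primitive anywhere, just repeated addition.

```python
# No "multiplication" exists in this world - only a mechanism.
def press_keys(expr):
    a, _, b = expr.split()      # e.g. "2 x 3" -> "2", "x", "3"
    total = 0
    for _ in range(int(b)):     # add `a` to the total, `b` times
        total += int(a)
    return total                # the screen shows 6

print(press_keys("2 x 3"))  # 6
```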

Expand full comment
Richard Y Chappell's avatar

> "This argument confuses me. It still seems like, even if the p-zombie is using an inner encoding scheme in which red is represented by a conceptual primitive, they still aren’t “experiencing” the mysterious redness of red, just . . . I don’t even know how to end this sentence. Just using an encoding scheme that matches it perfectly and causes them to describe it the exact same way that we do?"

Distinguish the properties of physical-red and phenomenal-red. Assuming idealism is false, the latter is not found in the world, just in our minds. Physical-red is a property of the world that creatures (including your imagined p-zombie variants) can encode and track via internal informational states, and report as registering "red" in the world.

Whether a creature conceives of this registration as also involving *inner* perception (as of phenomenal qualia) will presumably depend upon whether they have higher-order mental states that can take the first-order "sensing red"-state (and not just the red object out there in the world) as its object. Philosophers of mind talk about the "constitutional" (or "quotational") theory of phenomenal concepts. The idea is that an underlying phenomenal state of seeing-red (or a trace memory thereof) needs must be a literal component of the *phenomenal* concept of redness, in order to explain the internal functional role this concept plays in our mental lives.

So far, this is all neutral between physicalism and dualism. If you're interested in recent philosophy papers that explore this in more detail, you might enjoy this pair of papers by my wife, Helen Yetter-Chappell:

(1) 'Dissolving Type‐B Physicalism' - https://philpapers.org/rec/YETDTP-2 - shows how a creature could be designed, with suitable encoding of their phenomenal concepts, to *not* find any "explanatory gap" or "hard problem" of consciousness. They would immediately infer the phenomenal-physical truths via introspecting on their own brain states (which their distinctive encoding enables). Physicalists should take these creatures to be more "ideally rational" than humans (for purposes of judging ideal conceivability). Dualists, of course, should take them to be reasoning fallaciously: implicitly conflating the neural correlates of consciousness with the thing itself.

(2) 'Dualism all the way down: why there is no paradox of phenomenal judgment' - https://philpapers.org/rec/YETDAT - explains, in detail, why epiphenomenalists shouldn't find any of this remotely troubling.

Expand full comment
qbolec's avatar

I find the arguments about p-zombies difficult to read, as I have trouble tracking which parts are assuming the thesis. I am not sure how one can prove they can exist in text without somehow assuming they do, but then I am not sure how to judge this kind of argument.

I feel it's like:

Problem:

Can there be a set of things like the real numbers, which you can add, subtract, etc. in ways isomorphic to the real numbers, but which aren't the real numbers because they miss the intrinsic value of being real numbers?

Solution:

Consider a set S which is like the real numbers except for the intrinsic value, whatever that means. Assume it is isomorphic with the real numbers, so it is isomorphic with the real numbers. And S is not the real numbers, it is S. QED.

And my brain is like: this is the same image??

Expand full comment
Dweomite's avatar

It seems to me that you can either say "the reporting mind having an irreducible intuitive sense of the color red" is NOT enough to count as a quale, in which case your argument fails to establish that p-zombies have any reason to talk about qualia...

or you can say it IS enough to count as a quale, in which case you've proven not only that p-zombies talk about qualia but also that they actually HAVE qualia, contradicting your premise and making this a proof-by-contradiction that you can't even imagine p-zombies in a self-consistent way.

Neither of these options strike me as good news for epiphenomenalism in any way, shape, or form.

In order to interpret this as good for epiphenomenalism, it seems to me that you need to commit an equivocation fallacy where you interpret this intuitive sense of redness as a quale for one part of the argument but not a quale for the other part.

Expand full comment
Dweomite's avatar

Do the epiphenomenalists offer any causal account for why they, themselves, talk about epiphenomena, given that (by their own theory) the epiphenomena cannot be the cause? It seems like their own theory implies that their output is uncorrelated with truth or evidence, and so by the usual rules of polite discourse they shouldn't be saying it in the first place.

Expand full comment
Sol Quy's avatar

If you’re interested, I wrote a short story a few years ago from the “perspective” of the p-zombie, in a world in which p-zombie detectors are invented. https://open.substack.com/pub/solquy/p/122122-sentience?r=lkfy&utm_medium=ios

Expand full comment
Ape in the coat's avatar

> How is color information communicated in this 2D grid? Since this is a p-zombie who doesn’t have “real experience”, one might naively expect it to be something like a bitmap, with each pixel containing the coordinates of the color in an RGB color space.

> But imagine presenting the p-zombie with this image:

> …and asking them to tell you what it shows, with a time limit of 100 milliseconds. Since the p-zombie has only the skills a regular human could have, it would fail: interpreting a bitmap like this must be done laboriously by hand.

> But the visual field is a bitmap thousands of times bigger than this, and the p-zombie can interpret it within 100 ms.

I think there is a confusion here that we can resolve with a simpler example. Consider image recognition software connected to a camera. The video feed of the camera is interpreted as multiple bitmaps, and then a convolutional neural network does matrix multiplication. In the end the system outputs True if a red object was recognized by the network, or False otherwise.

When you show the camera an image of a red rose, the system outputs True. But what happens when you show it a bitmap of a red rose - in the exact encoding that the neural network is trained to recognize? It outputs False, of course! The camera interprets this bitmap as a *different bitmap* without any red-encoded elements. Now if you could circumvent the camera and input the bitmap directly as an input of the neural network, the system would output True. But that's a different story.

As you can see, such behavior doesn't really require redness to be some irreducible primitive in the encoding of the system.
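A minimal sketch of that setup, with toy stand-ins (the thresholds and the fake 'printout' are invented for illustration; a real CNN isn't needed to show the point):

```python
import numpy as np

def detects_red(pixels):
    """Stand-in for the trained network: True iff some pixel is dominantly red."""
    r, g, b = pixels[..., 0], pixels[..., 1], pixels[..., 2]
    return bool(np.any((r > 150) & (g < 100) & (b < 100)))

def photograph_of_printout(bitmap):
    """Stand-in for pointing the camera at a printout of the encoding:
    the numbers come back as black ink on white paper - a *different* bitmap."""
    h, w, _ = bitmap.shape
    page = np.full((h * 8, w * 8, 3), 255, dtype=np.uint8)  # white paper
    page[::3, ::3] = 0                                      # dark marks where digits sit
    return page

rose = np.zeros((4, 4, 3), dtype=np.uint8)
rose[..., 0] = 200                                # a small all-red "rose"

print(detects_red(rose))                          # True: pixels fed in directly
print(detects_red(photograph_of_printout(rose)))  # False: the camera saw text, not red
```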

Expand full comment
Vermora's avatar

"" Start by imagining a race of p-zombies who are exactly like humans, except for two things. First, they don’t have conscious experience. ""

Start by imagining an object moving at 42 m/s, except it doesn't have any velocity.

With a deep enough understanding of consciousness, the premise is immediately logically impossible.

Expand full comment
Schneeaffe's avatar

Weird that no one has brought it up yet, but this part is straightforwardly false:

>But the visual field is a bitmap thousands of times bigger than this, and the p-zombie can interpret it within 100 ms. So the pixels must be presented not as RGB color coordinates, but in some kind of rich color language that produces an immediate experience of color without requiring any further thought or processing.

Even a p-zombie that really did represent colour as RGB values still could not do this task at speed. He would need to first recognise all the digits in the image (that by itself is far more processing work than he would do just seeing the encoded image), and then he would need to form them into groups of six and associate each symbol group with its RGB value (like associating the string "red" with the colour, a task that goes reasonably fast after you were trained on it for years as a child) 64 times.

This is like saying that any thought that you think faster than you can read it must not be represented as language internally. Including if you're in the first grade of elementary school and can just barely read at all.
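To make the workload concrete, here is a toy sketch (the hex codes and steps are my own illustration): reading the printed encoding takes several passes per cell, where a trained association fires in one.

```python
# Reading a printed hex grid: recognise digits, group into sixes, look up -
# repeated for all 64 cells.
NAMED = {"FF0000": "red", "00FF00": "green", "0000FF": "blue"}

def read_printed_grid(cells):
    colours = []
    for cell in cells:                                         # 64 cells
        digits = [c for c in cell if c in "0123456789ABCDEF"]  # pass 1: recognise digits
        code = "".join(digits[:6])                             # pass 2: group into six
        colours.append(NAMED.get(code, "unknown"))             # pass 3: associate
    return colours

print(read_printed_grid(["FF0000"] * 64)[:3])  # ['red', 'red', 'red']
```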

Expand full comment
Loris's avatar

At a certain point, it seems like the modern equivalent of how many angels can dance on a pinhead.

A slightly longwinded digression:

There's a thought experiment where a magic being gives you two boxes. You always get to open the first, which contains either nothing or a million pounds/dollars/whatever. And the second always contains a hundred pounds/dollars; you can open it if you want. And the being knows ahead of time whether you'll open the second, because it can see the future - and if you do, it puts nothing in the first box.

And people tie themselves up in knots about whether to open the second box because by the point you open it, the amount in the first box is fixed.

But this seemed to me to miss the point that such magical beings which can see the future don't actually exist. It's very easy to make confusing paradoxes if you accept arbitrary axioms.

So I think it is with p-zombies, except with a bonus helping of disagreement over definitions.

The article's argument is pretty clear. The notional p-zombie must have some sort of perception, so it can actually function. But by definition, the p-zombie doesn't have qualia... except isn't that just what we said it had?

We don't really know what consciousness is. It seems pretty clear to me it's not all or nothing. If you're not a dualist (that is, apparently, a 'physicalist'), it's an emergent property of the system, and p-zombies either don't exist - or at least, whether they do depends on exactly how you're defining these complex concepts which we don't properly understand, and maybe it actually doesn't really matter?

If you /are/ a dualist, then you can say whatever you like, but you've guaranteed yourself not only this but also many other impossible philosophical issues, so I hope you're happy with the bargain.

Expand full comment
Taleuntum's avatar

Your digression is called Newcomb's problem, and it is pretty well-known around these parts, so you can probably just mention the name and people will understand.

However, the being is not assumed to be magical or able to see the future; it is only assumed that it can _predict_ your future choices with high confidence and has done so on numerous occasions before for other people very similar to you.

Imagine me letting go of a book in the air. I don't see the future, yet I can predict with high confidence that it will move downward. If your position is that there can't ever be a being that can predict a human's future choices with high confidence, you are implicitly assuming that a book and a human are fundamentally different types of objects, which is kinda woo.

Expand full comment
Loris's avatar

Hmm. I didn't know what it was called, thanks for that.

When it was first described to me, around the time the web was becoming an accessible thing, it was given as 'certain', hence 'seeing the future'.

I see the wikipedia article has a subsection on that variation.

I humbly suggest that the behaviour of a human playing the Newcomb paradox game is likely to be less predictable in practice than gravitational attraction, and I don't think that's at all controversial. Living things /are/ different - less predictable - than inanimate ones.

What does 'high confidence' mean to you?

Suppose we set up an experiment with 100 primary schoolchildren. The second box has one sweetie; the first has either zero or many (let's say 100). The children are told the rules and allowed to discuss strategy, but there is no training of the children beforehand to 'guarantee' they make a particular choice. (If you want to do the experiment with adults, you can provide the funding.) Do you think there exists a system which could predict all 100 children's decisions correctly?
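For scale, a quick back-of-envelope check (the 99% per-child accuracy is my assumption, treating the children as independent):

```python
# Even a predictor that is right about each child 99% of the time,
# independently, predicts all 100 correctly only about a third of the time.
p_each = 0.99
print(p_each ** 100)  # ~0.366
```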

Expand full comment
Taleuntum's avatar

No, but I believe that there could exist one in the future. In real life the assumption of high-confidence prediction probably requires a brain scan both for adults and for children, but I think we also shouldn't exclude the possibility that a very intelligent alien would be capable of it even without a brain scan (and in any case, it's not specified in what way the being in the hypothetical predicts; they might use a brain scan).

EDIT: correction: I think it probably requires a brain scan for _most_ adults and _most_ children, but even a completely normal human would be able to predict my choice on Newcomb's problem just by reading the comments I've made over the years.

Expand full comment
Alice Wanderland's avatar

Do your intuitions change if you put an LLM into the position of the player choosing between 2 boxes, and the human as the all-knowing predictor?

Because that game you *can* actually play now. And I did.

(Here’s the post:

https://open.substack.com/pub/aliceandbobinwanderland/p/solving-newcombs-paradox-with-ai)

The main issue with Newcomb's problem isn't that the magical beings which can see the future don't exist. That's why it's a *thought* experiment. To think about the counterfactual where they do exist.

If the fact that things don’t exist were the main problem with accepting arbitrary axioms…then you might as well say the same for mainstream chess?

Who arbitrarily decided the king could only move 1 square? Why not 2 squares? Or 1 square + the weird L shape thing the knight can do? Or any other number of other possible move patterns?

Expand full comment
Loris's avatar
18hEdited

I would say it's more that they can't exist. At least, not if we allow the test subject a reasonable amount of time to decide. Scanning their brain while they make the decision would be cheating, akin to those robots which watch their human opponent's fingers when playing scissors paper stone. I consider... ooh, let's say half an hour's delay after a brain scan to be reasonable. During this period (after the scan), the rules are explained to the player, and they get to see several earlier attempts.

Chess with different rules is a perfectly reasonable endeavour. People do that all the time.

I notice a footnote in your (very long) article that you have a carve-out for consulting a randomness generator. That's cheating; it's a get-out-of-jail-free excuse for every (detected) failure. You could just put nothing in the variable box every time and claim they made up their mind randomly.

Expand full comment
JerL's avatar

If you say that a perfect predictor of human beings *can't* exist, then I think you either need to give an account of what it is about human beings that makes them unpredictable in this way (as distinct from, e.g. LLMs), or run the risk of *you* being the one asserting magical things: that unlike all other classical physical systems, which can be simulated given sufficient computational power, human beings don't have that property.

Expand full comment
Loris's avatar
4hEdited

In brief, quantum effects.

I'm not claiming that consciousness is quantum, only that the basic processes underlying it uses ordinary matter subject to those effects.

Suppose that the predictor device measures the exact state of the brain - every subatomic particle's position and speed. Well, it can't; physicists know that already. But suppose it could. My understanding is that even then, it would still not be able to predict behaviour with absolute precision, because at the sub-atomic scale, some events occur unpredictably. Radioactive decay, that sort of thing.

If you were to open up just one box and find nothing, and the predictor's excuse was "an atom of C14 could theoretically have decayed in your brain so you were using a randomness generator", I think it would be reasonable to feel cheated.

Neurons rely on chemical reactions; each reaction is made of individual events which either occur or not - unpredictably. Yes, these are very small and the results are stochastically predictable; however, neurons either 'fire' or not at any particular moment, and the decision is 'knife-edged'. Small changes there will quickly propagate through the neural network until the brain state as a whole is not predictable.

Expand full comment
Leppi's avatar
1dEdited

Newcomb's paradox is badly defined; all of the discussion arises because the axioms are not clearly stated in the problem.

You have to accept either:

1. There exists a being that can predict your actions with (close to) perfect accuracy, in which case you can only hope to be the kind of person who one-boxes. There is no real choice involved. You either are a person who one-boxes, in which case the being predicts you one-box, or you are a person who two-boxes, in which case the being predicts you two-box.

2. You have a real choice, and can change your mind after the being has placed money in the box. This contradicts the possibility that there can exist a being that can predict your actions perfectly.

Also, Loris clearly made a mistake by bringing up Newcomb's paradox if the goal was to discuss p-zombies :)

Expand full comment
Taleuntum's avatar

If I understood correctly, this is TAG's position too (https://open.substack.com/pub/astralcodexten/p/open-thread-384?r=fci61&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=124267612), but I disagree.

In short I would say that:

- you have to accept 1 if you accept the assumptions of the problem (with the modification that the being _could_ exist instead of currently existing).

- the word "choice" still applies here, and you can choose to be that kind of person. it's best to not see this problem as something abstract, and instead imagine that it is actually happening with you and try to decide your action in that frame and then report what you would choose.

Honestly, I filled my p-zombie discussion needs for the year, so another topic is nice.

EDIT: Maybe this example would illustrate it:

I'm hungry, so I truthfully say: I can choose to eat an egg or an apple. This is a completely normal sentence used by tons of people; I didn't make any claims about the universe. If it turns out that, unbeknownst to me, even before I made my choice I was already the kind of person who would choose the egg, I don't particularly care. I and society will still use the word choice in situations like this, both in speech and in internal deliberations about our choices.

Expand full comment
Leppi's avatar
1dEdited

I agree that you can choose to be this kind of person. Free will exists. But if you accept 1, then there is no paradox. If you make a choice to change your mind after the money is placed in the box, then by axiom 1, the being will already have predicted that you will change your mind.

My point is that all the discussion on this problem is due to some people assuming axiom 1 and some assuming axiom 2. Once you write those out, it is clear that there is no real paradox. I agree that assuming axiom 1 seems required for the thought experiment to work. However, it is not clear that such a perfect predictor is actually possible.

Personally, I lean toward the predictor being possible, and I don't really see any contradiction with free will. I believe that:

1. Outcomes in the world are mechanical, i.e. predictable (there may be some randomness at a quantum level, but that does not matter much for outcomes at the scale we care about, i.e. if you throw a ball at a wall it always bounces back)

2. Free will exists, in the sense that real choices are made inside your brain based on processing inputs (e.g. your memory, experience, sensory inputs etc.) and giving appropriate outputs (actions) after processing those inputs (conscious thought). This happens at a different level from the underlying physics, and there is no real contradiction with a mechanical world view. This also means that another outcome is not really possible. The world does not work like a computer game where you can save the game and make a different choice.

(Even) more speculatively, randomness at a quantum level may propagate to give different "timelines" on a long scale; however, the possibility space on short scales must clearly be very small. I think it is very unlikely that your brain will suddenly come to a very different conclusion because of randomness at a quantum scale, so this will be more like a butterfly effect on long timescales, due to very slight differences in outcome.

*edit Crucially, this is all speculation. Perhaps we live in a simulation, and our qualia or consciousness is really a being making choices from outside what we experience as the physical world. Something similar to the Matrix. Or something completely different. In that case, axiom 1 (at least from the perspective inside our world) is not really possible, and axiom 2 applies.

Expand full comment
Taleuntum's avatar

I think we agree on like 99% of things discussed, but I will hone in on the 1%:

Semantics, but I don't agree that free will exists; or more precisely, when regular people say the words "free will", they mean libertarian free will, which I don't think exists. At least to me, the words "free will" are not like the word "choice": the former does not exist, the latter does, and I predict that future society won't use the former (except to deny it) and will use the latter.

Second, I don't think the debates about Newcomb's problem are because some people accept your 2nd axiom. My reason for this is that there are lots of people who argue for taking both boxes. If they were proponents of your 2nd axiom, they wouldn't argue for two-boxing, they would instead reject the whole problem as they can't accept its assumption (at least they should)!

Third, I don't think quantum randomness changes much, because a predictor with enough information could predict when quantum randomness may change the choice and we can consider a modification of the original Newcomb problem: if the prediction is dependent on quantum randomness, then it won't put anything in the first box. With this modification we can again consider which is the better choice.

Expand full comment
Leppi's avatar

You are right, I think we agree on 99%, and this is where we are not completely in agreement, or at least I'm not sure what you mean.

I'm not sure what you mean by "libertarian free will". I challenge you to explain what free will means if not that you can make choices. For me, I had an epiphany when I realized that there was no contradiction between a mechanical world view and free will - in fact I think a mechanical predictable world is rather a prerequisite for free will. These ideas are not at all mine originally, and it was well explained somewhere, but unfortunately I forget where I read it.

Anyway, I think people often get confused and think that free will must mean that there could have been a different outcome. But this implies that we could somehow stop the world, make a different choice and get a new "timeline" like in a computer game. And that again implies that we (qualia, what is controlling our choices) must be outside of the world.

I agree on Newcomb. When people accept axiom 1 (arguably implicit in the problem description) and choose one-boxing, the typical objection is that you could change your mind after the money is placed. This implies that axiom 2 is true, but then the problem description should rather be rejected. I think this is much clearer when the axioms are explicitly stated.

Agreed on quantum randomness; we could also just say that the predictor has a very small margin of error (0.00001% error rate?) due to quantum effects, and the thought experiment is functionally unchanged.
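A quick expected-value sketch supports that (standard Newcomb payoffs, assuming the predictor errs symmetrically with probability 1 - p): one-boxing wins whenever p > 0.50005, so a 0.00001% error rate changes nothing.

```python
def ev_one_box(p):
    return p * 1_000_000               # box 1 is filled iff one-boxing was predicted

def ev_two_box(p):
    return (1 - p) * 1_000_000 + 100   # box 1 is filled only on a mis-prediction

for p in (0.5, 0.9, 1 - 1e-7):
    print(p, ev_one_box(p) > ev_two_box(p))  # False, True, True
```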

Expand full comment
B Civil's avatar

> At a certain point, it seems like the modern equivalent of how many angels can dance on a pinhead.

Exactly. It passes the time, does it not?

Expand full comment
The Ancient Geek's avatar

>Start by imagining a race of p-zombies who are exactly like humans, except for two things. First, they don’t have conscious experience. Second, they don’t necessarily report having conscious experience; if we want to claim that they do, we’ll have to derive this fact from first principles.

You only have to do that under Yudkowsky's zombie-world framing, which Chalmers doesn't use.

It's relatively easy to see why a computational duplicate of a person would claim to be conscious: if the original did, a functional duplicate would, whether it is actually conscious or not. Zombie Chalmers would claim to be conscious; Zombie Dennett would claim to be a p-zombie, like the original Dennett.

Expand full comment
Leppi's avatar

This implies that an exact duplicate without the qualia is possible, which implies that the qualia do not arise from the process itself but are completely separate. Otherwise, it seems any computational duplicate of a person would have qualia simply from being such a duplicate.

Expand full comment
The Ancient Geek's avatar

You don't have to accept the actual possibility of the hypothesis to explore the implications. Even Chalmers doesn't think zombies are really possible.

Expand full comment
Pjohn's avatar
1dEdited

Scott's ”The P-Zombies can do everything humans can do" looks like a trap, to me. I could understand saying ”This creature is identical to a human in every way except for consciousness - can it do everything a human can do?”* and I could understand ”This creature can do everything a human can do - does it necessarily need to be identical to a human in terms of consciousness?” but I can't understand saying ”This creature is identical to a human except for consciousness plus also somehow it can definitely do everything a human can do”. It seems like saying ”This machine is identical to a car in every way except it doesn't have an engine, but it can definitely do everything a car can do” - _of course_ assuming this would lead to confusion!

*("...and if not, can we deduce anything about consciousness, given the specific stuff it can't do?”)

Y'know those experiments where somebody is hooked up to some sort of brain scanning contraption gubbins thing (..do, please, stop me if I'm getting too technical..) and told to report when they'd made a decision, and the contraption can tell the experimenter what their decision would be long before they themselves are aware of having made it, and how this sort-of suggests that our brains might just mechanically work everything out and our consciousnesses might just be a sort of post-hoc spectator?

We could think of a P-Zombie as a being without that spectator: it would seem at first glance to be able to do everything a human can do, because the spectator isn't really doing anything, just spectating - but if the spectator-interface isn't a 100% perfectly-leak-free one-way valve (let's say that usually the mechanical part takes its inputs from the regular senses etc. but under some rare circumstances, like maybe during focused introspection, it can take some sort of input from the spectator), there would seem to be a class of things that humans could do that P-Zombies couldn't: any process that required input from the spectator. I think that ”Explaining why a rose feels red” might be a marginal example of a thing just barely within this class, and ”Scratch-deriving the Hard Problem of Consciousness purely from introspection alone” would definitely be within the class.

Expand full comment
Nebu Pookins's avatar

Scott,

You semi-recently wrote about how a lot of people "became conscious" at ages like 3 to 6, but some people apparently didn't become conscious until their mid 20s.

You also wrote that book review on the Bicameral Mind, where the author argued that humans in general were not conscious until the Bronze age.

This seems to raise the possibility that not only are p-zombies possible, but actually p-zombiehood might be the default state for humans unless they get the exposure to the right stimuli (which was apparently too rare to matter before the Bronze age, and which a lot of modern humans don't get until well into their adulthood). In other words p-zombies might be all around us.

Other comments have brought up LLMs, and if we accept the axiom that LLMs are not conscious (i.e. I want to set aside that whole half of the possibility tree for now), this demonstrates that consciousness is not necessary for a lot of things we probably previously thought it was necessary for. For example, consciousness does not seem to be necessary for "doing science", because we observe that LLMs are capable of "doing science".

Expand full comment
B Civil's avatar

> You also wrote that book review on the Bicameral Mind, where the author argued that humans in general were not conscious until the Bronze age.

Julian Jaynes…it’s a good argument.

Expand full comment
lyomante's avatar

said earlier, but i don't like p-zombies because it's paranoia masking as a thought experiment; if there are things that are functionally identical to humans but lack an internal quality, and it's impossible to derive the difference from behavior, that's closer to paranoia.

you are walking in the park and briefly see two people standing with their eyes closed. one is savoring the breeze, one is sleepwalking and stands still a moment. how do you tell which is which if you limit it to that glimpse?

if you are reductionist enough it's impossible to prove anything. you can only judge things by present action and qualities; if you posit things where you can't tell, of course it will seem like being awake is the same as being asleep.

Expand full comment
gurugeorge's avatar

I'm inclined to think that they would, and they would know how to use "red" because their experience would have qualities that they could label "red" that we would also label as "red" when we have our red qualia, even if those subjective qualities were completely different if laid side-by-side in God's view.

In terms of any public discourse (e.g. scientific, philosophical) what matters is consistent and communication-facilitating language use, and if people and (e.g.) sentient machines were to use words the same way then the peculiar quality of their subjective experiences would cancel out.

I think machines would experience the world, but I'm not sure they would parse the experiences they have of the world in the same way subjectively. i.e. they would use "red" the same way, but they would resolutely apply it to the external object, they wouldn't hive off a thin subjective film (as it were) and say that that is something radically private and accessible only to the haver, and call THAT "qualia."

********* (Optional ramble)

But as an Externalist I would say that, wouldn't I? :)

I dunno, it's a hugely tricky business and I've been round the houses back and forth several times on all these issues in the past 50 years or so. I think Chalmers is definitely on to something when he talks about something unaccounted for in physicalist philosophy, but I don't think he quite gets it either.

I think it's probably better to split all consciousness-talk into two very definite things, one being the public intersubjective sense (Dennett's heterophenomenology, even), where anyone can talk quite coherently about being conscious of and avoiding thrown rocks, just as he can observe other entities avoiding thrown rocks. There, it's quite clear that data are going in, being interpreted by a dry or moist machine, and behaviours are triggered, including verbal eructations. Doubtless a sophisticated-enough machine will have a "machine state" that it can interpret as "my (entity here at x,y position) experience." (BUT WAIT: that machine state is only "faux private," it's actually accessible to others, whereas the private consciousness I want to talk about further down is inaccessible, which is Chalmers' point.)

But that's the boring sense of "consciousness." The interesting sense of consciousness is the one that's paired up with the "why anything at all?" question, it's "why is there an I that's aware of anything at all?" The sense that Chalmers is getting at, the sense he thinks people like Dennett are tone deaf to. The sense that Advaita Vedanta talks about, where there's something about your consciousness that's been the same quality-less "thing" all your life, right here, perceiving things, metaphorically speaking, a lamp that lights up what would otherwise be a dark, i.e. unperceived, existence. Deer could avoid tigers, and robots could even join in the mix, but all that could happen quite without anyone having a radically private experience of it.

And for that sense of consciousness, that interesting sense, that RADICALLY private sense, I think it makes more sense to frankly admit that it has no place in the physical world, and is purely a religious, spiritual thing, purely the little chip of God in us, the "Christ in you" - basically God canvassing His creation in a multitude of ways and seeing that it's Good. The radical privacy that's the mark of consciousness in the interesting sense, the radical privacy that's missing from all the creatures whose consciousness and conscious behaviour in the public sense is open to everyone, that is God's privacy and intimacy suffering His own works.

Furthermore its presence (the presence of consciousness in this interesting, private sense) is what turns potentiality into actuality in the Aristotelian/Scholastic sense. Out there in the "dark" world all that machinery is chungling along without anything being aware of it; only those bits that are graced with God's presence take on actual form and beauty. Kind of an aesthetic version of the "Prime Mover" idea maybe?

Expand full comment
Matt Schick's avatar

"I’m not even sure which direction to update on this." Please DON'T update!

There are different kinds of thought experiments. Some use a setup that is possible (e.g. "imagine you're in a falling elevator"), others use a setup that is not possible (e.g. "imagine you're traveling at twice the speed of light"). Typically, we use the not-possible kind when we want to demonstrate that it creates a contradiction, thereby showing/proving that the setup itself isn't possible.

I'd suggest that what you've actually done here is something along the lines of showing that p-zombies (as you refer to them in your title) *can't exist*, because if you assume that they *can* exist, then you end up with a contradiction: they refer to (talk about) qualia and consciousness in a way that's indistinguishable from how we talk about those things, despite their not having qualia or consciousness.

Now, I believe the whole purpose of your post was to demonstrate that this is *not* a contradiction. But even if you think you've succeeded in demonstrating that, it doesn't change the fact that your entire setup is almost certainly of the not-possible kind to begin with. You should not be updating based on any conclusions drawn from/within a not-possible thought experiment.

[And if you were to truly believe that your thought experiment is *not* impossible, then I suppose you really ought to be updating on the fact that you're probably living in a world populated by p-zombies. If it looks like a p-zombie, walks like a p-zombie, and talks like a p-zombie, then it's probably a p-zombie. If you're not ready to update on that - and I don't think you are! - then I don't see why you'd bother updating on the other thing.]

Expand full comment
The Ancient Geek's avatar

>I think they would have to say something similar to what we say: “My reporting mind received a packet of visual data, and after examining/analyzing this packet, I was able to tell that the rose was red.”

I think they would say "I said the rose was red because I can see it"... since that is what most humans would say. That you have qualia as mental proxies for properties isn't naively obvious, particularly in the case of colour qualia... the default ontology is that you are just seeing things as they are.

Qualia don't seem strange, naively. For the naive realist, "red" is an entirely objective property of ripe tomatoes, etc. To the naive realist, it's thought that is weird and insubstantial. The early mind-body problem (e.g. Descartes) was about thought.

It takes some education (e.g. the Dress Illusion) to understand that qualia aren't objective, and some more to understand what a physical property is supposed to be.

>But the visual field is a bitmap thousands of times bigger than this, and the p-zombie can interpret it within 100 ms. So the pixels must be presented not as RGB color coordinates, but in some kind of rich color language that produces an immediate experience of color without requiring any further thought or processing.

And that's either physically comprehensible, or it isn't, which is where Mary of Room fame comes in. I agree that a mapping onto a higher-dimensional space happens, but not that it is necessarily a sufficient explanation of qualia. A full, predictive, reductive explanation of qualia should not just tell you that one abstract space gets mapped to another; it should tell you what specific qualia are.

The spectrum inversion argument illustrates that point: two people with relatively inverted spectra, where one's red is the other's green, could have the same qualia space, as a structure.

Being intrinsic, not relational, is one of the traditional properties of qualia.

>This argument confuses me. It still seems like, even if the p-zombie is using an inner encoding scheme in which red is represented by a conceptual primitive, they still aren’t “experiencing” the mysterious redness of red, just . . . I don’t even know how to end this sentence

I don't know how p-zombies work either, but you can still state the Hard Problem in other ways, notably the Mary's Room argument.

The zombie argument is best steelmanned not as an argument for the real possibility of zombies, or the naive imaginability of zombies, but for the sophisticated conceivability of zombies. If you know all physics, but can still conceive, without contradiction, of p-zombies, then physics doesn't determine that a functionally conscious entity will have phenomenal consciousness.

But *that's* basically the Mary's Room argument. So I would rather forget about the p-zombie argument, and just use Mary's Room.

But, having said that, the Mary argument only gets you as far as an epistemological non-physicalism.

Expand full comment
Taleuntum's avatar

The problem with the zombie argument is that it wants to have its cake and eat it too by strategic conflation of meanings:

It's easy to accept the "naive imaginability" of zombies (iirc what Chalmers calls prima facie conceivability), but then why would metaphysical possibility follow from that?

On the other hand, it's reasonable to accept that metaphysical possibility follows from "sophisticated conceivability" (iirc what Chalmers calls ideal conceivability), but then how do you prove that the zombies are actually conceivable in this sense? We are not ideal agents in a perfect reflective equilibrium.

Mary's Room is a much more honest thought experiment; it's clear from the start that it leans on your intuitions about qualia. However, you still can't get to epistemic non-physicalism; you only get a much weaker conclusion: not all knowledge can be acquired from books/language. Mary learning new information is perfectly consistent with Keith Frankish's theory that qualia are quasi-phenomenal, i.e. purely physical (but possibly complex, or gerrymandered) properties that introspection misrepresents as being phenomenal.

Expand full comment
Tarpitz's avatar

'So maybe it’s the best news for illusionism and deflationist physicalisms, which have to explain why we talk about qualia even though there “is” “no” “consciousness”'

Everyone has this bass ackwards. It's the best news for metaphysical idealism, solipsism and functionalism about the material.

Minds and qualia (at least mine, at least presently) just do exist, for all the old Descartes/Nagel reasons. There is something which it is like to be me, and hopefully you.

What does it even mean for a chair to actually exist? How would you distinguish that world from one where it didn't?

Expand full comment
wubbles's avatar

Raymond Smullyan has a delightfully twisted bit in "This Book Needs No Title" (but maybe it's a different one of his) where a man resolves to commit suicide by becoming a p-zombie. However, during the night his wife poisons him with the drug that turns people into p-zombies, and in the morning he goes off and injects himself with the drug anyway. Thus showing a bit of an issue with the idea of p-zombies.

Expand full comment
Whest's avatar

At risk of being dismissive, what is the point of the P-Zombie as a thought experiment? It seems like Chalmers intends it as a proof of the existence of qualia, but you have to presuppose the existence of qualia to even engage with the proof at all. Like, ask a physicalist to "imagine someone who functions exactly as you do, except that they lack qualia" and they'll probably stop you right there. A physicalist would reject the premise.

Expand full comment
SurvivalBias's avatar

Yes! Now can we just make the obvious logical step that there is nothing "mysterious" about "qualia", it's just how information-dense data packets feel on the inside, and move on from this whole silly discussion?

Expand full comment
Stuart Armstrong's avatar

I've argued that physical systems can operate in a way that is akin to "experiencing qualia": https://www.alignmentforum.org/posts/rmBS5nTJh6pxERWEu/short-summary-of-mairy-s-room

I'd roughly argue that, in this case, if the p-zombie accurately says: “My reportable mind receives the color information as a 2D grid in which each pixel conveys a irreducible sudden intuitive sense of being the correct color, **furthermore this triggers involuntary effects in the deeper parts of my mind that are similar to what happens in the minds of qualia-experiencers**”...

...then the p-zombie actually experiences qualia.

Expand full comment
Mark Foskey's avatar

You write, "It still seems like, even if the p-zombie is using an inner encoding scheme in which red is represented by a conceptual primitive, they still aren’t “experiencing” the mysterious redness of red,...." My intuition is different. It seems to me that you have just explained qualia.

You wouldn't expect the various components of your brain to register that conceptual primitive as an abstract code in the way we process external abstract codes. That seems very homunculusy to me. Instead, I would expect your brain to engage with these primitives in a very system 1 way, for which "experiencing it" is a pretty good description.

Contra Chalmers, I would say that his superintelligent particle-location-knower would know enough to know that the minds possessed qualia. By watching the internal processes of the brain, the location-knower would be able to see the patterns of firing representing redness, and see how it led to emotional and conceptual associations, and recognize that as a quale. Chalmers would say that having a series of propensities is different from *really experiencing* something, but I don't see what he bases that on.

Expand full comment
Elohim's avatar

I don't think qualia exists. If a superintelligent being knew all the positions of all particles in a large region of the Universe, then he should be able to simulate the nervous sensation of seeing a particular color. For example, see The Big Picture by Sean Carroll or this review by Sabine Hossenfelder: https://backreaction.blogspot.com/2016/05/book-review-big-picture-by-sean-carroll.html

Expand full comment
Leppi's avatar

> I don't think qualia exists.

That is a strange take, I think it must imply either:

1. You actually don't have qualia yourself. I.e. you don't experience a self that is observing the world.

2. You mean something different than the usual meaning with the word 'qualia'.

Expand full comment
pondmaker mudscryer's avatar

This is a complex philosophical argument to justify the idea that other things aren't conscious, when actually it's simpler to just reason that everything that exists has qualia inherent to its specific kind of consciousness.

Expand full comment
stewbasic's avatar

Scott, what would you point to as the most convincing argument against physicalism? In case my terminology is not quite right, I'm looking for an argument that "physics, as we currently understand it, cannot conceivably be sufficient to give rise to all aspects of the human experience."

Expand full comment
St. Jerome Powell's avatar

How about a third option: that these supposed p-zombies, who have an irreducible mental primitive of ineffable redness, actually do have qualia, and are aware of it, and that's all there is to it?

Expand full comment
Ted's avatar

The philosophical zombie concept is meaningless and should not be used. It just builds the paradox into the definition from the beginning. If married bachelors were to marry, where would they find their wives?

Expand full comment
Ted's avatar

Philosophical Zombie is a meaningless concept that uses paradox in the definition to create hype. It's like we made a word for married bachelors and discussed their spousal problems.

Expand full comment
dlkf's avatar

There is no philosophical position I find more grating than "there's nothing mysterious about consciousness, you're just confused." I disagree with Bentham's Bulldog about basically everything, but he will always be in my good books for the line "Assume that you are not Dan Dennett; you are conscious."

Expand full comment
George Collier's avatar

I asked a supposed P-zombie ChatGPT how it knew that a rose in a picture was red.

Here is the reply “I determine the color by analyzing the actual pixel data from the image you uploaded. When I look at the image, I can “see” the dominant colors by processing the RGB (red, green, blue) values in the photo. In this case, the petals of the flower have very high red values and low blue/green values, which means the flower appears in shades of red—specifically a deep, rich red typical of a mature rose.

I also take into account lighting, shadows, and saturation, which help me distinguish between similar hues (like red vs. burgundy or crimson).

So while I don’t see in a human way, I do analyze images based on color data, patterns, and contrasts—kind of like a painter with a digital eye”
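For what it's worth, the analysis it describes is mundane to implement. A toy sketch (my own, with invented thresholds; not ChatGPT's actual pipeline):

```python
import numpy as np

def dominant_colour(pixels):
    """Classify a region from its mean RGB channel values."""
    r, g, b = (pixels[..., i].mean() for i in range(3))
    if r > 1.5 * g and r > 1.5 * b:               # red dominates green and blue
        return "red" if r > 120 else "burgundy"   # brightness separates close hues
    return "something else"

petals = np.zeros((10, 10, 3), dtype=np.uint8)
petals[..., 0] = 180                              # high red, low green/blue values
print(dominant_colour(petals))                    # red
```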

Expand full comment