There’s a long-running philosophical argument about the conceivability of otherwise-normal people who are not conscious, aka “philosophical zombies”. This has spawned a shorter-running (only fifteen years!) rationalist sub-argument on the topic. The last time I checked its status was this post, which says:
1. Both Yudkowsky and Chalmers agree that humans possess “qualia”.
2. Chalmers argues that a superintelligent being which somehow knew the positions of all particles in a large region of the Universe would need to be told as an additional fact that any humans (or other minds possessing qualia) in this region of space possess qualia – it could not deduce this from mere perfect physical knowledge of their constituent particles. Therefore, qualia are in some sense extra-physical.
3. Yudkowsky argues that such a being would notice that humans discuss at length the fact that they possess qualia, and their internal narratives also represent this fact. It is extraordinarily improbable that beings would behave in this manner if they did not actually possess qualia. Therefore an omniscient being would conclude that it is extremely likely that humans possess qualia. Therefore, qualia are not extra-physical.
I want to re-open this (sorry!) by disagreeing with one sentence in point 3: the claim that it is extraordinarily improbable that beings would behave in this manner if they did not actually possess qualia. I think beings would talk about qualia - the “mysterious redness of red” and all that - even if we start by assuming they don’t have qualia. I realize this is a surprising claim, but that’s why it’s interesting enough to re-open the argument over.
Start by imagining a race of p-zombies who are exactly like humans, except for two things. First, they don’t have conscious experience. Second, they don’t necessarily report having conscious experience; if we want to claim that they do, we’ll have to derive this fact from first principles.
These p-zombies talk to each other (like humans do), and an outside observer might notice that they report on some levels of mental processing, but not others (like humans do). For example, they might fail the infamous PARIS IN THE THE SPRINGTIME test, reporting only one THE rather than two. The observer would conjecture that the p-zombies’ speech is produced by a part with access to high-level processing (after the Paris sentence has been rounded off to its more plausible alternative), but not low-level processing (the base-level sense-data including both “the”s). Thus, the observer would reinvent the idea of the “conscious” vs. “unconscious” mind. This isn’t surprising or a contradiction of our premise - this is a different sort of “conscious” (easy problem) than the one we agreed the p-zombies lack (hard problem). But it will be linguistically awkward, so let’s call this distinction the “reportable” vs. “unreportable” mind.
Suppose the observer shows the p-zombie a picture of a rose, and the p-zombie describes it as red. If the observer asks the p-zombie to recount how their reportable mind came to know that it was red, what might they answer?
They wouldn’t answer “The light triggered the rhodopsin-based photoreceptors in my eye, the signal was transmitted to my brain, and it eventually reached the speech centers and made them say the word ‘red’”. After all, we hypothesized that the p-zombies don’t know anything humans don’t know, and most humans don’t know what “rhodopsin” is. In fact, we can imagine a primitive tribe of p-zombies who don’t know any biology - they don’t even know what the brain is - but who still have to be able to answer this question. Although these words are a correct description of what’s happening to the p-zombies neurologically (just as they would be a correct description of regular humans), there has to be some other answer - something they could actually tell us when we ask.
And they wouldn’t answer “IDK, my mouth just moved and formed the syllables ‘this is red’”. Normal humans can easily tell the difference between a voluntary action and an involuntary spasm (eg if your limb jerks because of an electric current or a seizure). In fact, this faculty is so profound that its failures contribute to conditions like schizophrenia; when someone loses the ability to interpret their speech as self-produced, they start formulating hypotheses like “the CIA put a chip in my brain that controls my actions”. Since the p-zombies can do anything humans can (including distinguishing voluntary vs. involuntary actions, and getting schizophrenia), they must be able to report something other than “my mouth moved but I can’t say why”.
I think they would have to say something similar to what we say: “My reportable mind received a packet of visual data, and after examining/analyzing this packet, I was able to tell that the rose was red.”
Could they describe this packet of visual data further?
The packet can’t just be a verbal description of the rose, like “There is a rose in the scene. It is red.” After all, if the p-zombie can do anything that humans can do, it can use the packet to draw a somewhat faithful reproduction of the rose, including details like how many petals it has, their orientation relative to one another, and the exact way that the stem bends. It would take a novella’s worth of words to describe a rose in such detail (consider how many words it would take to describe a complex image so that someone who read the words could draw it as faithfully as someone who really saw the image). So the packet must be a rich spatial representation of the rose’s edges, colors, size, et cetera. Given the speed with which the p-zombie could calculate distances (eg “the center of the rose is further from the first leaf than the first leaf is from the bottom of the stem”) and turn the packet into a 2D sketch, I have trouble thinking of it as anything other than already organized in a 2D grid.
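To make the contrast concrete, here’s a toy sketch in Python (purely illustrative - the class, the field names, and the landmark coordinates are my own inventions, not a claim about how any mind actually stores images). A verbal description supports none of the operations above; a grid-like packet with named landmarks makes the distance judgment a one-liner.

```python
from dataclasses import dataclass
import math

# A verbal description: compact, but petal positions, stem curvature, and
# relative distances can't be recovered from it.
verbal_packet = "There is a rose in the scene. It is red."

# A grid-like packet: every location carries its own data, so the spatial
# question from the text ("is the rose's center farther from the first leaf
# than the first leaf is from the bottom of the stem?") reduces to arithmetic.
@dataclass
class VisualPacket:
    grid: list[list[str]]                  # 2D layout; each cell "says" something about color
    landmarks: dict[str, tuple[int, int]]  # named points of interest as (row, col)

    def distance(self, a: str, b: str) -> float:
        (r1, c1), (r2, c2) = self.landmarks[a], self.landmarks[b]
        return math.hypot(r1 - r2, c1 - c2)

packet = VisualPacket(
    grid=[["red"] * 64 for _ in range(64)],  # toy 64x64 field, all "red"
    landmarks={"rose_center": (5, 32), "first_leaf": (40, 10), "stem_bottom": (60, 18)},
)

print(packet.distance("rose_center", "first_leaf") >
      packet.distance("first_leaf", "stem_bottom"))  # True with these made-up coordinates
```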
How is color information communicated in this 2D grid? Since this is a p-zombie who doesn’t have “real experience”, one might naively expect it to be something like a bitmap, with each pixel containing the coordinates of the color in an RGB color space.
But imagine presenting the p-zombie with this image:

[image: a bitmap written out as raw numbers - a grid of RGB color coordinates, one triplet per pixel]
…and asking them to tell you what it shows, with a time limit of 100 milliseconds. Since the p-zombie has only the skills a regular human could have, it would fail: interpreting a bitmap like this must be done laboriously by hand.
But the visual field is a bitmap thousands of times bigger than this, and the p-zombie can interpret it within 100 ms. So the pixels must be presented not as RGB color coordinates, but in some kind of rich color language that produces an immediate experience of color without requiring any further thought or processing.
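Here’s the same contrast as a toy sketch (again, just an illustration with made-up pixel and reference values): raw RGB coordinates have to be interpreted by an explicit computation, while a representation whose pixels already carry the color as a primitive leaves nothing further to interpret.

```python
# Two encodings of the same 2x2 patch of a red rose (made-up values).

# 1. Raw RGB coordinates: answering "what color is this?" requires an explicit
#    interpretation step, e.g. comparing each pixel against reference colors.
rgb_patch = [[(201, 23, 30), (198, 25, 28)],
             [(203, 22, 33), (199, 27, 29)]]

REFERENCE = {"red": (200, 25, 30), "green": (30, 180, 40), "blue": (25, 40, 200)}

def nearest_color(pixel):
    # Squared-distance comparison against each reference color - the kind of
    # laborious, by-hand work the 100 ms time limit rules out.
    return min(REFERENCE,
               key=lambda name: sum((p - r) ** 2 for p, r in zip(pixel, REFERENCE[name])))

print({nearest_color(px) for row in rgb_patch for px in row})  # {'red'}

# 2. "Color primitive" encoding: each pixel already is the answer; reading it
#    off requires no further processing.
primitive_patch = [["red", "red"],
                   ["red", "red"]]
print({px for row in primitive_patch for px in row})           # {'red'}
```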
If the p-zombie says this - “My reportable mind receives the color information as a 2D grid in which each pixel conveys an irreducible, sudden, intuitive sense of being the correct color” - then what’s the difference between that claim and “I experience the mysterious redness of red”?
This argument confuses me. It still seems like, even if the p-zombie is using an inner encoding scheme in which red is represented by a conceptual primitive, they still aren’t “experiencing” the mysterious redness of red, just . . . I don’t even know how to end this sentence. Just using an encoding scheme that matches it perfectly and causes them to describe it the exact same way that we do?
I’m not even sure which direction to update on this. If you don’t need consciousness to claim to have qualia, this is good news for epiphenomenalism and other positions where consciousness doesn’t interact with the physical world (and therefore cannot cause our claims that we have qualia). But it doesn’t fully defuse the intuitive inelegance of these positions, where it’s a baffling coincidence that we both claim to have qualia, and actually have them. So maybe it’s the best news for illusionism and deflationist physicalisms, which have to explain why we talk about qualia even though there “is” “no” “consciousness”.
But these still fail to explain how and why we so obviously experience consciousness, not just in the sense of there being a mysterious redness of red, but in the sense where there’s “someone” “there” to appreciate it.
To fend off the inevitable accusations - I’m not claiming to be the first person to ever think of this, I’m not claiming I’m an autodidact genius who is better than real academic philosophers, I agree I am scum and not worthy of kissing the boots of anyone with formal credentials, please don’t kill me. I’m just saying I personally don’t know of anyone making this exact argument before, and I think it’s interesting and worth talking about even without clearing the bar of spending weeks reviewing every philosophy paper ever written until I figure out that it’s similar to an idea in Schmoe & Schmendrick 1972. Also, if I did do that, you would obsess over some way it’s subtly different from Schmoe & Schmendrick 1972, accuse me of misinterpreting them, and get mad anyway. Still, if you know of prior work on this topic, let me know and I’ll edit it in.