412 Comments
Comment deleted

Technically, people with locked-in syndrome have authored entire books using winks, but yes, it is useful to imagine this isn't possible for the sake of this question.


Especially given the weight placed on trusting reports as a fundamental principle of the study.

Due to a minor screw-up, I've been given doses of muscle relaxant by IV sufficient to shut down all voluntary and most involuntary movement, without the dissociative drug intended to prevent me from being conscious during the process. Couldn't breathe, move an eyeball, etc.

I was self-aware enough to understand that this frankly terrifying state wouldn't last long, and to exercise some measure of control over my building panic.


No, Dehaene is using the ability to report as a working definition that applies in standard situations, not as a universal definition that should apply everywhere. He believes that some locked-in patients are conscious, and others are not. There is a whole chapter in which he describes his work on how to discriminate between the two.


I think you answered your own question! The reports of people that have recovered seem like _decent_ evidence of the same being true for others.

deleted · May 14, 2022 · edited May 14, 2022
Comment deleted

>This "Bayesian-at-all-levels" theory does seem to be more parsimonious (it doesn't postulate a "sampling" mechanism in the brain) and more evolutionarily advantageous (since Bayesian reasoning would seem to work better in all cases).

Perhaps I am misunderstanding your point, but it seems to me that deciding on one interpretation with certainty would be advantageous over Bayesian reasoning in many situations, specifically those requiring decisiveness and confidence. For instance, in physical or social combat, where overconfidence defeats underconfidence.

Comment deleted

Isn't having a very strong prior ultimately equivalent to Dehaene's single interpretation? Even if the conscious brain makes an interpretation, it isn't permanent and is subject to additional sensory input, which could be thought of as much less frequent Bayesian updating.
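
That equivalence can be made concrete with a toy two-hypothesis Bayesian update (the numbers below are purely illustrative, not from the book): with a near-certain prior, a single piece of contrary evidence barely moves the belief, so the system behaves as if committed to one interpretation, while sustained contrary input eventually flips it.

```python
def update(p_a, lik_a, lik_b):
    """One Bayesian update of P(A), given evidence with the stated
    likelihoods under rival interpretations A and B."""
    num = p_a * lik_a
    return num / (num + (1 - p_a) * lik_b)

# Near-certain prior: one piece of contrary evidence barely moves it,
# so behavior is indistinguishable from a single fixed interpretation.
p = 0.999
p = update(p, 0.3, 0.7)
print(p)  # still above 0.99

# Sustained contrary evidence eventually overturns it -- the "much less
# frequent Bayesian updating" described above.
for _ in range(10):
    p = update(p, 0.3, 0.7)
print(p)  # now well below 0.5
```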


For the evolutionary perspective, a possible explanation could be computational cost/speed. Communicating to agree on one most probable state (or to sample appropriately from the possible states) must be quicker than agreeing on a full probability distribution, which contains much more information (multiple states plus probabilities). And sampling ten times per second seems superior in most situations to getting one full probability distribution once or twice per second. Especially considering that when more than one state has large probability, you should get a state rapidly changing back and forth, alerting you that something fishy is going on (this actually happens for some optical illusions).
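
The flip-flopping can be sketched with a toy sampler (the states and probabilities are hypothetical, chosen only for illustration): drawing one discrete percept per "frame" from the posterior gives a stable percept when one interpretation dominates, and rapid alternation when two interpretations share the probability mass.

```python
import random

def sample_percepts(posterior, n_frames=10, seed=0):
    """Draw one discrete percept per frame from a posterior over
    interpretations (a dict mapping state -> probability)."""
    rng = random.Random(seed)
    states = list(posterior)
    weights = [posterior[s] for s in states]
    return [rng.choices(states, weights=weights)[0] for _ in range(n_frames)]

# Unambiguous stimulus: the sampled percept almost never flips.
print(sample_percepts({"duck": 0.98, "rabbit": 0.02}))

# Ambiguous stimulus: the percept changes back and forth between frames,
# like a bistable optical illusion.
print(sample_percepts({"duck": 0.5, "rabbit": 0.5}))
```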


How Science Is Trying to Understand Consciousness:

https://www.youtube.com/watch?v=Xetgy2tOo9g


The weakest part of this review is Appendix A, which goes into a good discussion of composition starting from Dennett's "multiple drafts" model. But composition, while interesting, simply doesn't address the "hard problem," which is: "why are there any qualia *at all* in conjunction with the relevant neural activity?"

We know perfectly well that there *are* qualia in conjunction with the relevant neural activity. One can claim that the qualia "just are" said neural activity, but that is simply to abuse or misunderstand the concept of qualia.

I would much prefer that scientists say "I'm not going to discuss that," or "that's outside the scope of my study," than that they fudge the concepts and pretend to answer a question that they haven't.

Comment deleted

"You claim to have qualia, but we can't confirm."

You don't need to confirm *my* qualia. You need only note your own – which I bet you have, since you read my comment.

"So you've got no reason to think qualia aren't an emergent property of a biological arrangement of matter."

I made no claim about the relation of qualia to their correlated neural activities, other than to observe that they are distinct kinds of things. The claim that they are "emergent properties," if true, would hardly dispel the hard problem.


How can I confirm my own qualia? Serious question.


They don't need confirmation. They are literally that which you directly perceive.

People get weirdly hung up on the word "qualia," like it means something special. It totally doesn't.

Say you open your eyes and see a red object. That's qualia. Period. That's it. No one is claiming that it's some special something else. Your perception of red may not be correct in the sense of corresponding to the wavelengths of light reflecting off the object. Maybe you took a psychedelic that is distorting color perception. That's fine. Doesn't matter, because we are making no claim about the relation of the qualia to anything.

You need merely note that you directly confirm its existence because you are seeing a red object.


Ok I definitely can see the colour red. In what way am I experiencing it that a computer cannot?


Not being computers, you and I don't know the answer to that. That's why I didn't make any claim about that.


You should read Mary's Room:

https://en.wikipedia.org/wiki/Knowledge_argument

If you can answer your own question, you've answered the hard problem.


Ok, but you express the hard problem as, "Why are there any qualia at all?" But there's no mystery as to why I see a red object: it's because there's a red object before my eyes. Of course there's some technical explanation as to differential light scattering, lenses, rods, image processing and so on, but at base I perceive what's in front of me.

Circling back to the beginning, the writer says, "a perception or a thought is conscious if you can report on it." I can report seeing a red thing, so I'm conscious of it. How can I report seeing a red thing? That, I think, is the easy problem. But there's nothing else which requires an explanation, because as you correctly recognise, the "quale" of seeing a red thing is nothing other than opening your eyes and seeing a red thing. There isn't some further element ("what it is like to see red") of which we can ask, "Why does this exist?"


"The 'quale' of seeing a red thing is nothing other than opening your eyes and seeing a red thing. There isn't some further element ('what it is like to see red') of which we can ask, 'Why does this exist?'"

No, 'quale' refers precisely to 'what it is like to see red,' which is what I have been consistently referring to as the subjective, first-person experience. The mere fact of there being a red object in the room is not sufficient, because then I might have an experience of such even when I was not in the room, which I normally don't.


I claim that I don't have any qualia other than what Appendix A predicts I would have.

I have been using a model similar to Appendix A for over a decade and I have never in that time felt like I was "experiencing" something that needed additional explanation. I perceive things, I create a narrative of self based on those perceptions, I process language and write paragraphs like these. I don't "feel redness with such clear presence that it must be something other than a conscious processing of various electric perceptions."

I have no idea what you're talking about when you say that your qualia are distinct from their correlated neural activities. I can maybe make sense of that as a signifier/signified distinction (equivalently a word/object or map/territory distinction) because qualia are how we encapsulate the totality of said neural activity? From that perspective qualia only come into existence when we try to describe our neural activity.

Continuing to discuss the "hard problem of consciousness" feels stupid. Like God, I can't technically prove that it doesn't exist, but the onus is on you to prove that there's anything here worth talking about.


If you open your eyes and see a red object, that's qualia. Period. That's it. No one has claimed that it means something special or different from that immediate experience.

If you claim not to have qualia in that simple sense, then you are making a very odd claim, one that differs not only from my direct experience, but also from the ubiquitous intersubjective reports of others.

If you believe: (A) subjective experience is causally produced by neural activity, that's fine. I haven't objected to (A).

- If you believe that (A) means there is no hard problem, then I think you haven't understood the hard problem.

- If you believe that (A) actually explains why the relevant neural activity should give rise to first-person experience, then I think you're wrong. (A) doesn't explain that at all; it merely presents the relation as a fact.


I don't know if it's intentional, but you're consistently phrasing things in ways that evade the crux of the disagreement.

> If you believe: (A) subjective experience is causally produced by neural activity, that's fine. I haven't objected to (A).

Practically everyone save for the full 'pre-established harmony' substance dualists could agree with some level of neural activity causally producing subjective experience, including the property dualists and the panpsychists.

The question is whether that link is *necessary* or not: if you have logical supervenience, then p-zombies are not logically possible and you're breaking from the standard formulation of the Hard Problem in favor of an easier one. If instead you're arguing that neural activity does *not* logically entail subjective experience, then you're making an affirmative case beyond definitions. There's always been finger-pointing as to who has the burden of proof there, but it's fundamentally a disagreement amenable to physical cognitive research.

> - If you believe that (A) actually explains why the relevant neural activity should give rise to first-person experience, then I think you're wrong. (A) doesn't explain that at all; it merely presents the relation as a fact.

Analogous problems have existed, and been resolved. Would you say that biochemistry "gives rise" to life? Or does it simply present the relation as a fact?

May 17, 2022 · edited May 17, 2022

Pretty sure it's not intentional? Sincerely trying to respond to folks as clearly as I can.


I can't tell if (A) includes "and we understand how that causation works" or literally just means "a causal mechanism exists."

I don't know what "the hard problem of consciousness" means to you, especially since you seem to define qualia as "the ability to perceive colors" which to me means that cameras have qualia.

I don't see how you can read the post we're commenting under and think there's anything left to explain besides the nitty gritty of exactly which parts of the brain regulate blah blah blah. The broad mechanism by which a mechanical brain produces the sensation of subjectivity is solved, and if there's a remaining question I'd love for you to state what it is.

To make as clear an empirical claim as I can muster: with access to a more polished version of GPT-3 (one basically able to pass the Turing test consistently), I could personally program a conscious mind who would experience subjectivity essentially how I do. If asked whether it could look at a text file and have a quale of reading the words, it would say yes, and it wouldn't be lying any more than I am when I claim to have a quale of reading your comment.

All I'm doing is looking at the words, processing them semantically, putting their content into the global neuronal workspace, storing them to memory, and then recalling that memory and processing a response (periodically printing more thoughts to the global neuronal workspace as I do). That's what being alive feels like to me: processing data, having thoughts, storing and retrieving those thoughts and processing them.

And to be clear, when I say "what it feels like to me" I mean that the thoughts I've stored in working memory all refer to various steps in that process. I can't find any thoughts which don't refer to steps of that process, and I've been checking for 10 years. All those steps are explained in this article, and so I output to this text box "my brain is solved."

Expand full comment

"I can't tell if (A) includes 'and we understand how that causation works' or literally just means 'a causal mechanism exists.'"

I mean that, even if we were to map the correlation between neural activity and subjective, first-person experiences to the level of reliable and richly detailed if-then relations, this would not eliminate the hard problem, which is the question of why such experiences should accompany said neural activity.

Expand full comment

Stephen replied before I could. But there's an irony in claiming that qualia are an emergent property while at the same time questioning their existence. That's the trouble with emergent phenomena—they're *emergent* and not predictable from the underlying systems. But don't worry. Life is an emergent phenomenon from chemistry, and we still don't have a consensus definition of what constitutes life. But if I were to claim life is just an illusion, people would laugh at me. The same goes for consciousness. ;-)

Expand full comment
May 16, 2022 · edited May 16, 2022

It's hard to talk about because the problem of consciousness has been explained away (https://www.lesswrong.com/posts/cphoF8naigLhRf3tu/explaining-vs-explaining-away) but people continue to talk about it so we need a word for the thing.

If I say "mind reading isn't real, it's just people getting lucky and reading body language", you could similarly protest that I contradict myself. Is mind reading a thing that charlatans do, or is it non-existent? If you know a good way to deal with this ambiguity, I'd love to hear it.


I'm probably not a good person to ask. I've had so many weird supernatural-type things happen to me over the years that I've come to provisionally accept that weird supernatural shit happens. But it's not something I can reproduce at will. And I certainly couldn't reproduce it in a sterile lab environment!

As an aside, a friend who's a high-energy particle physicist says he's encountered lots of irreproducible signals in his experiments. I asked him if he tries to understand them—because something obviously happened to create each signal. He says (a) he doesn't want to waste his time chasing down something that's probably irreproducible, (b) it can't be explained by the theoretical framework he's working in, and (c) he doesn't want to get laughed at by the other physicists on his team. So, even in the hard sciences, there's a lot of unexplained shit happening around us, but we just try to filter it out and/or explain it away.

May 14, 2022 · edited May 14, 2022

Well said! But I'll take my criticism one step further and say that Dehaene is regurgitating Daniel Dennett's old reductionist hokum. Dennett was the original multi-drafter—but 30 years after positing it, his theory is still not falsifiable with today's technology. What they're really trying to do is to build a theory to remove the observer from the experiment, but what they've come up with is pseudo-science. "Given what we know about consciousness, if we assume the multiple drafts model, then we should naturally expect the subjective perception of a first-person observer." And it all sounds very profound—until we realize that it's an illusion of understanding that they're offering us. And in the end, we're no closer to understanding why we experience our qualia and can continually regard it with a sense of the self.


Yeah, physicalist discussions of the hard problem of consciousness are always bound to be uninsightful. Better to just not address the subject.


Most of the hardness in the hard problem of consciousness comes from the fact that when philosophers say the word "qualia", they think they're communicating, but most of the time they actually aren't, because they haven't defined the term in a way that means anything. Imagine meeting an alien named Zark, who is a Zorblaxian from the planet Zorblax. The Zorblaxians have minds that work pretty much the same as human minds: If humans have qualia (whatever those are) then the Zorblaxians do too. But they don't necessarily share human words or concepts about how the mind works. Challenge: How do you explain "qualia" to Zark? How do you define "qualia" to an alien, without appealing to circular reasoning, or defining it in terms of other hazy and undefined concepts?


The problem is neither with philosophers nor the word "qualia." None of this stuff about explaining the word is to the point.

When I open my eyes and look about, I have the subjective, first-person experience of seeing colors. This is surely related to various patterns of neural activities. Few deny that. But just as surely, I deny that the subjective experiences "just are" the neural activity. The two things differ in kind.

"Qualia" is merely a word for those subjective, first-person experiences. If you too have such experiences – and I bet you do – then no further explanation of the word is necessary.


Right. The OP's attempted refutation also applies to something like "sadness"; are we to say "sadness" doesn't mean anything?

This inability to articulate something without reference to shared experience is not a sign that words like "qualia" are meaningless, but rather a part of why there's a Problem of Consciousness at all: if you have experienced qualia/sadness, you know what it is. If you haven't, no one can communicate it to you.


"The Zorblaxians have minds that work pretty much the same as human minds: If humans have qualia (whatever those are) then the Zorblaxians do too."

You can assume that Zark is just a regular dude, not some kind of extremely rigour-demanding philosopher. Pointing to shared experiences is fine. So are definitions that rely on examples: Give him a few examples, and Zark is perfectly capable of using induction to figure out what you probably mean.


In that case my email program has "qualia". It perceives and reacts to signals. It even looks at some of the signals and decides that they are spam. (It's usually correct.)

I think you need a more precise definition. Perhaps you think "subjective, first-person" covers that, but I can interpret that in a way that covers my email program. This clearly isn't what you want to mean.


You're treating this as a dispute about the usage of the word "qualia." It's not. The fact that, when I open my eyes, I see patterns of color, has nothing to do with how we treat the word "qualia." There are two additional facts that I don't directly know but sure would be willing to bet on: 1) when you open your eyes, you also see patterns of color; 2) your email program, regardless of its behavioral sophistication, does no such thing.

These are the salient facts. Arguing about behavioral qualifications for the word "qualia" is entirely missing the point. Qualia are not external behaviors.


This seems to be the core of disagreement. I disagree with the combination of 1 and 2. I think that if 1 applies, then 2 is false.

(Of course, there are a lot of technical aspects in which the qualia of myself and of my email program differ. But I don't see any reason to assume that there is a fundamental difference.)


Fair enough. As I said, 1 and 2 are my "bets," not things I claim to directly know. And my confidence in 1 is *much* higher than my confidence in 2. So perhaps we aren't as far apart as one might think.


What is the basis for "your email program, regardless of its behavioral sophistication, does no such thing"? I *think* that you're presuming that people don't have an (as yet not understood) method which "sees the patterns of color", but I don't understand why, or possibly in what way, what a person does is intrinsically different from what an embodied computer program does. Or even in what way one could suspect it was different.


The statement you quote is in reference to literal email programs, not arbitrary embodied programs. I said I am willing to "bet" it is true in the case of literal email programs, not that I claim to know it is true for programs in general.

May 14, 2022 · edited May 14, 2022

Me: Zark, qualia are "Subjective first-person experiences"

Zark: Right, so when I experience the colour red, I can remember it afterwards, I can have a train of thought about the colour red, and how exactly this colour red I'm seeing right now looks. I can compare it to the colour green I saw a few minutes ago. My whole self is aware that I'm seeing the colour red, and I can reflect on that. That is what it means to experience something. But the experience is also subjective and first person. "First person" is simple. These thoughts and reflections and experiences are happening in Zark's head and Zark's head alone. If they were happening in someone else's head, they would be someone else's qualia, not Zark's. "Subjective" is a little more interesting. Maybe multiple people can see the same red object I'm seeing. From an information theory perspective, we all have the same information about that object's colour. But Zark might be reminded of the red Zarglewibbler owned by his brood-mother, and filled with fond memories of her, while other Zorblaxians seeing the red object would probably be having completely different thoughts and emotions in response. Qualia, then, refers not to the fairly objective information one senses about the environment, but more to one's subjective internal thoughts and reactions to that information. The interpretation of the data, more than the data.


That description seems in no way relevant to the hard problem of consciousness. You're just saying that when Zark sees red it's different from when I see red, both because the neural activity takes place in different locations and because our brains are different in subtle ways that affect how we process that data. Okay, granted. Is that all people mean by qualia? If so, why does it come up in conversations like this?


I would guess this is just the most 'natural' way for people to describe their 'subjective first-person experience'.

I think one confusing 'thought experiment' is: imagine person A sees a 'red' rose and sees A-qualia-red – the qualia of 'red' for person A – but when person B sees the same rose, they see A-qualia-green. That seems like a puzzle that only 'qualia' could solve! But really, even 'color' is way more complicated – but also _structured_ – so it's a bit like the Chinese Room experiment in that it loses basically all of its force if you spell out all of the explicit details, e.g. how long exactly (or even approximately) it would take a person in the Chinese Room to process the input and calculate the output.


This is good – but apparently not enough to bridge the gap.

It probably _feels_ like there's something left over that your comment doesn't cover. I think that's intuitive! But this part in particular is the kind of thing that made me skeptical of 'qualia':

> From an information theory perspective, we all have the same information about that object's colour. But Zark might be reminded of the red Zarglewibbler owned by his brood-mother, and filled with fond memories of her, while other Zorblaxians seeing the red object would probably be having completely different thoughts and emotions in response.


This sounds like Wittgenstein's passage about the beetle in the box. We've each got our own box and we say we've got something in it, and we seem to learn to use the word "beetle" for the thing in the box, but he seems to suggest that there's no way to attach the word to the thing in the box itself, rather than to the act of talking about the box generally.


> When I open my eyes and look about, I have the subjective, first-person experience of seeing colors. This is surely related to various patterns of neural activities. Few deny that. But just as surely, I deny that that the subjective experiences "just are" the neural activity. The two things differ in kind.

That doesn't make any sense to me.

I _could_ deny that me doing _anything_, e.g. speaking, is 'just' the neural activity. I _could_ deny that destroying my brain would cause my subjective first-person experience to stop too.

But I don't think any of that is obviously true, and I'm (very) skeptical that any of them are.

Arguments via introspection seem particularly brittle. If you accept that introspection is even convincing, or any evidence at all, then surely you also have to accept it as evidence when someone else's introspection disagrees with yours.

I will admit that 'qualia' is intuitively appealing – because that's the same part of me that can report its supposedly first-person subjective experience! That _is_ what it _feels_ like.


Of course, there are definitions of "qualia" that are satisfactory to many, so there is no objective problem of meaninglessness. And of course, someone can use "doesn't mean anything" to mean "doesn't mean anything to me", i.e. "doesn't fit within my world view".


It's good news such satisfactory definitions exist. Would you be willing to paste/link to the definition you think would be most likely to make Zark understand qualia? Part of the point of the exercise is to explain the concept to someone who doesn't yet have it as a part of their world view, but you can assume that Zark is willing to expand / change his worldview. Plus, even if Zark's worldview doesn't allow for ghosts, it should still be possible to explain the concept of ghosts to him and have him understand.


Anyone, including you, can find definitions by googling "definition qualia". Someone who was even remotely open-minded would have done so already. Someone who is completely biased on the subject will never accept any definition as "satisfactory", because then they might be in danger of changing their minds.


Okay, that returns the string: "the internal and subjective component of sense perceptions, arising from stimulation of the senses by phenomena"

Seems like a legit definition, and one that even Zark could understand. Zark's interpretation of this is going to be pretty similar to his interpretation of Stephen Pimentel's "first person subjective experience" definition, so I won't repeat it here.

Once a group of people agreed on this definition, they could certainly proceed to have a useful and productive discussion about the nature of qualia and whether they necessarily require some extra-physical component. My point was mainly that most people don't bother to do this before starting the discussion, and as a result they often end up using the word "qualia" in different senses, and as a result end up talking past each other.


That kind of thing happens fairly often... but do you have any evidence that it was happening in this discussion?

Why the "ghosts"?


I don’t want my reply to sound snarky, but you are making the problem sound harder than it actually is… As Stephen replied to you, qualia are subjective first-person experiences. A quale can be pain, it can be anger, it can be seeing the colour red or hearing a sound; it can be anything, and that is why the concept is important in philosophy of mind.

We are not the same, yet we can communicate with each other and have roughly the same experiences… Both of us can see a red colour that is objectively red (we have the same anatomy of the eye, with corresponding brain regions responsible for vision) and we can talk about it, but our conscious experiences differ because we are different individuals; we have different qualia.

To you, red means something slightly different than it does to me: you have different memories associated with it, you have seen red in different situations, etc. That is what makes your experience slightly different from mine, yet we both agree that we see the colour red. You see? Zark is fine, but Zark is a different species from another planet and so on… We can discuss it, and it might help solve the problem by bringing a different perspective, but we still don’t know what consciousness really is… Is it a set of neurons somewhere in the brain, as Descartes thought? Now we know it is not located in any one place in the brain.

Contemporary neuroscience thinks that consciousness is created by distributed processing in many neural networks, processing that is very probably nonlinear and not necessarily temporally bound. Consciousness is an integrative process over various features like thoughts, memories, attention, etc.

I suggest you read more about the neural correlates of consciousness and get a quick recap of how the CNS works. It might be helpful, because otherwise you can get lost in esoteric debates about aliens :) Peace!

P.S. You can start with From Sensation to Cognition by M.-M. Mesulam.


Correct me if I'm wrong, but it sounds like your criticism of philosophers here is that they're not able to define "qualia" in any way that makes sense in terms of the objective physical world (it can't be defined in mechanistic terms or in terms of a physical process). But this is exactly what the hard problem is -- our subjective experience of consciousness (qualia) cannot be defined in a coherent way in the physicalist model of reality. Since qualia are the one thing we can be 100% sure exist, the inability of the physicalist model to explain them (or even define them) seems like a flaw of the physicalist model, not a flaw of the concept of qualia.


Sounds like only philosophers can be "100% sure (it) exists" while physicalists can't. Doesn't that imply that it only exists in some paradigms but not in others?


Is the problem here a confusion between the questions "Why consciousness?" and "How consciousness?"

Dehaene seems content to answer "Why consciousness?" It turns out that consciousness is a very useful quality for animals: you can't have certain forms of complex behaviour without it, so evolution reaches into its bag of tricks and builds brains with consciousness. Easy-peasy!

This is somehow satisfactory but happily sidesteps the question of "How consciousness?"


The distinction is between "the physical states or processes associated with consciousness", which is what Dehaene addresses, and "consciousness per se", i.e. the fact that when those physical states or processes occur there are also things like subjective experience, knowledge, and intention.

These are fundamentally different questions. It's like the difference between "what are the physical laws of our universe?" and "why is there something rather than nothing?". No amount of mathematical detail will ever tell you why or how (or whether) it is that the things described by the equations are actually real.


To discuss that, you need to define "consciousness per se" in operational terms. How could one decide whether it is present or not? Accepting it without a good definition leads straight to "philosophical zombies".


The whole idea is that it cannot be defined in operational terms. I know that I have "consciousness per se" because I experience it directly. It IS my experience.

But I could never decide whether it is present or not in any other person or physical system, because it is not logically entailed in any collection of pure physical events.

It will always be possible for me to discuss anything you do as the product of mindless atoms doing their thing, and yet I know that for me that description would be incomplete. I really do have experiences and intentions and so forth. You know these things about yourself, but can never know them about me in the same way. That is the essence of the "hard problem". We know that experience exists, and yet it can never be logically entailed by a physical description of the world.


I accept that "consciousness per se" is an abstraction that is emergent and cannot be reduced to atoms, but if you can't define it operationally, then I've got to assume that your concept is something you haven't understood.


I may be misunderstanding what you mean by "operationally". What would you accept as an "operational" definition of conscious experience?

I think the basic essence of the problem is in your phrase "cannot be reduced to atoms". That sounds like a pretty straightforward synonym for "cannot be explained or described in physical terms".


I strongly agree. This sentence alone betrays a misunderstanding of where the difficulties with qualia lie: "Classically, we would say that "I" or **"the brain"** experiences the color red. This is also called a quale, plural qualia."

All I can easily say of qualia is that *I experience them*. Does my brain experience them? Does it generate them? Do they arise out of it as an inevitable consequence of computation? I don't know, but I know the first fact is that qualia exist.

"The hard problem of consciousness assumes that this experience is something that goes beyond neural activity." Not that experience goes beyond neural activity, but that our evidence of it comes before neural activity.

Hypothetically, evidence exists which could be presented to me to convince me that brains do not operate via electrochemical neural activity at all, just as you could convince me the moon is green or the sun is a big pile of firewood. (Though I have an extremely low estimate of the likelihood of seeing such evidence.) On the contrary, there is no conceivable evidence which could convince me that I do not possess subjective experiences.


Yes, very well stated.

"The hard problem of consciousness assumes that this experience is something that goes beyond neural activity." This statement is confused and imprecise, particularly with the words "goes beyond." The hard problem makes no "metaphysical" assumption about what experience is or how it relates to neural activity. It merely observes that experience is definitionally distinct from neural activity in that they are different kinds of things.


Perhaps we could rephrase the problem as: "Why is there at least one subjective observer in the universe?"

Can we ever hope to come to something approaching a solution? Well, we know a lot more than we did a couple of thousand years ago. The fact that my consciousness appears to be intimately tied to the physical state of neurons inside my brain (rather than, say, that chair over there) seems pretty darn suggestive.


A pretty compelling case can be made that qualia can exist without a subjective observer. Certain contemplative practices insist on this, i.e., that the experience of being a subject is made of qualia and can in fact be dissolved.

May 21, 2022·edited May 21, 2022

>It merely observes that experience is definitionally distinct from neural activity

This is the part of "hard problem" that I have never understood.

Egyptians defined the "Sun" as God Ra racing across the sky in a boat. (Later, a beetle.)

In reality, in the physical universe, the Sun has always been what it is: "It is a nearly perfect ball of hot plasma, heated to incandescence by nuclear fusion reactions in its core, radiating the energy mainly as visible light, ultraviolet, and infrared radiation." (Wikipedia.)

The two definitions (Ra, a nuclear fusion plasma) certainly describe two distinct, different kinds of things. There is, however, no "hard problem of the Sun": nobody complains that working through the theory of nuclear fusion doesn't feel like witnessing the mysterious workings of the falcon-headed god Ra, and says nothing about his dealings with Set and Apophis!

For some reason, there nevertheless is a hard problem of consciousness, usually argued as: "I can read a description of a mammal, its brain and its neural activity, and I can imagine an equivalent being, neurons firing and all, but I can also imagine that it doesn't feel like anything! The description doesn't capture my *feelings* of *feeling* like navel-gazing! It could just as well be a p-zombie without navel-gazing feelings! Gotcha, physicalists!"

Is there any reason to believe there could be human-recognizable "qualia" without human neurons or, more generally, without multicellular life recognizable to us? The scientific answer is obviously no: that is all we have seen. There is a good indication that our distant relatives the octopuses already have substantially different qualia (they have part of their brain distributed in their arms).

Neural activity in the brain in general feels, to that neural activity itself, different from the neural activity that happens when the said brain is reading a description of its own internal workings. That doesn't make the description of the internal workings any less accurate.


You're misreading 'definitional' as, in this case, relating merely to external description. Your example of the sun *does* relate merely to external description, but that's why it's not analogous.

It's not a matter of my saying so, with a subsequent verbal description, that makes it such that, when I open my eyes, I have a subjective, first-person experience of seeing a red cup. That's just a fact, a feature of reality. The hard problem is the question of why my visual processing should produce this subjective, first-person experience.

And no one is saying goofy things like "gotcha, physicalists!" Except you. You said that.


>Your example of the sun *does* relate merely to external description, but that's why it's not analogous.

Sure, it is not exactly the same. But I am willing to argue a person who took the Egyptian mythology as a description of reality would be dissatisfied with explanations of modern astronomy and physics.

>The hard problem is the question of why my visual processing should produce this subjective, first-person experience.

Why not? We can induce subjective, first-person experience by (metaphorically) tickling the right neurons. Tickling other substrates, such as rocks, doesn't make them tell us about their subjective experiences.

Our ancestors found themselves interacting with and manipulating colorful objects and many other things. And so it happened that they ended up with sensory and neural machinery for doing exactly that. Ancestors of roundworms like C. elegans did not (or at least their machinery is much more limited, to the extent that they can't have experiences as fine-grained as "I see a red cup").

The scientific answer is that if you put together a certain kind of biological cells in a particular way, the result exhibits behavior compatible with the subjective experience humans call consciousness. Hopefully one day science can tell us exactly which kinds of architecture produce only "unconscious" (or, let's be charitable, less conscious) processing and which produce (more) "conscious" processing. Maybe we can also create non-biological machines that exhibit similar patterns.

All of these are empirical questions. Where is the hard problem, then?

>And no one is saying goofy things like "gotcha, physicalists!" Except you. You said that.

I was paraphrasing the expressed attitude of Chalmers, Nagel's infamous bat, and many others as I remember it, with some snark added on my part (shaken, not stirred) :) But beowulf888 (https://astralcodexten.substack.com/p/your-book-review-consciousness-and/comment/6566097) and LarryBirdsMoustache (https://astralcodexten.substack.com/p/your-book-review-consciousness-and/comment/6567352) here provided some immediate impetus to write in the snarky part.


>But I am willing to argue a person who took the Egyptian mythology as a description of reality would be dissatisfied ...

It doesn't matter what that person would think, because the situations are not logically analogous. You're insisting on the conceit that subjective, first-person experience is analogous to an Egyptian myth, but it's not.

>Why not? We can induce subjective, first-person experience by (metaphorically) tickling the right neurons. Tickling other substrates, such as rocks, doesn't ...

You're simply repeating the hard problem and then saying "but I don't think it's a problem, because I'm satisfied with mapping the empirically discovered correlations." It's fine for you to be satisfied with that. But your satisfaction neither answers nor invalidates the question of why subjective, first-person experiences should arise from these particular neural activities.


This sums up what I would have liked to say about Appendix A better than I could have hoped to. Pretty much every response I've seen to the Hard Problem has tried to reduce it back to the "Easy Problems" without noticing the change, to the point where I wonder if something is getting lost in translation when we talk about subjective experience.

Comment deleted

Subjective experience is not a prediction of physics, and is not reducible (or has not been reduced) to physics.


There's no basis for assuming there shouldn't be subjective experience. You've answered this in a very roundabout way; as in you take subjective experience as an axiom, which is fine too, I think.

Note that I never made any statement about magic, fairy dust, the soul, or anything else, just that the hard problem isn't satisfyingly answered by trying to relegate it to the "easy" problems.

Comment deleted

Where's this "burden of proof" shit coming from? This is an open question, not an advocated position. You saying "the experience of red is caused by those neurons firing", when referring to subjective experience, has no explanatory power. Qualia isn't a theory, it's a descriptive term for individual components of subjective experience.


Maybe something is getting lost in translation. I'm pretty far over on the "there is no Hard Problem in the first place" side, but I honestly haven't done a lot of reading on it; do you (or anyone else) have any recommended reading for "actually there is a Hard Problem" arguments?


David Chalmers's The Conscious Mind (1996).

I can't emphasize enough that the hard problem does not presuppose any "metaphysical" account of qualia. In particular, neither Chalmers nor I are dualists. Neither of us believes in "magic mind stuff," whatever that would be.

I mention this because there seems to be a psychological block in thinking about the problem. Some people are so afraid of dualism that they engage in motivated reasoning and begin denying the obvious, such as "I am seeing patterns of color right now."

The key observation is that the subjective experience of, say, seeing patterns of color differs in kind from the neural activities with which it surely correlates. That observation, in itself, presupposes nothing about the "metaphysical" status of said experience.


I don't see the problem. One is an external observation, the other is an internal observation. They seem, to me, to be observing the same event with different instruments at different resolutions (and with different reliability). There's no reason to expect that the same language would be either used or appropriate. When instructing a person to head North, it depends on which way they are facing whether you tell them to turn right or left (or, of course, something more complicated).


There's no problem if you don't see physics as an exclusively correct map of everything. But a lot of people do.


Well, I see physics as a correct map (barring corrections), but not an exclusive one. It's lousy, e.g., at describing social hierarchies, though it can do quite well at predicting the direct results of physical actions. (It could, in principle, describe the social hierarchies to a degree limited only by limitations in knowledge and ability to calculate, but it would need such an immense amount of data that this approach is totally unreasonable.)


> David Chalmers's The Conscious Mind (1996).

What's the title of chapter four?

Chalmers *absolutely* is a property dualist, and this is a serious metaphysical commitment - he isn't positing a separate "stuff" to play host to it, but the standard "magic mind" behavior is all still there. It's just counting on arguments about logical possibility to shield it from falsifiability.


Two points:

- Regardless of what Chalmers himself favors (and he tries out multiple possibilities), one does not need to be a property dualist to accept his core account of the hard problem. I, for one, am not committed to property dualism.

- Dualism of properties is nothing like Cartesian dualism of substances. Many physical systems have more than one property. Wave/particle duality in QM is just one example.

May 15, 2022·edited May 15, 2022

> - Regardless of what Chalmers himself favors (and he tries out multiple possibilities), one does not need to be a property dualist to accept his core account of the hard problem. I, for one, am not committed to property dualism.

I don't see how you can accept Chalmers' formulation of the Hard Problem in order to reject eliminative materialism without embracing some form of dualism, essentially by definition. Qualia is explicitly posed as a non-physical property with no causal downstream interaction with the material world (hence the p-zombies), and if you can't make that commitment you're not talking about the same Hard Problem.

> - Dualism of properties is nothing like Cartesian dualism of substances. Many physical systems have more than one property. Wave/particle duality in QM is just one example.

Wave/particle duality is a case of different starting assumptions leading to different interpretations within a shared epistemic framework, and I can think of half a dozen such cases. (My personal favorite is the opposing derivations of viscosity starting with either continuum fluid mechanics or gas kinetics.) They live or die on their applicability - and notably, QM doesn't rely on *either* interpretation to be binding when it comes time to run the probabilities. They don't offer much cover to a metaphysics completely unconcerned with falsifiability.


The wave and particle pictures can be translated into each other, so it's more analogous to dual aspect theory than property dualism.


>Wave/particle duality in QM is just one example.

I, for one, am quite confident wave particle duality is more about our monkey imagination's failings to describe behavior of sub-atomic elementary particles (that we have never seen by naked eye) in terms of ordinary macroscopic stuff, than about any substantial dual nature of universe.


Chalmers is a property dualist.


Chalmers's personally favored solution is unimportant. It's orthogonal to understanding the hard problem.


It might be best for you to restate exactly what you think the Hard Problem is, because Chalmers definitely bases it in and extrapolates it to conclusions you're disavowing.


I am seeing patterns of color right now – but it's entirely a product of neural activity. There's nothing _extra_.


Qualia is nothing but a word for any subjective, first-person experience. It implies no claim one way or another about its causal relation to neural activity.

"it's entirely a product of neural activity."

That's a claim on your part. One with which I have no quarrel, by the way, although the experience is different in kind from the neural activity with which it is correlated.


Your comments (in this thread) are often very confusing.

> the experience is different in kind from the neural activity with which it is correlated.

'Qualia' are different "in kind from the neural activity with which it is correlated"?

It seems like you're claiming that we/you have an experience of qualia? Is that experience itself a 'quale'? Are there hierarchies of qualia?

May 14, 2022·edited May 14, 2022

I think Stephen Pimentel's comment has you covered, but I'd add that I think it's related to Stephen Hawking's remark about physics, "What breathes fire into the equations?" This book does a good job of describing a lot of the information-processing components of subjective experience (what you can turn off to make certain parts go away, for example), but doesn't seem to recognize subjective experience as an experienced thing rather than a predictive model.

Basically (this is for example, forgive factual errors for a moment), we can look at the way the neurons fire to make us see red, know how to turn it off and on, and understand the way it moves in the brain, but knowing those things is inadequate for understanding why we have the experience of seeing red, or subjective experience at all.


No one could ever solve the Hard Problem _without_ somehow reducing it to an easy (or easier) problem!

It sure seems like some people just might want to NOT solve the problem at all, i.e. reduce it to some approximate/explicit definite thing made of parts. That seems more like picking 'worship' than 'understand'.


That's an assertion without any evidence. Thinking "subjective firsthand experience" is not the same thing as "observed neurological processes" is not any kind of magic, it just means the problem is potentially philosophy rather than science.

May 17, 2022·edited May 17, 2022

Which assertion do you think I made "without any evidence"?

I think that a problem being "philosophy rather than science" is much more likely evidence that philosophers are confused (and that they confused themselves) than that there's really a 'problem'.

What do you mean by "observed neurological processes"? What are those "processes"? Who or what is observing them (and how)?

I don't see any mystery, or 'room for philosophy'. Our brains/minds consist of something that (sincerely) _reports_ 'qualia' (i.e. first-person subjective experiences). But I don't think there are any good reasons to think that those experiences are somehow (even partially) independent of our brains.

May 17, 2022·edited May 17, 2022

"No one could ever solve the Hard Problem _without_ somehow reducing it to an easy (or easier) problem!" This was asserted without evidence. There's a good whack at the problem by Jonluw in response to the original comment, which while I take no position on it, shows the difference.

I mean "externally observed neurological processes", i.e. anything a neuroscientist would look at.

You're confusing the statement "direct subjective perception is a different thing from externally observable neurological processes" with "qualia is independent of the brain", which isn't an assertion I am making. There's no confusion here, other than that some science-minded people forgot that there are questions inherently unanswerable in some frameworks because they aren't testable.

Stating again, because I've said it many times already and I'm sick of repeating it, I AM IN NO WAY ASSERTING THAT QUALIA IS INDEPENDENT OF THE BRAIN.

Put the hard problem in the realm with questions like "Why am I 'me' and not someone else" and "Why is there something instead of nothing". If you think those questions aren't worth considering, this is just another to add to that pile.


A simple answer is that there aren't any qualia. Dennett successfully destroyed the notion imo.


"Successfully destroyed the notion" implies, to me, that said notion was so comprehensively refuted that there is no longer any serious debate, and that few, if any, members of the relevant field hold to the notion.

This is clearly not the case for qualia. Dennett challenged the concept, perhaps, but destroyed? No.


I agree – I agree with Dennett too, but not that Dennett "destroyed the notion" – for everyone anyways.

Hell – maybe 'qualia' is like 'mental imagery' and there really is some kind of extra phenomenological experience that only some people enjoy. With 'qualia' tho, that's really hard (for me) to believe – _if_ 'qualia' really is just 'a first-person subjective experience'. P-zombies and 'really conscious' beings are indistinguishable – by definition! You could be a p-zombie! I could be one! There's no way we could know!

(Can p-zombies even 'ask themselves' whether they have 'qualia'? If they could, wouldn't they (incorrectly) report to themselves that 'yes, I have qualia'?)


A p-zombie could perform the neural computation associated with "asking yourself whether you have qualia". But due to the lack of qualia, nobody would have the experience of having that thought.


Sure – but what would it 'feel like' to a p-zombie itself? It would _sincerely_ report having 'first-person subjective experiences', and it would report the same to itself. Just because you're convinced that you have qualia isn't in fact any evidence that you really do – and shouldn't be even privately (to yourself).

I think it's very telling that the 'qualia defenders' keep slipping into views/perspectives that they (often) explicitly disclaim.

> But due to the lack of qualia, nobody would have the experience of having that thought.

So, a p-zombie, which, by definition, is entirely indistinguishable from a person with qualia, would necessarily be a "nobody", i.e. lacking a homunculus to 'really' experience anything.

And yet p-zombies would – just like you and everyone else – _claim_ that they have first-person subjective experiences!


Properly understood, "successfully destroyed the notion of qualia" is equivalent to "proved that we are actually all p-zombies", which he definitely did not do.

Comment deleted

I think what you're proposing here is not that qualia do not exist, but rather that it is metaphysically impossible for there to be a physical object that behaves like a human (or human brain, or whatever) without qualia.


No, there are many ways of arguing for qualia without bringing in zombies.


How can you convince yourself that _you're_ not a p-zombie?

If you _were_ a p-zombie, you'd still think that you weren't.

(You'd definitely still argue about 'qualia' and p-zombies on the Internet exactly as you've done!)


Thank you for stating this so plainly. This position is exactly what is entailed by denials that there is any "hard problem."

Unfortunately for such denial, as I look at this screen and write these words, I am most certainly having subjective experiences of colored patterns, which is precisely the meaning of "qualia."


This is just a silly motte and bailey from the qualiaist camp. On the attack they assert that qualia have all sorts of special properties, on the defense they fall back to "qualia just means I have subjective experience".


What kind of special properties do they assert? Reading through this thread, I see only the latter claim.


Bingo. There is no assertion of special properties. Only the utterly mundane, utterly evident ones.


"One can claim that the qualia "just are" said neural activity, but that is simply to abuse or misunderstand the concept of qualia."

The denial of physical supervenience is assumed as part of the definition of qualia. Contrast with things like "life", where after centuries of acrimony and fuzzy definitions we've arrived at a place where pretty much all the holdouts against physicalism are explicitly theological.


The standard reply is that it *seems* that you have qualia, but you don't actually. Just like it *seems* that someone has an explanation for a behavior when they confabulate something to explain what they did in a case where some unconscious effect is at play.


To say this is simply to misunderstand qualia, which are merely subjective, first-person experiences. In other words, the "seeming" *is* the qualia.

While qualia can be badly mistaken about how they refer to the rest of the world, they literally cannot be mistaken about their own existence as subjective, first-person experiences.

Suppose you open your eyes and see a red elephant across the room. Suppose you have also taken a high dose of a psychedelic an hour before, and there is no red elephant across the room. The qualia is "false" in its reference to a red elephant across the room, but it still actually exists *as qualia.* Whatever their relation to the rest of the world, qualia are literally self-evident as experiences.


If qualia are defined as things about which it is conceptually impossible to be mistaken, then there are no such things. There is not a layer of experience that is self-evident as we think there is - just witness those optical illusions where you are presented with an image of a chess board with a shadow, and two squares that *seem* to be presented as different colors are actually presented as the same color, or where the snakes that *seem* to be presented as moving are actually not presented as changing position in any way. The mistakes we make about these sorts of things are more subtle than the mistakes we make about objects in the world, but we are mistaken about our own experiences all the time.


No, you're still not getting it. The optical illusions in those example *are* the qualia. The fact that they falsely represent the chess board or snakes is irrelevant. No one is claiming that qualia cannot be mistaken about how they relate to the rest of the world. That was the point of my example of the red elephant. Qualia absolutely *can* be mistaken in that sense. They simply can't be mistaken about their *existence as qualia.*


> While qualia can be badly mistaken about how they refer to the rest of the world, they literally cannot be mistaken about their own existence as subjective, first-person experiences.

Are you claiming that qualia are active agents? Is each quantum of qualia a conscious entity? "_they_ literally cannot be mistaken about their own existence"? The qualia are the things that know of their own existence?

That reads like a weird 'homunculus' theory – the qualia are tiny people in our 'minds' that 'really' experience everything.


No, I'm not claiming that qualia are active agents. There's nothing like a homunculus involved.

Qualia is just a word for experiences that one has. One cannot be mistaken about the fact of presently having an experience, although one can be badly mistaken about the things to which the experience refers, as in optical illusions, etc.


I believe that you're sincere that you _feel_ like you're "most certainly having subjective experiences of colored patterns".

But what does 'qualia' add to your feelings? And what's the point of positing some Hard Problem to be (maybe) solved?

If I can (or should) believe you when you claim that you 'have qualia', it'd only be polite to grant the same courtesy when communicating with The Chinese Room, or anything else that writes/speaks/communicates using a 'first person perspective'.


I'm more certain of seeing colored patterns on a screen right now than I am of your existence – by far. In fact, I'm perfectly certain of it. The screen may be an illusion. In fact, everything I see may be an illusion, brain-in-a-vat style. But this would not detract from the experience, only from some other reality to which it ostensibly refers.

'Qualia' is just a word. It doesn't "add anything" to experiences. It isn't supposed to. It's just a word for experiences.


Okay – but given that there's no way for me, or you, to know that The Chinese Room _doesn't_ also have 'qualia', it doesn't seem like bringing them up is helpful or useful for understanding anything.


Let's assume that the Chinese Room has qualia. The hard problem presents itself unchanged: How does the shuffling of pieces of paper in the Chinese Room produce in the room a conscious awareness of qualia?


No. Qualia clearly exist. They are the internal perception of neural actions (the instrument observing itself). They just aren't any hard problem.


What does that mean? Clearly we experience qualia and that needs to be explained. Saying “it’s just neurons, or electrical currents or chemicals” is like saying that pain doesn’t exist for the same reason, so no need to treat it.


We do treat pain with chemicals though, that's a really strong hint that there's nothing special going on there, isn't it?

As I noted above, the "we experience stuff so qualia are definitely super duper real" is just an equivocation. Qualia are ascribed all sorts of properties that do not follow from "I experience stuff" (separability, commensurability, uniqueness, etc). So for example you get people talking about not a particular experience of red, but _the_ quale of "redness", which at the same time is supposedly ineffable! Without any of those properties it's just a pointless idea that adds nothing beyond the basic materialist view.


Adding to this point:

We can induce qualia by stimulating neural activity via transcranial magnetic stimulation (TMS). I have mentioned the experiment that we can induce the feeling of floating at the ceiling by TMS. We can also induce other perceptions by TMS, depending on the area that we stimulate (and how we stimulate).

Though as far as I understand, proponents of the hard problem of consciousness wouldn't be surprised by this point. It seems that the disagreement comes later.


Right, I think; see my comment above.


I don't think the ability of chemicals to treat pain has anything to do with the point he was making. The point is that saying "qualia don't exist; it's just patterns of neurons firing" (or whatever) is like saying "pain per se doesn't exist; it's just neurons firing" — okay, that's what it is, we all agree on that... but non-existence doesn't follow.

Similarly, qualia being entirely physical and created through physical means isn't denied by most people in this discussion, just as pain being entirely physical and treatable by physical means is not disagreed with by anyone who asserts that pain is real and can be experienced — no one is saying there's something magical or "special" (as in extra-/im-material) here, as far as I can see.

Expand full comment

Exactly. Hand-waving away qualia as not real is like hand-waving away pain as not real. I don't get the argument that something isn't real if it can be cured by chemicals, either. Surely if something can be treated by chemicals, it has to exist prior to being cured; otherwise, what exactly is being cured?

Obviously pain is in fact largely understood (a signal from affected nerves to the brain), but other qualia aren't. Hand-waving any of this away as nonexistent isn't an answer.

Expand full comment
founding

I think there's a big difference between 'some qualia aren't well understood' and 'consciousness is a Hard Problem'.

There's a big difference between 'scientifically ignorant' and 'philosophically confused'.

(We're not philosophically confused about gravity at the scale of our solar system, or the basic physics/chemistry of stars like our Sun, or that the relevant physics, chemistry, and mathematics is _pretty_ good for understanding these kinds of things. There are mysteries, but they're really very tightly bounded.)

Expand full comment
founding

What's confusing to me is that even though I think the Hard Problem is just a confusing 'mystery', I don't disagree that qualia or phenomenological phenomena 'exist'.

But I absolutely think some of the participants in this thread _have_ denied that qualia are 'just' neural activity, i.e. there _is_ (or so they claim) some 'extra' ingredient missing from neurology alone.

Expand full comment

The ability to treat pain with chemicals shows that pain is not completely non-physical, without showing it is completely physical.

Expand full comment

Leaving aside any Occamian considerations for the moment, do you have a theory of how matter interacts with supernatural entities? If the whole thing functions in a systematic manner, then the interactions must be governed by laws similar to those of physics, no? And we should be able to determine them experimentally.

Expand full comment

Pain has a qualitative aspect and an aversive aspect, and it can have the aversive aspect with or without the qualitative aspect. (During my bout of covid last week I was introspecting quite a bit on my sensations at the worst moments: although I could sense the fever, and was experiencing some strong aversiveness, the aversiveness wasn't actually directed at the fever or anything else that felt like pain, but was just there.)

Expand full comment

Yes. Even if the conscious mind lacks a Cartesian theatre, or Central scrutiniser, each module could have its own qualia.

Expand full comment

Why wouldn't there be qualia? Creatures evolved to feel things, and we have big, complicated brains that allow us to have complex subjective feelings rather than just pain or hunger.

But surely if something as simple as weed, a good night's rest, or catching a cold makes your qualia change, it can't be that hard of a problem.

Expand full comment
May 15, 2022·edited May 15, 2022

I'm curious why you think saying the qualia "just are" the neural activity is abusing terminology. To me, this particular insight was the key to constructing an account of consciousness that adequately resolves the hard problem.

Would you care to elaborate?

Expand full comment
May 15, 2022·edited May 15, 2022

- Qualia are subjective, first-person experiences.

- I learn, through third-person study of the world, that bodies have nervous systems supporting complex neural activities.

- I note that these two things differ in kind. [Crucial point to ward off a super-common misunderstanding: this is not an assertion about causality! At this point, I am presupposing nothing about the causality to be discovered. The causal relations may turn out to be very strong, weak, or some complex story in between. The point about "differing in kind" is *orthogonal* to discoveries about causality.] While my experience of seeing something red may turn out to strongly correlate with particular patterns of neural activity, and while I might posit a tight causal model in that connection, the experience itself is a different kind of thing from the neural activity.

Expand full comment

Thank you for stating this so clearly. This makes it very plain where our thinking differs. I wish to claim your reasoning is mistaken, and I believe this specific mistake (when it goes unrecognized) is a very common source of confusion and conflict in discussions of the hard problem.

The error lies in your assertion "I note that these two things differ in kind". In order to claim that nervous systems differ from qualia in their kind/nature, you need to examine the nature of a nervous system. You say you learn the nature of nervous systems through third-person study of the external world. However, I assume you will agree that it is in principle impossible to truly access the external world. This is well-established philosophy.

If you were to open my skull and look at my brain under a microscope, you would see a bunch of branching gray cells. But these cells you see *are not my actual neurons*: they are images formed on your retina, by photons bouncing off or through the object.

All you have access to are sensory experiences taking place within your own mind. You fundamentally cannot access my nervous system in its true nature, no matter how advanced the experimental instrument. All you can do is find different ways of projecting a representation of my neurons into your own mind.

How then, looking at my nervous system, can you say that it is different in kind from my mind? You have not truly seen my nervous system, any more than you have seen my thoughts.

Expand full comment

You are correct in stating that all third-person, scientific study of the external world, such as that of neurons under a microscope, must pass through first-person qualia in order to be known.

This observation in no way eliminates the distinction in kind between qualia and neural activity.

It could turn out to be, for example, a fact about the world that conscious experience is somehow inherent in certain patterns of neural activity. (People who deny the hard problem often assert that this is the case.) My response to that fact would be, fine, but that doesn't eliminate qualia; it merely makes them somehow inherent in certain patterns of neural activity. They're still different in kind. Also, that outcome actually just is property dualism, not materialism. The qualia haven't gone away.

Expand full comment

I am not denying the existence of qualia. On the contrary, it is my contention that qualia and matter are two different words referring to the same thing (and of the two words, qualia is the more appropriate one). Obviously I can't produce empirical evidence of that, but I think my argument above pretty clearly establishes it's not something you can rule out.

You dismiss my argument, but I don't see your reasoning. It seems to me that you are just restating "qualia are different in kind from neural activity". So either I'm not getting your argument, or you're not getting mine.

I'll try to rephrase my argument a bit. Please read this charitably, because I'm going to put a fair bit of effort into trying to explain an internally consistent worldview which is different from yours, so it will take some effort to parse it.

It is a well-known problem in philosophy that you cannot access an object's intrinsic nature. You can only ever create a representation of that object within your own mind. Many people forget this, and come to think that when they look at, say, a cup, what they see is the actual cup. But the map is not the territory! The actual cup, the noumenon projecting an image onto your retinas, remains out of reach. We assume such a noumenon exists (after all, we're not solipsists), but there is no way for us to conceive of its intrinsic nature. We can only describe it in terms of the phenomenal experience it conjures in our minds when we interact with it.

Again, this is a well-established philosophical problem. We see all of reality second-hand, as a simplified representation made of qualia. The noumenal reality which these qualia simulate, we cannot directly access. I don't recall if it has a name, but let's call it "the noumenal problem" for now.

It bears pointing out that even our theories of physics are *maps* of this reality. No matter how detailed we make the map, it is still a map. When we're talking about this matter it is worth clarifying whenever we use some noun: do we mean the noun to refer to the map-object, or the territory-object?

Then there is the mind-body problem. "How is this 'mind' I experience related to that 'material body' I see in the mirror?". Another difficult and long-standing question in philosophy.

One solution to the mind-body problem is Russellian monism. And a particular beauty of this solution is that it kills two birds with one stone. The essence of it lies in looking at the mind-body problem and the noumenal problem, and noticing that they are made for one another. Twist the problems around a bit, and they fit together like puzzle pieces.

Let us return to the cup for a minute.

We agree that when you look at a cup, you see "a cup". What you see is a phenomenal object: the map-cup. A rectangular-ish area of color-qualia with a curvy shape on one side (plus some depth data). But based on this experience we also posit the existence of some noumenal (external) object: the territory-cup. The territory-cup is what is currently projecting the map-cup image into your phenomenal experience (through your eyes via the electromagnetic field).

Importantly, the map-cup is not at all identical to the territory-cup (if you disagree with that point, please say so explicitly). At the most banal level, you can't even see both sides of it. But more deeply, you can't see the "substance" it is made of, because the substance the map-cup is made of is your mind, whereas the territory-cup is made of "ceramic". Territory-ceramic, that is. The intrinsic nature of this substance is unavailable to you. You can perform experiments and quantify its physical properties, but those are just statistics adding detail to your map-ceramic.

Now swap out the cup with something else. Let's take my brain.

We agree that when you open my head, you see "a brain". This is a phenomenal object: the map-brain. A round-ish area of squiggly gray-pink-qualia (with some depth data). Based on this experience you also posit the existence of some noumenal object: the territory-brain. The territory-brain is what is currently projecting the map-brain image into your phenomenal experience. Importantly, the map-brain is not at all identical to the territory-brain. The map-brain lacks details, sure, but the important part is that it's not the same substance. The map-brain is made of your mind, whereas the territory-brain is made of some substance that you can only perceive indirectly, in the way you are doing now.

So what is this substance which you can't access directly? In the case of the cup, I can't give you a satisfying answer. It's just territory-ceramic, and it's something we'll never know the nature of. But in the case of my brain I can give you a very plausible and elegant answer:

It is my mind!

This solves both the noumenal problem *and* the mind-body problem, completely without violating physics or invoking some strange dualism.

My mind is what really exists, out there in the real world. When light hits my mind, it bounces off it and enters your eye, and squiggly gray-pink-qualia appear in *your* mind. A squiggly blob is a very poor representation of all my thoughts and feelings, but we have already established that your phenomenal experience is just a crude representation of the noumenal world, encoding a few pertinent physical properties such as size, position, and light reflection properties...

Does this make sense to you? If not, I would really like to know which points are causing friction. It is an entirely internally consistent perspective, but it's very difficult to communicate it to someone who is not already familiar with it.

Expand full comment

"It is my contention that qualia and matter are two different words referring to the same thing (and of the two words, qualia is the more appropriate one)."

In this case, (some) matter would inherently have the properties of qualia, which is property dualism or similar. This is fine. I'm not dismissing that as a theory.

But this does not eliminate the hard problem of consciousness. Rather, it's just one possible solution to the hard problem of consciousness.

Expand full comment

Well, that started off good but ended pretty badly.

Expand full comment

"the experience itself is a different kind of thing than the neural activity."

Do you think that the pattern of computer activity that produces a picture on a monitor is similarly different from the image itself?

Because this seems to be very close to the heart of the disagreement here.

Expand full comment

Can you elaborate on what exactly you're saying here, and/or what it is meant to prove?

That is: obviously, no one believes that "the image on a monitor" and "the pixels that make up the image" are two different things. If, however, you perhaps mean "the program that contains the instruction to produce the picture" and "the picture once displayed physically on the monitor", then those seem obviously to be two different things to me.

Expand full comment

I think it is: producing an image requires a display, which is a different thing from a computer. Any format for encoding images for displays takes into account the display's specific engineering. In that sense, it is physically and behaviorally dualist.

Expand full comment

For this analogy the monitor needs to be lumped into "computer activity" (maybe I should have just said "electrical activity"?). Since it's being compared to the activity of the whole brain.

Expand full comment

"I would much prefer than scientists say "I'm not going to discuss that," or "that's outside the scope of my study," than to fudge the concepts and pretend to answer a question that they haven't."

To be fair, this is what Dehaene did (he did not discuss the point), and he was heavily criticized for it.

For my own thoughts in Appendix A, what I learn from the discussion (not completely unexpected):

There is one half of the population who is absolutely convinced that the hard problem of consciousness is a real and important philosophical problem, and this is so obvious to them that they are bewildered that others may not share their opinion. To them, Appendix A does not add anything new to the discussion of qualia because it only gives some technical details on the underlying neural activity.

Then there is the other half of the population who is absolutely convinced that the hard problem of consciousness is nonsense and simply doesn't exist, and this is so obvious to them that they are bewildered that others may not share their opinion. To some of them (so I hope), Appendix A adds something interesting to the discussion of qualia because it gives some important technical details on the underlying neural activity.

Expand full comment

Thank you for the clarification about Dehaene. That's helpful. And I should note that I did find the discussion of composition interesting (just not relevant to the hard problem).

Expand full comment

Thank you Stephen, you did a great job defending the right side ;)

Expand full comment

Your dichotomy doesn't seem to be quite right. I think that there is a currently unanswered question of how exactly the first-person subjective experience is created, but it's only currently relegated to philosophy because of our insufficient understanding of the brain activity which results in that experience. Philosophy is fundamentally incapable of answering questions of this kind without proper empirical investigation, and why people nevertheless find those efforts interesting is something I've never quite understood.

Expand full comment

Suppose we had in hand a rich, detailed, precise, empirically derived model of how neural activities correlate with first-person subjective experience, far better than what we have today.

There would still remain an unanswered question of why these neural activities correlate to first-person subjective experience.

Expand full comment

Suppose we would eventually be able to create a digital brain (and upload your mind there) inside a robot body, which would be able to produce experiences subjectively indistinguishable from those of your flesh-and-blood brain and body, and we would understand every part of its construction and programming to the same extent that we understand current computers. Would you still claim in that situation that there remains some unsolved philosophical problem?

Expand full comment

Yes, I think so, in the general sense that there are a lot of complex behaviors of computers that we don't understand.

Expand full comment

"create a digital brain (and upload your mind there)"

To stipulate that this is genuinely possible (i.e., that what would end up there really is "my mind" in a first-person sense) is already to presuppose relations between consciousness and physical systems that we do not in fact know to be true. Hence, I don't view this thought experiment as useful.

Expand full comment

Well, unless and until we're able to do that, I'd say that our models and understanding aren't good enough, so philosophers are welcome to it in the meantime.

Expand full comment

I'm surprised at the answer here. Is your objection that there is always a possibility that we do not perceive some intermediary factor between physical activity and experience -- i.e. that such a scenario wouldn't invalidate the question, only transform it by the technical aspects of upload/download (assuming that's possible)? Or is the problem solely with the assumption of possibility?

For me such scenario is one of the ways that could satisfy my hunger for The Hard Consciousness Question with a few caveats:

1. you would additionally need a download along with memories,

2. the experience would have to not differ much on the subjective "free will" aspect.

I think the sun/Ra example throws some light on the issue. The knowledge about the beliefs of Egyptians is a concept-sun, and can be used to make predictions about the behaviour of Egyptians that the physical model is not useful for. There are other physical concept-suns besides the fusion-reactor one, like the solar-system-history or electrical-sun models (http://viaveto.de/plasmaversum-der-film.html). Only concepts have strict arbitrary boundaries (solar wind range, gravitational well size, the age of the sun, emitted photon range); the territory-sun does not. With my current understanding of physics, the territory-anything is the same as the whole universe.

Now that I have written it down, it reminds me of an idea I once had: that reductionism/non-reductionism is a false dichotomy. There is no scientific experiment that can decide between them, because the distinction belongs to the map. Alan Watts once said that the world is neither material (that's a concept) nor spiritual (that's also a concept); it's this <clapping sound>.

Expand full comment

How would you know? Only the uploaded person would be able to tell they have subjective experience. Subjective experience is a challenge for science.

Expand full comment

Well, in one sense, I'll know only once I'm uploaded myself. But in another sense, similar to how I'm perfectly happy to trust your report that you have subjective experiences, I don't see how other uploads and even some advanced enough AI would be different in principle. That science is currently struggling with this is for me simply a symptom of the field's infancy, so an air of lofty mysticism is to be expected.

Expand full comment
founding

I agree, but don't think this is as obvious as your comment seems to imply.

> Philosophy is fundamentally incapable of answering questions of this kind without proper empirical investigation

I would guess the above is _controversial_ among philosophers!

Expand full comment

> I would guess the above is _controversial_ among philosophers!

Naturally. What isn't? :)

Expand full comment

I don't understand why it is unusual to expect a subjective experience to arise from sensory input (which from the word go is subjective, not objective). I would expect that qualia are formed from the chemicals in particular areas of the brain acting in specific ways in concert, in response to the neural input; the reason we have qualia is because we have specific chemicals that create electrical potentials in dizzyingly complex ways, and the qualia are a feedback loop between that electrical potential and chemical movement throughout the brain, modulated by electrical activity.

The reason why they're subjective is because the brain develops differently (and creates different electrical channels) from person to person (but this also affects sensory input: if the architecture of the ion channels coming from your eyes is different from someone else's, your brain will receive different input from theirs even before the temporal lobe is reached). Maybe there is something about the concept of qualia I am misunderstanding on a deeper philosophical level? A subjective experience as a set of redox reactions all going off at once and creating a near-unique magnetic field seems fine to me.

Is there something about "qualia" that is not encapsulated by "subjective experience", or is there something about "subjective experience" that is not "experience that is experienced differently between individuals" philosophically? And if not, why does a unique electromagnetic field (a consequence of neural activity) for a split second not qualify as a subjective experience? Thank you.

Expand full comment

(All questions non-rhetorical, by the way; I'm willing to assume that my subjective experiences (ha) have led me to miss some extremely important philosophical point on qualia.)

Expand full comment

"I don't understand why it is unusual to expect a subjective experience to arise from sensory input."

I didn't say it is unusual. I don't think it is. In a sense, it's the most usual thing in the world. I've experienced it in every waking moment since birth.

The point is rather that, in spite of all our advances in neuroscience, we have no explanation for why any material process should give rise to subjective experience. You can respond that, well, through the study of neuroscience, we have excellent evidence that some neural activities do. I don't disagree with that. But that's just a description of a tight correlation, not an explanation of why it is so.

Expand full comment

I don't have many thoughts on the evidence that neural activities do create subjective experiences; my lack of comprehension is more along the lines that it should be fairly obvious why and how they do. Unique electromagnetic states should be capable of forming, for example, visual images in the brain. An MRI scanner does something similar: it sends out and receives back radio waves which are affected by hydrogen nuclei, and uses a Fourier transform to translate that radio signal into an image. The only differences between that and a human being seeing an apple, in my mind, are that our EM radiation receivers are at different frequencies (obviously), that they can differ from person to person based on architecture whereas in an MRI the receiver is generally always the same, and that the signal is transformed in different ways (though I'd personally suspect visual perception uses Fourier transforms in some way, I have no evidence it does).

These person-to-person differences, and the malleability of the neural pathways involved, are what create subjectivity, to my mind. I actually don't personally believe that qualia are what define consciousness (and therefore don't think that saying qualia arise from differing states of matter proves that consciousness is a physical phenomenon, rather than a definition of something within our own mindset, similar to "morality"), precisely because subjectivity boils down to "your brain does some electrical signaling in a way that makes the mathematics line up to make a weird electromagnetic field." I ask again, genuinely: what is the definition of subjectivity that doesn't just mean "different experiences from person to person"? And if the prior definition is the meaning, what makes "different electromagnetic environment depending on brain architecture" an unsatisfying explanation? I feel like there is something about my worldview that is causing me to not "get it".
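To make the MRI comparison concrete, here is a toy sketch (in Python with NumPy) of the reconstruction step: a scanner records spatial-frequency data ("k-space"), and the image is recovered with an inverse Fourier transform. Since we have no scanner here, the k-space data is faked by forward-transforming a known image; the image itself is an arbitrary made-up example, not real data.

```python
import numpy as np

# A small "image": a bright square on a dark background.
image = np.zeros((32, 32))
image[12:20, 12:20] = 1.0

# What the scanner effectively records: the 2D Fourier transform
# of the image (here simulated by computing it directly).
k_space = np.fft.fft2(image)

# Reconstruction: an inverse 2D Fourier transform of the raw signal
# turns frequency-domain data back into a spatial image.
reconstructed = np.fft.ifft2(k_space).real

# The reconstruction matches the original to numerical precision.
assert np.allclose(reconstructed, image)
```

The point of the analogy is only that a spatially meaningful picture can be encoded in, and decoded from, a signal that looks nothing like a picture; real MRI reconstruction involves many additional steps (sampling patterns, coil sensitivities, filtering) that this sketch ignores.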

Expand full comment

"Subjective" just means "consciously experienced," not "different experiences from person to person." If two people happened to experience red the same way, that wouldn't make the experience less subjective.

I can't comment on your electromagnetic theory of experience because I don't understand it. Is it intended to be different from standard neuroscience accounts?

Expand full comment

Not intended to be different in implications, no, just more detailed about how even exceedingly complex things (like sounds and images) can come out of electrical signals which aren't connected directly to an output device/screen that exists to show them off. Half of it is a nuclear magnetic resonance in-joke, and the other half's normal "neural signaling is electricity" fare you get from any standard account.

And I see. So subjective experiences are simply experiences that are conscious, and therefore we need the definition of consciousness itself when trying to define what a subjective experience is; and we define consciousness by having subjective experiences. No wonder it's the hard problem of consciousness: it's a recursive definition. Do you have, perhaps, one that isn't?

Expand full comment

There's nothing recursive here and no problem of definition.

As you read these words, you are seeing curves of one color against a background of another color. There. That's a perfect example of qualia. It's not supposed to be anything special or fancy. No need to play with words.

Expand full comment

"I've experienced it in every waking moment since birth."

Have you really though? (genuine question)

I ask because I'm not sure if that's been my experience or not. I mean, I'm pretty sure I was experiencing qualia from at least the age of, say, five or thereabouts. I remember 'being me', and I remember experiencing qualia that were engendered by the real, verifiable sensory inputs available to me at the time. But before that, things are a little hazy. In fact from my perspective, to my best recollection, the first few years of my life seemed like -- nothing. And sure, maybe this is just a failure of memory, but still, I have to wonder: when I was, say, one month old, was I in fact actually experiencing qualia? I would have had what seem to be the necessary 'external' objective parts -- brain (albeit with its wiring not yet isolated), sensory perceptions, proprioceptions, desires, behaviors, neural activity and so forth -- but I'm not sure if 'I' was truly there yet; or, to put it another way, if my awareness had by that time actually coalesced into a coherent thing. So was I in fact a baby p-zombie? Which later, somehow, either slowly or in a flash, developed a sense of identity and a consciousness -- and thereby became 'me'.

And isn't that in fact more or less everyone's experience?

Or to put it more colorfully: are babies really people? Or are they living things that are destined to become people?

Perhaps what I'm trying to say here will become more clear if we consider newborn kittens. They're alive obviously: they respond, they do stuff, they even have 'personalities'. But is anyone home? Do they know what they're doing? Do they even know that they're doing *anything*? Even after they open their eyes, they take a while to learn to see -- and I suspect it's still a little while after that before they -- how to put it? -- come into their own and start to have opinions about things. Before that though, a kitten has brain activity, sure, but even if objectively it has all the makings of qualia, I don't think they have any proper awareness of them. Not because the qualia -- or rather, all the correlates of qualia -- don't exist, but because *it*, the cat itself, doesn't yet exist. And as far as I understand it, you can't have qualia unless and until there's someone 'in there' experiencing it. (Or maybe you can? Please advise!)

I think I can more or less imagine what it's like to be a cat. But to imagine what it's like to be a newborn kitten... I feel like I'd need to remove all sense of 'self' from the experience. At least to the point of becoming unselfconscious.

Anyway, it feels like something strange -- and something that needs to be explained -- is happening here. And perhaps it even suggests a good place to be looking if we want to get some kind of handle on the hard question?

Expand full comment

This is a difficult question because it involves memory and its potential lack of reliability. Strictly speaking, the only qualia of which I have *direct* knowledge are the ones I experience right now. Let's say I saw a film yesterday. I can bring up memories, perhaps quite vivid ones, of images from the film. But that's not the same as seeing those images now. Those memories may themselves be qualia of a sort, but not the same qualia as those experienced when watching the film? And their reliability as evidence of what I did experience yesterday is notoriously variable.

So, you are correct to say that I don't actually *know* that I experienced qualia as an infant. I should more cautiously say that I *bet*, based on what I consider a reasonable extrapolation, that I have been experiencing qualia in all the conscious moments of my life.

Expand full comment

For all the conscious moments of your life -- yes!

But for me the real question here is: when does consciousness begin? It's clear enough when it ends -- the body fails and consciousness ceases -- but I think it might be instructive to consider not so much what consciousness *is* (which is a notoriously hard problem) but rather when and how does it begin? For humans -- and kittens etc -- I'm not so sure that it begins at the moment (or process?) of birth. I suspect it somehow bootstraps -- probably at least partly in response to the blizzard of new stimuli and possibilities? -- some time after birth. If so, this 'awakening' might be a process that would be accessible to investigation. (Though I confess I've no idea what form such an investigation might take or how it could be done.)

Expand full comment

I agree that this is an interesting question! But it's also orthogonal to the issue I've been discussing on these threads, so I'll let it sit for now.

Expand full comment

I think what's confusing you here is that you're not seeing the intended significance of the word "experience" in the context of the hard problem.

It's a matter of ontology. The point of the hard problem is that (a lot of) people have a strong sense that subjective experiences "consist of" a fundamentally different substance from matter. In the standard materialist view, atoms are "dead" things: material billiard balls. In this context, building a brain out of atoms and having a *mind* appear when it's finished is like building a tree out of legos and having it suddenly turn into genuine wood when it's finished.

The hard problem is the fact that the materialist picture is lacking an account of how this kind of thing happens.

Of course, one solution to the lego problem is to propose that the legos were actually made of wood in the first place.

Similarly, one solution to the hard problem is to propose that the substance your thoughts are made of is actually the same substance atoms are made of. That would be a form of panpsychism.

Expand full comment

Thank you! This is actually really helpful, and the answer to the question I kept getting frustrated over. I have a tendency to typical-mind to a substantial degree and my (casual) mental model (i.e. what my mind tends to when I don't think about things very hard) has no concept of subjective experiences existing in a way that material substance doesn't, because everything seems fairly fuzzy and interacting in all kinds of weird ways in different situations if I look at the math too much.

So I was not getting the differentiation of "experience is not a subset of how things interact in general and is its own unique thing." I appreciate the explanation.

Expand full comment

Glad I could be of help :D

It's always quite surprising how different people's mental models can be. In my naive intuition, I touch a cup or whatever and it's "obvious" to me that the cup is made of a different kind of "stuff" than my thoughts. I can't touch my thoughts after all. So as a scientifically minded person, the fact that science doesn't have an account of this difference nagged at me for the longest time. Now I find Russellian monism resolves the issue to my satisfaction.

On the other hand, a friend of mine struggles to see how the hard problem is a problem at all. We see eye to eye on almost everything, but on this point our discussions always come to a halt. I still haven't quite figured out how his model works on this point.

Would you be interested in detailing this "fuzzy and interacting" mental model a bit? I'm quite curious to see if I can understand the perspective.

Expand full comment

Sure, I can try. The current way I think about it is informed a lot by the fact that I work with atomic nuclei a lot; though we think of them as spheres as a matter of convenience, they're actually just a bunch of interacting waves that happen to take sphere-like forms. Sometimes these forms aren't even really sphere-like, and that makes the nuclei act in weird ways that you wouldn't predict. These sphere-like things interact with other waves (electromagnetic fields/forces) in ways that lead them to behave in ways we call "matter"; but since they're waves, on a very small level they'll still have effects on the fields they interact with too. And electromagnetic fields are immaterial (even if electrons are not); basically, matter is having an effect on something immaterial in a way beyond emitting photons, which are fairly simple to understand. As a result of this fuzziness, it makes it not-that-weird for me to think of a cup and my thoughts as being the same sort of thing (waves interacting in different ways, based on the same forces) even if they're expressed in extremely different ways.

I had this perspective long before becoming a physical chemist, though, so I can't say this caused my view so much as solidified it. I have always had the impression (as long as I've had episodic memory formation and recall, anyway) that my thoughts are an expression of a material reality in a material (if different) way, even if they don't affect things outside of me, and this didn't have any logical basis in the beginning. So, am I using my work to justify my preconceived ideas of consciousness? Probably. But I don't think I'll ever be able to separate mind and matter in the way that many people seem able to, at least consciously.

So it's basically just defined by "well, matter and energy are the same thing interacting in completely different ways with each other, so it feels like my thoughts and my body are kind of the same sort of thing." Not fully logical- I'm no neurologist- but it seems to gel with my understanding of physics and reality.

Expand full comment

Just to make some terminology explicit, when I talk about "mind vs matter" the word "matter" refers broadly to "physical stuff", not just "matter particles". So all particles (or fields depending on perspective) would count as "matter" in this context.

Let me see if I understand what you're saying.

You think of your thoughts as existing in a kind of "mental field", and the physical processes have a one-way interaction with this field, leaving a kind of impression, partly analogously to how an electron makes an impression in the electromagnetic field?

Thus your thoughts are an "image" of the physical processes of your brain (possibly isomorphic, possibly not), manifested in some other domain than the known physical fields?

Or am I getting you wrong? Are you saying that your thoughts are identical (that is, "self-identical") to the physical processes in your brain? I.e., that they are one and the same?

Expand full comment
founding

I don't know that there are qualia and I've never heard of or read a convincing argument that there are either.

Expand full comment

If you're reading these words, seeing curves of one color on a background of another color, then you are directly experiencing qualia. That's all qualia are. Period. They're not some special philosophical thing. They're any subjective, first-person experience.

Expand full comment
founding

Okay – but 'qualia' doesn't then imply anything, e.g. that The Hard Problem _is_ a problem.

Expand full comment
founding

If 'qualia' aren't some special philosophical thing, and they're _exactly_ equivalent to 'subjective first-person experience', then we can dispense with them entirely.

But I'm pretty sure that the whole point of 'qualia' is that they are, or were intended to be, some special philosophical thing that no 'physicalist' explanation of consciousness could _ever_ explain. (That's wrong.)

Expand full comment

Hopefully you both won’t mind if I chime in. Two things: 1) That isn't what the Hard Problem is saying about qualia. To use an example you brought up elsewhere in the comments, the Hard Problem isn't saying that it is impossible to explain or know whether or not the Chinese Room--or a houseplant or an oyster--has subjective experience/qualia (we don’t know whether or not it is possible), but in terms of “how could we make that determination” we don’t even know what an answer might look like, hence the “hardness” of the problem. 2) I’d argue that it can in no way be dispensed with; it’s pretty fundamental to ethics/morality, at least insofar as we accept the axiom that entities/objects that have subjective experience/qualia carry moral weight in and of themselves, and those that don’t do not.

Expand full comment

Exactly right.

Expand full comment
founding
May 26, 2022·edited May 26, 2022

I get that. I'm claiming that the 'hardness' of the problem is something that needs to be dissolved. The reality itself of whatever it is that causes 'first-person subjective experiences' isn't mysterious; just our understanding of it.

I think the review sketches a VERY plausible solution that dissolves the mystery – qualia are just some kind of 'conscious memory'. That neatly explains, in my mind, why they _feel_ or _seem_ like some mysterious thing that can't be studied, or even determined to exist, but ALSO explains why they really DO seem to be available via (conscious) introspection AND that people can communicate info about them to others.

One big problem with what you're claiming in this comment is that other commenters are asserting, VERY strongly, that qualia obviously exist just based on observation.

If qualia really are as mysterious as you're claiming, even introspection shouldn't be able to convince anyone that qualia exist!

Like, people that experience phantom limb syndrome really DO – or so I'm convinced – _feel_ like they're still able to experience the missing limb. That doesn't imply that the missing limb is somehow NOT missing (e.g. in some kind of non-physical, non-testable way).

Expand full comment

There is a characteristic projection going on in your response. TheAnswerIsAWall did not once use the words 'mysterious' or 'mystery' in their post. I have certainly never used those words (or any synonym) in my posts.

Yet you used the word 'mysterious' three times (plus the word 'mystery' once) in your short response. The urge to refute any sense that something might be 'mysterious' seems to be central to your perspective.

I propose that you consider the possibility that you are attacking a strawman due to not having grasped the nub of the issue.

Expand full comment

Qualia are indeed exactly equivalent to subjective first-person experiences. Any reading of the primary literature would tell you this.

The hard problem is to provide an account of how subjective first-person experience is produced by (or otherwise relates to) the relevant neural activities. The hard problem does not deny (or take any position on) whether experience is so produced.

Expand full comment

"[E]ven when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience - perceptual discrimination, categorization, internal access, verbal report - there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience? *A simple explanation of the functions leaves this question open.*"

Emphasis mine. That sure looks like taking a position to me.

Expand full comment

The "unanswered question" of your quotation does not preclude the answer being a causal relation.

Expand full comment

Wonderful review! I will buy the book, and look forward to reading about the details. Beyond the book, your comments were incredibly interesting. Thank you so much!

Expand full comment
May 14, 2022·edited May 14, 2022

What sort of displays did the perception tests use?

Most displays for the last decade or two (thin panel displays specifically) run at 60 Hz. Now, on the one hand, it's entirely unsurprising that displays intended for human perception would end up with refresh rates that are only slightly above the levels that cause distracting artifacts in movement when viewed by a human.

On the other hand, when you're trying to tickle the nature of consciousness based on visual stimuli, it's suspicious when the numbers you come up with to explain them are small multiples of the common refresh rates: 30ms is very likely only 2 frames of image. Many displays would struggle to even update the physical pixels completely in that time, if the change was extreme (i.e., a scene change).

It's entirely possible to overcome these limitations in a variety of relatively easy ways; my question is, did they think to?
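For concreteness, the frame arithmetic behind that suspicion can be sketched quickly. This is only an illustration: the 60 Hz rate and the example durations come from the discussion above, and rounding to the nearest whole frame is an assumption (real drivers may truncate or round differently).

```python
# Sketch: stimulus durations quantize to whole frames on a 60 Hz panel.
# The 60 Hz rate and example durations come from the thread above;
# rounding to the nearest frame is an illustrative assumption.

FRAME_MS = 1000 / 60  # one refresh period: ~16.67 ms

def frames_for(duration_ms):
    """Whole frames closest to the requested stimulus duration."""
    return round(duration_ms / FRAME_MS)

def displayed_ms(duration_ms):
    """Duration actually shown once quantized to whole frames."""
    return frames_for(duration_ms) * FRAME_MS

for requested in (30, 33.3, 50):
    print(f"{requested} ms -> {frames_for(requested)} frames "
          f"(~{displayed_ms(requested):.1f} ms on screen)")
```

On this sketch, a reported 30 ms exposure is indistinguishable from 2 frames (~33.3 ms) on standard 60 Hz hardware, which is why reported durations near multiples of ~16.6 ms look like instrument quantization.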

Expand full comment

Related: “Or imagine playing computer games with 2 frames per second, which is NOT FUNNY”

I've been playing a bunch of games from my childhood recently. It is not an exaggeration to say that many of the titles struggled to maintain 2fps, and yet were still enjoyable at the time. Less so now though.

Expand full comment

What games, out of curiosity? I'm struggling to imagine a video game - at least one that is not entirely turn-based - that would even be playable, let alone enjoyable, at 2fps or less.

Expand full comment

Arctic Fox on the atari st is probably the best example.

Expand full comment

Yes, he mentions once that he has used photo-sensors to verify that the image is really there. I don't know what they routinely do in experiments, but they are apparently aware of this issue.

Expand full comment

> Many displays would struggle to even update the physical pixels completely in that time, if the change was extreme (i.e., a scene change).

I have no idea why you think this is the case?

If a screen has a refresh rate of 60 Hz (or ~16.6 ms), it can redraw the entire display at 60 Hz.

That's what the refresh rate means.

If your claim is that it's impossible or difficult for a graphics driver to push enough pixels into the screen's buffer to update it at 60 Hz, that's also completely false.

Expand full comment

Sort the table at https://jarrods.tech/list-of-laptop-response-times/ by decreasing 0-255-0 rate, and notice how many displays are well above 16ms (many even on gray-to-gray transitions).

Expand full comment
May 15, 2022·edited May 15, 2022

On the page you linked, very few of them? There is a similar table available at https://www.rtings.com/monitor/tools/table/69321 and that seems to show the vast majority of displays are under 16 ms as well. It looks like easily >75% have a response time faster than a 60 Hz refresh rate.

EDIT: After thinking about this a bit more, I concede the point. If 25% of consumer monitors that claim to have a refresh rate of 60 Hz have a measurable response time slower than 16 ms, then I think you were right to say "Many displays would struggle". If asked before this, I would have thought that consumer monitors with slower response times than their refresh rates would have been <10%. I also think it's surprising that it doesn't seem to be specific to laptop / desktop monitors, or specific brands, or target market. Looking at this table, it seems like even if a manufacturer can get it right on one model, they can screw it up on another model next year.

Expand full comment

Boy, if all online disagreements had participants like you, the Internet would be a very different place! Nicely done.

Expand full comment

Another possible issue is that "response time" varies with the kind of response; it may not mean a fully updated image (e.g. ghosting of the last frame).

Expand full comment

I can’t speak to the particular studies mentioned in Dehaene’s book, but it has been common for decades for psychologists to use devices that can expose a subject to images for a precisely known number of milliseconds. They’re not computer displays; they’re devices called tachistoscopes. If they say they exposed an image for precisely 39 milliseconds, followed by 200 milliseconds of another image, then that’s what they did.

Expand full comment

I skimmed 4 papers (1 referenced in the book, and 3 cites from that paper), and found very little to confirm or deny the equipment used. I did find citing papers that used CRTs, but that's at most suggestive that tachistoscopes aren't completely universal for such things, and I easily could have made a bad selection.

But I do get suspicious when I see oddly specific numbers of milliseconds, which happen to be small multiples of 16.6ms or 14.4ms. Not that it invalidates the overall point of the research, just that it strongly suggests (to me) that they're seeing measurement quantization due to their instruments.

Expand full comment

Interesting, I found this study [1] comparing CRTs vs LCDs in exactly this scenario, that of showing an image for some precise number of milliseconds. Figure 1 shows that a CRT monitor immediately drops off each frame, whereas a hypothetical LCD could have a much longer response time.

It looks like only LCDs have the issue where their response time isn't necessarily the same as the refresh rate -- every other monitor technology (e.g. CRT, plasma, LED) has effectively zero response time, which matched my initial expectation. [2][3][4]

[1] https://www.nature.com/articles/s41598-020-63853-4

[2] https://en.wikipedia.org/wiki/Comparison_of_CRT,_LCD,_plasma,_and_OLED_displays

[3] https://www.rtings.com/monitor/tests/motion/motion-blur-and-response-time

[4] https://www.rtings.com/tv/tests/motion/motion-blur-and-response-time

Expand full comment

Response time comparable to human perception is kind of an LCD thing. CRTs can in principle redraw images in a few microseconds, analogue oscilloscopes in the old days did this routinely.

Expand full comment

This matches my impression as well. A lot of studies use values like 16.6ms, 33.3ms, 50ms, and so on. So probably they use normal screens with 60Hz and control the number of frames for which the stimulus is there.

Expand full comment

- So our consciousness runs at most at 2 perceptions per second, and other than unconscious operations, it can not be parallelized.

This initially shocked me. How then, is reading possible? Then I checked the numbers. Google says people read at 200 to 250 words a minute, or about 4 words a second.

Two words per perception. That seems pretty reasonable. I guess the right way to test that would be to see if words were remembered in chunks? Is a paragraph of 100 words remembered in 50 distinct perceptions, or one longer perception?
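The back-of-the-envelope conversion is easy to check. A throwaway sketch, assuming only the figures from the thread (200-250 words per minute reading speed, ~2 perceptions per second):

```python
# Sketch: words per conscious perception, given reading speed (wpm)
# and the ~2 perceptions/second figure discussed above.

PERCEPTIONS_PER_SECOND = 2

def words_per_perception(words_per_minute):
    """How many words each perception must carry at a given reading speed."""
    words_per_second = words_per_minute / 60
    return words_per_second / PERCEPTIONS_PER_SECOND

for wpm in (200, 250):
    print(f"{wpm} wpm -> {words_per_perception(wpm):.2f} words per perception")
```

At 200-250 wpm this comes out to roughly 1.7-2.1 words per perception, i.e. "about two".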

-Do you feel disappointed by the book?

From the review, I feel the exact opposite. All attempts to answer the hard problem of consciousness always seemed incapable of prediction to me. This seems like its showing real and interesting information about the brain.

Expand full comment

Yes, each perception is multiple words at a time at normal reading speeds. Reading one word at a time is much much slower, which is consistent with the original claim. Reading backwards is also very slow, even if the words of the sentences are written in reverse order.

Expand full comment
May 15, 2022·edited May 15, 2022

But that's got nothing to do with how much information one can absorb at a time. Some people are able to get a gestalt of a page (or a scene) and derive detailed meaning from that gestalt. Just because you need to take information in smaller chunks doesn't mean all individuals have that limitation. Samuel Renshaw encountered cases of people who could look at a page of text for about a second, and who could regurgitate the information on the page. Also, there's that neurodivergent artist (whose name I forget) who is able to draw amazingly detailed representations of cityscapes by scanning the scene for a few seconds. So, that 2 perceptions per second, if it's real, doesn't put a limit on the information that the human mind is potentially able to absorb in a given amount of time.

Expand full comment

In a speed-skimming test, I was measured at about 800 words per minute with about 20% comprehension. This is uncannily close to 4 words comprehended per second.

Expand full comment

It's definitely real and interesting information about the brain, but I don't think it really tries to grapple with the Hard Problem at all. Any sense that it does IMO comes from confusion between two different (but confusingly similar) definitions of "consciousness".

Expand full comment

The mode of my beliefs is on physicalism, but the hard problem is the main thing decreasing its subjective likelihood for me. I don't find explanations like the one given in the appendix at all satisfying! The multiple drafts model, for example, seems to just kick it down to the local level. It is very unclear to me why I can't have a physical system which does what a human does, but without the first person experiencer.

That's basically the p-zombie argument, but because my prior on physicalism is so high, my conclusion is that we just don't know why the physical system must give rise to an experiencer yet.

Expand full comment
Comment deleted
Expand full comment

I think I wrote my comment badly. Since I think physicalism is most likely, I conclude that any time there is a set of particles arranged as a human, it necessarily has what is needed to be conscious, so p-zombies aren't a coherent idea.

Expand full comment

"Emergent" notoriously doesn't mean a lot.

Expand full comment

There's a difference between [not] knowing how a mechanism does some thing, and [not] knowing that the mechanism does that thing.

A physical system which does what a human does talks about the first person experiencer.

Yes, this is not an explanation of why experiences exist, but it's still an arrow with the words "CLUE!" written on it in giant letters.

Yes, the clue is confusing. That's what clues do: incessantly point at a contradiction in the way we think about something, without saying precisely what the contradiction is.

Expand full comment

I think this is just agreeing with what I said right? I was saying that while I agree that first-person experience arises from physics, I'm unsatisfied by explanations of how it does so, or claims that I shouldn't need such an explanation.

Expand full comment

I've always interpreted the p-zombie argument as an anti-“it arises from physics somehow” argument, as the entire point is to posit that a perfect physical reimplementation of a human could still somehow miss the essence.

If I understand your position, you're saying that we don't know what's necessary for consciousness, but a complete physical copy should be sufficient, even if we don't understand why.

I'm saying that the p-zombie argument is inherently an argument that such a copy is not even sufficient, that we are fundamentally confused about the what, not just the why.

(To be clear, I think the p-zombie argument is a great argument _in favour_ of “it's all physics”, for the reason I gave in the grandparent)

Expand full comment

I think we're just talking past each other since I agree with everything you're saying, and already believed it before you commented. So one of us must be misreading the other.

Expand full comment

The p-zombie argument can be taken in a strong sense, where no purely physical theory can ever explain consciousness, or in a moderate sense, where the continued conceivability of p-zombies is a symptom of the inadequacy of current explanations.

Expand full comment
Comment deleted
Expand full comment

The continued conceivability of p-zombies is a symptom of the inadequacy of current explanations.

But maybe you are not claiming qualia have been explained.

Expand full comment

My beef with the "ELIZA passed the Turing Test for some people" claim is that Turing imagined the judge in the test actually *trying* to find the robot - trying to present the subject with questions that would require human-like knowledge and creativity to answer as well as a human. His original paper on the test includes things like "Write me a sonnet" and "Do you play chess? Ok, here's a chess problem for you:"

ELIZA can't fool a human who's putting in any amount of effort. It could fool people who were looking for a Rogerian therapist, which perhaps tells us something about how to efficiently give people therapy, but the correct response to "ELIZA passed the Turing test" isn't "guess we have to redefine 'intelligence' again," it's "you didn't actually test it."

(Your review was really interesting, I just wanted to nitpick.)

Expand full comment

I was absolutely going to make this same complaint if someone else hadn't yet.

Expand full comment
founding

I think what you describe is also a good intuition pump for why arguments like The Chinese Room aren't as persuasive when you imagine them in sufficient detail (like how many eons it would take a single person to manually execute a 'program' to, effectively, pass the Turing test in a language that person didn't know).

It also occurred to me, thinking about this while reading the comments here, that not being 'embodied' might be a big impediment to AI passing the Turing test.

> TT (Turing Tester) [to The Chinese Room]: Nice weather we've been having, right?

>

> [~1.0 trillion years later]

>

> Chinese Room [to Turing Tester]: I wouldn't know. I have no body; no eyes (or cameras); nor any other senses. I don't experience myself 'being' anywhere in particular. I'm, apparently, a program, written in English (of all possible programming languages!), stretching across ~300 trillion books, each containing 1,000 pages, and of very small type. I have no idea where my books are stored, or what the weather has been anywhere where they are or have been, for however long they've been there.

>

> Chinese Room: Has it been nice? Tell me about it.

Expand full comment

>For example, binocular rivalry is different in autistic people.

I see both images, I guess it's not just me.

Expand full comment

Yeah, that surprised me, too. I see both images weirdly superimposed and cannot force one or the other to be dominant. Specifically, either I see something like a double-exposed photograph (more common when binocular rivalry is due to parallax: e.g. I'm focusing on a distant object but a nearby object is also in my field of view and catching my attention), or I see a mottled patchwork of areas where one eye's view or the other is "winning" but the other view is still dimly visible through it (more common when I'm artificially presenting different views to each eye, like if I'm wearing red/blue 3D glasses but not specifically viewing content made for it).

When target shooting, I need to close my left eye while aiming or else I see two ghostly sets of sights, and I found it surprising that anyone could aim a pistol without closing one eye.

I am not diagnosed autistic (diagnosed ADHD inattentive type), but I tend to score highly on screening/trait-inventory questionnaires for Asperger/autism spectrum.

Expand full comment

I had normal binocular rivalry until I learned to see stereograms (and defocus my eyes in general) by following the exercises in a Magic Eye book (at age 10 or so, I'm an adult now). I've had the 'double-exposed photo' effect ever since, though I can choose to pay more attention to one image than the other.

(FWIW I'm not diagnosed autistic but definitely pretty spectrum-ish)

Expand full comment

It's striking to me how few people seem to realize that thinking consciousness has a function implies overdetermination. Sure, we can't do certain things without consciousness, but if consciousness is a product of neural activity, then really what you're saying is we can't do certain things without the *relevant neural activity*.

Imagine a facial recognition software program. It detects faces in digital images. If you plug a computer screen into the PC running this program, it will display an output of the analysis with vertices and edges over the input image. Of course, we all know that this display is NOT the analysis per se, the analysis is the voltages on the computer chip (or whatever). The computer (presumably) does not need the visual 'experience' of the face, everything it does can be represented as ones and zeros. You could unplug the monitor, and it would have no impact on the computer's ability to actually perform the analysis. What could it even mean for the visual output on the screen to be functionally responsible for the analysis, when the output on the screen is a product of voltages of a computer chip itself?

Well, that's what you're saying when you say that consciousness does stuff. If consciousness is generated by neural activity, then that neural activity is what is CAUSING behavior, cognition etc. It's just that humans are not modular the way computers are, you can't 'disconnect the screen' (i.e. remove consciousness) without the underlying 'circuitry' also being removed or damaged.

But if neural activity causes consciousness, then fundamentally neural activity is responsible for everything that consciousness supposedly 'does', and there's no apparent reason why neural activity without consciousness would lead to different behavior, anymore than unplugging a computer monitor doesn't affect computation.

Now, of course, it's hard to understand how consciousness can't be causally effective, precisely because of the phenomenon of verbal report (implying consciousness is affecting our brain), but that does not resolve what I wrote above. I don't know what the answer is either way, but I'm skeptical of progress being made when there's so much difficulty in understanding what it is that we're supposed to be working out.

Expand full comment

I think you are getting confused between consciousness as an additional ingredient to neural activity, and consciousness as a label for certain kinds of neural activity.

Expand full comment

I am sorry to stop reading after one paragraph to write an angry reply, but this is one of my pet peeves: the Turing test has not been passed by Eliza or anything else. The test is not “can some human somewhere be fooled”, the test is “will an adversarial expert judge fail to distinguish between a human and a computer despite being able to ask any question they like”.

Turing’s paper: https://academic.oup.com/mind/article/LIX/236/433/986238

Expand full comment

Agreed. I couldn’t get past this sentence: “Dehaene's approach to this is simple and bold: a perception or a thought is conscious if you can report on it.” Reporting is an intentional activity, not mere stimulus response, like sunflowers turning towards the sun. Thus a perception or a thought is conscious if it is the result of intentionality. I’ll agree that circularity is simple, but I don’t think it is bold.

Expand full comment

That was concise and brilliant. Thank you, writer!

Expand full comment

Nitpick: it's been a while since I read Consciousness Explained, but I'm fairly sure that "Cartesian Theater" is the name he gives to the (strawman?) view of consciousness that he's arguing against, the one in which consciousness sits in a little theatre like a homunculus viewing experiences presented to it by the rest of the brain.

It's not to be confused with the "pandaemonium" view that he's actually proposing.

Expand full comment

Yes, I'm also fairly sure that "Cartesian Theater" is the naive view that Dennett aims to overcome in Consciousness Explained. By contrast, "Multiple Drafts Model" is indeed what he calls his own model.

Expand full comment

I wish I could write a well informed comment, but I have no training in the field. However, a decade ago, for work in an unrelated field, I began several years of reading in neuroscience and found these issues so interesting that even though I've forgotten most of what I learned then, I'm succumbing to temptation and writing a poorly informed comment as an enthusiast who has lost whatever edge he had. Blame Substack!

I read Dehaene's book soon after it came out, and don't recall much of it now independently of this review and my marginal notes. In my notes, I got excited about elements of his models of consciousness and memory as distributed networks, and particularly his notion of memory as synaptic circuits that persist latently, and that we revive to consciousness as reenactive performance when interactive perception shifts the circuit into attention (p. 196). That dovetails with a model introduced in Rodolfo Llinas's "I Of the Vortex," based on Graham Brown's portrait of brain activity as a complex steady-state, elicited and modified by interactive perception, rather than as a compound of stimulus-response patterns. (My interest was in the interplay between conscious and unconscious elements involved in execution of complex skills, which was part of the work on "flow" done by Mihalyi Csikszentmihalyi. Dehaene's approach fit.)

On the hard problem, I find it pretty satisfactory to use models of emergent structure as a framework for relating the unified experience of consciousness to its distributed neural elements. As a non-specialist, I don't see why there is a problem in the qualitative difference between material and experiential, objective/subjective modes, or why we should think that a theory could dissolve subjectivity into objective components. The physical sciences, confined to analytic models, don't seem to me the right grounds for resolving that--they tend to force us into consciousness as an epiphenomenon, which still doesn't unpack qualia. Models of emergent structures, or supervenience (I'm never quite sure of the difference), seem to me to represent the way that we can understand subjective phenomena as non-reductive without giving up a commitment to materialism or fleeing to pan-psychism.

I'll add that when I was into this stuff, I found the most satisfying approach accessible (barely) to me to be Olaf Sporns's "Networks Of the Brain," which uses the neural architectures of different species to explore how complexity theory bears on the "shape" of consciousness (or on different forms of consciousness, if you don't subscribe to a unified, on/off toggle model of what consciousness is). Sporns also writes in awareness of the "embodied consciousness" approach, which Dehaene doesn't engage with in "Consciousness and the Brain," and which I think is a promising approach. But Dehaene's work is grounded in his own clinical neurological research, while Sporns, I believe, was working as a theoretical cognitive scientist.

Expand full comment

" or why we should think that a theory could dissolve subjectivity into objective components. "

That expectation comes from reductive materialism. Reductionism requires that everything else reduces to physics, which is objective, so there is no irreducible subjectivity.

Expand full comment

I agree, Geek. However, I think that treating subjectivity as a material thing is a category error. Analyzed consciousness / being conscious is a category difference. So it may be disappointing, but not a mystery, that analysis can't exhaust being conscious. When reductive materialists insist it must, you get a ghost-in-the-machine model, with consciousness as an epiphenomenon, the disappearance of agency, and a tidy model that dismisses experience as non-salient, but, I think, still doesn't explain it.

There's a mirror-image approach that reverses the nature of the problem: certain forms of Buddhist meditation, rationalized by argument designed to discredit analysis. In some forms you wind up with the material universe as an epiphenomenon. (Equally Ancient, but the analytic inverse of Geek, or Greek.)

Expand full comment

It makes me really happy to see somebody else saying, "I think that treating subjectivity as a material thing is a category error"! I seem to have come to this same conclusion through a different route. For me it was reading lots of Kierkegaard, who considers the Western attempt to explain away subjective experience using objective science a source of much suffering.

Expand full comment

>In some forms you wind up with the material universe as an epiphenomenon.

Can you expand on this a bit, please? I think I know what you're referring to — but I'm not sure.

Expand full comment

Wow! Didn't expect to be called out on that, Himaldr! I was improvising. Let's see if I can follow up.

I was thinking of Mahayana teachings, and the model I was referring to (which is one of many available among Mahayana schools) holds that pure Consciousness is unitary, each sentient being belonging to it in its sentience. In the version I'm thinking of, the material world has a form of contingent existence, but this existence is featureless and meaningless--"empty," without independent being--a shifting array of identical atomic particles that assume the guise of entities only through the misperception of Consciousness, which itself sustains them. Through a dynamic of desire, nodes of sentience perpetuate the illusion. Meditation by sentient beings in a human state can reveal the essential emptiness of phenomena and the illusory nature of the material world, along with the aspect of Consciousness that perceives it: Mind. Doctrinal presentation in some schools included rigorous analysis, but only to demonstrate the inadequacy of logic and analysis.

What led me to tack on the comment about Buddhism was that I was drawing on Gilbert Ryle's idea of "category errors," which he, as a behaviorist, derived from positivist traditions conducive to materialist reductionism in the analysis of mind. I was borrowing Ryle to argue in the other direction, and it occurred to me that Zen Buddhism's anti-analytical doctrine also dissolves what we normally mean by mind/self, but makes Ryle's material bedrock go poof! too.

Expand full comment

The given definition of consciousness, "a perception or a thought is conscious if you can report on it", seems to me to include lots of things that I don't consider to be conscious. I don't just mean robots or the canton of Glarus. I mean any measuring device which automatically records what it measures.

Is a (film) camera conscious? Is a seismograph (recorded by a pen on paper) conscious? They can report on perceptions that they have had, so I think that they should be under this definition. Or, more likely, I'm misunderstanding what is meant by 'report' or 'perception' or even 'you'.

Can you explain how Dehaene's notion of consciousness excludes cameras and seismographs?

Expand full comment

I think the misunderstanding is rather in the word "definition". You are using it in the "mathematical" way: a definition applies to all cases, without exception. But for a lot of everyday concepts, definitions only apply to some "normal" range of situations or objects.

The "definition" above is not meant in the mathematical sense, that it should apply to everything. It just means that in a standard, every-day situation, without exceptionary circumstances, a human's perception is conscious if they can report on it. But there are exceptionary circumstances. One is locked-in patients, some of which are still conscious. Or more obvious, if the report is supposed to be given by filling out a form, and an illiterate person is unable to fill out the form, then this doesn't mean that they do not have conscious perceptions. It is just a situation which is out of scope for the definition. Film cameras and seismographs are also out of scope.

Dehaene (or the community) does not even try to give a definition that applies to all situations. They start with a definition which makes sense in "standard" situations, and try to understand the phenomenon there. They might try to push the term to a few selected situations where the data seems so clear-cut that it makes sense to extend it - for example babies and some animals. But they wouldn't apply it to an octopus, or a robot, or a seismograph.

(In fact, Dehaene does give something like a definition, but that is in terms of neural activity: "there is a P3 wave, and ...".)

Expand full comment

So Dehaene is saying something more like: Assuming that we are dealing with a person, here is a definition of conscious vs unconscious behaviors.

This is an interesting topic to study, as clearly demonstrated by this review. But I think it undermines the appendices.

Is a robot or the canton of Glarus conscious? This definition doesn't apply. I'm guessing that, for animals and babies, they shifted the definition of consciousness to "global ignition", unless they have some really good way of communicating with them. This definition, "large parts of the brain are activated", is also limited to things with brains.

This also sidesteps the hard problem of consciousness. If you take this as the mathematical definition of conscious, then it leaves qualia unnecessary. If you take this as a rough definition that works decently for people, then qualia might still be a necessary condition for 'consciousness', or 'people'.

Expand full comment

Yes, that is pretty much to the point.

I just want to add that for animals and babies, the shifted definition is a lot more specific than just "global ignition". As I said, Dehaene does give something like a definition, and it includes four characteristics that he and others observed during conscious perception:

1) global ignition, including parietal and prefrontal areas;

2) a characteristic timeline: at first (at ~100ms) only activity in the sensory region. Then (at ~280ms) a drop of activity in sensory regions, but increase of activity in parietal/prefrontal areas. Finally (at ~300-430ms) global activity and re-activation of sensory regions. For unconscious perceptions, the timeline is roughly the same until 200-280ms, but the global ignition and re-activation is missing. (Dehaene makes it even more detailed than that.)

3) a late and abrupt increase of high-frequency oscillations;

4) a synchronization of activity over many and distant areas (the Granger causality that I mentioned).

There is ongoing debate on what exactly to include, but the point is that it is a pretty complex and characteristic signature. This whole signature is observed in babies as well. Point 2 is stretched a little bit because the timeline is slower; but all phases are still there, and in the right order. I found it convincing to conclude that babies are also conscious, because they show the same characteristic signature. And similarly for some animals.

On the other hand, this definition is completely out of scope for robots, or Glarus, or even an octopus. There the absence of this signature does not tell us too much, and "does not apply" is the best we can say.

Expand full comment

Thank you for this summary

Expand full comment
founding

> Are babies, animals, or robots conscious? For babies, yes, they are conscious. Their consciousness is 3 times slower than that of adults, which probably has purely physical reasons. The cables in the baby brain are not insulated. The insulation just doesn't fit into the baby skull. Uninsulated fibers have lower transmission speed. The insulation is added later in several surges, the last and most drastic of which happens during puberty. Be patient with your babies and kids, and yes, even with your teens.

Huh, this feels like the opposite of my subjective experience (and, I thought, 'common knowledge'), that 'time passes more quickly as you get older.' That is, I thought there were more 'moments per unit time' (which I'm guessing are these consciousness moments?) when I was a child than now as an adult.

[I can give a story for why it makes sense that adults are faster at this--they have more experience, if nothing else--but it's weird that those observations are out of line with other observations I have, unless I'm misunderstanding some link.]

Expand full comment

I think this 'fast passing of time' is our hindsight experience, like when you think back on the last month or the last year and are surprised how fast another year has gone by. This is not necessarily the same as the experience 'in the moment'. If you are feeling bored in some talk, then time seems to pass very slowly in the moment, but in hindsight (after some time has passed) you have little memory of the talk.

But I don't have a good explanation for how exactly our sense of time is formed. I doubt that it comes (only?) from counting the number of conscious perceptions. It probably depends (also?) on our emotions and excitement.

Expand full comment
May 14, 2022·edited May 14, 2022

I don't have a consolidated and centralised understanding of time perception, but I will note here that there are studies that suggest our motor system may have a role in defining our internal clock. There is a paper called the Rhythmic Theory of Attention by Fiebelkorn & Kastner which suggests that there is alternating theta-band activity between 'sampling' and 'shifting' periods, where sampling involves heightened perceptual activity and shifting involves heightened motor activity. The core idea is that these are oscillating on a timeframe that is between 3-8Hz (where in this range this oscillation occurs I think depends on context, would need to reread).

But motor actions have big effects on perceived time and you get experiments where time perception locks onto motor actions (from Benedetto et al. 2019, The common rhythm of action and perception):

> The existence of a mechanism that keeps perception and action finely synchronized is suggested by a recent study by Tomassini et al., (2018). The authors assessed interval estimation for a brief visual stimulus (150 ms) that was shown (at random times) while participants were performing rhythmic finger tapping (at 1 Hz). Perceived visual time undergoes distortions which are locked to the motor acts; time is compressed close to the onset of finger taps and expanded in-between successive taps. Remarkably, the temporal dynamic of these perceptual distortions scales linearly with the timing of the motor tapping, so that maximal time expansion is always experienced at the center of the inter-tap interval, independently of the natural (trial-by-trial) variability in the tapping rate (see figure 4). Perceptual time is thus anchored to the internally-dictated rhythm of motor production. These results indicate that even if the sensory and motor clocks might be distinct, their functioning is nevertheless strictly coupled.

Does this answer how our sense of time is formed in the long-run? There are implications which seem to come naturally - i.e, if we're remembering time durations from long periods in the past, we won't have a guiding framework of motor action, so these are liable to distortion. I'm unsure if this argument would hold up to the motor-clock evidence, I need to familiarise myself more with that literature. There's also a potential link between the rhythmic attention theories and consciousness time-sampling periods you mention in the review, but again, I'd need to do more research to see if that substantiates itself.
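The rhythmic-sampling idea is easy to caricature in code: if detection probability waxes and wanes with a theta-band oscillation, then stimuli landing near the preferred phase get reported far more often than stimuli landing at the trough. A toy simulation (the 5 Hz rate and the modulation depth are made-up illustrative values, not numbers from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_hz = 5.0                          # assumed rhythm in the 3-8 Hz band
dt = 0.001
t = np.arange(0.0, 10.0, dt)            # 10 s of "attention"
phase = 2 * np.pi * theta_hz * t
p_detect = 0.5 + 0.4 * np.cos(phase)    # detection probability oscillates

# present 2000 brief stimuli at random times; flip a biased coin for each
stim_idx = rng.integers(0, len(t), size=2000)
detected = rng.random(2000) < p_detect[stim_idx]

good_phase = np.cos(phase[stim_idx]) > 0.9    # stimuli near the cycle's peak
bad_phase = np.cos(phase[stim_idx]) < -0.9    # stimuli near the trough
print(detected[good_phase].mean(), detected[bad_phase].mean())
```

The gap between the two detection rates is the kind of phase-dependent effect these studies look for in behaviour.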

Expand full comment

Well, the perception that time is passing really fast is common to flow states and other kinds of auto-pilot. I could see the idea that time 'passes faster' for old people because we have far more of an autopilot than young children, who have to consciously think about almost everything.

Expand full comment

Apropos of unconscious performance:

when I am trying to solve some sub-problem I haven't before, the first thing I do is bang out some code as fast as possible without thinking about what I'm doing. I keep my goal in mind and just write lines until I feel like I'm done.

Sometimes this stream of consciousness code is useless garbage, but usually it is useful as a starting point and sometimes after some cleaning up it is outside my capacity to make it better.

Can y'all report similar patterns in things you are good at?

Expand full comment

>We know this because it happened several times. The first time was in 1966, when ELIZA passed the Turing test. ELIZA was a chatbot who could fool some people to believe that they talk with a real human. Before ELIZA, people assumed that only an intelligent machine could do that, but it just turned out that it is really easy to fool others.

Also, see Scott Aaronson's interview with "Eugene Goostman", a chatbot that was widely hailed as having passed the Turing test.

https://scottaaronson.blog/?p=1858

Very funny. And it shows that passing the Turing test doesn't require a smart AI: a stupid human interviewer works equally well.

Expand full comment

What's studied in this book sounds barely relevant to the thing normally called "consciousness" in English. It's basically a giant bait-and-switch.

"I am going to give you an explanation of what happens in black holes," a similar book could start. "Of course, black holes are defined as holes which are black, like the ones made by moles in my back yard. After careful scientific study, the following types of mushrooms can grow in black holes..."

Expand full comment

This book isn't about a topic I have much knowledge or interest in, so I don't have strong opinions on the content. But I wanted to say that you're a really good writer, so it was a really enjoyable read! I genuinely laughed out loud at the sentence "If you find this disappointing, then you will also be disappointed by "Consciousness and the Brain" by Stanislas Dehaene."

Expand full comment

Judging by this review, the book does a good job in reporting the biological phenomena correlated with consciousness, but makes no progress whatsoever in solving the mind-body problem. The reviewer comes close to recognizing this at the end: "More importantly, it goes against the intuitive meaning of consciousness for 99% of the people. So if we want to describe the concept of 'all-parts-communicate-and-are-coherent-and-Granger-causal', then we should better invent a new name for it." In other words, the physical characteristics described by the book--all parts communicating, coherent, Granger causal, and all the rest--are NOT equivalent to consciousness. As the critics in Appendix A point out, consciousness is qualia: pleasure and pain, the experience of seeing red, love and fear and hate. Explaining that the whole brain is involved in these processes, or that they form episodic memories, or that they're slow instead of fast, comes nowhere close to explaining the nature of these subjective experiences or how they arise from matter (if they indeed do). It is as if I asked what makes a car move, and Dehaene told me that when cars move, they get hot, their components work together, they emit something from the back, and they get lighter. Those are all true, and they might be useful insights toward an explanation of how cars move, but they're nowhere close to a complete explanation. For one thing, none of them even mention the concept of movement!

The practical impact of not having any answer to the hard problem of consciousness is pointed out by the reviewer, but then unconvincingly dismissed:

"Robots won't be conscious in the exact same way as we are. They might be “conscious” in a fascinating different way, but that seems to become a matter of definition and taste, not a matter of insight. So we should not base our treatment of robots on the question whether they are conscious."

Really? So if you learned that forcing a damaged robot to work puts it in extreme pain, akin to the suffering felt by gulag prisoners on the edge of death, you would treat it exactly the same way as if you learned that the robot is no more conscious than a rock? That goes strongly against my moral intuition, and probably against the moral intuitions of 99% of the population.

Expand full comment

I strongly agree with this objection to the usage of the word "conscious", and I angrily stopped reading the review after the 1-2 punch of being told that the book tried to explain consciousness and then being told that they were using the word to refer to something it doesn't refer to. As someone who is interested in the actual mind-body problem, I find it very frustrating when people do that. (And it's not because I expect to stumble into a solution -- I'm just really interested in people seriously trying to tackle it.)

Expand full comment

Same experience here. I was disappointed to find the promise in the introduction so far from fulfilled.

Expand full comment

What I would say is important is whether or not the robot *wants* to avoid the extreme pain, to the same degree as a human would, and not whether the pain has the further property of *consciousness*. I think that to the extent that our moral intuitions disagree with this, it's because our moral intuitions tend to conflate wanting something with consciously wanting that something, because the wantings we are consciously aware of, and thus think the most about, are conscious wantings.

Expand full comment

Author here.

Embarrassingly, the link which says "This link is worth clicking" is broken. It should refer to Figure 2 of the paper, which shows a ridiculously strong effect of schizophrenia for unmasked priming. This is the right link:

https://www.pnas.org/doi/10.1073/pnas.2235214100#fig2

Expand full comment

I enjoyed your review! That link also goes directly to the paper, not to the figure in question; let's see if this works though: https://www.pnas.org/cms/10.1073/pnas.2235214100/asset/a7460495-7bc9-4b1c-8df4-63dd683ddc82/assets/graphic/pq2235214002.jpeg

Agree, it's a striking finding and well worth clicking that link

Expand full comment

Thanks a lot for fixing that!

The other link still works for me; probably my browser has some different settings/cookies/whatever.

Expand full comment

This was excellent, thank you!

Your further extrapolation that consciousness is fundamentally about the formation of episodic memory makes a testable predication, though. Are there any situations where we can be confident that a person is unconscious, and yet later on they have non-spurious episodic memories of the event?

Expand full comment

Thank you!

That's an excellent point you raise. I don't know of any good data that we have yet.

But it doesn't seem super-hard to me to test it. For example, falsification could come from anaesthetized patients. I don't know what level of supervision is normal, but if we do/add some EEG measurements (or use some of the tests that Dehaene has for locked-in patients) then we should be able to tell whether they have conscious perceptions or not.

There are lots of stories where anaesthesia does not work so well, and patients have some episodic memories afterwards, for example of what the surgeons talked about. If this happens even without consciousness, it would falsify the theory. On the other hand, if patients can only report memories when the consciousness tests are positive, this would be strong support for the theory.

I don't know how often it really happens that anaesthetized people form memories. It's definitely not what should happen. But anaesthetic doses are always balanced to stay at the minimum required level, so perhaps it's not so uncommon.

Expand full comment

Aren't dreams literally unconscious episodic memories? Or is that a different kind of unconsious to the definition used in the book?

Also, where do false episodic memories fit into this? The individual certainly wasn't conscious during the false event, as there was no event to be conscious during. The manipulation required to create the false memory might require consciousness at the points of manipulation, though it's my impression that the manipulation is a gradual process that could conceivably be done unconsciously.

If asked what an unconsciously formed episodic memory would look like, that'd be it: unconscious learning reinforcing associations until a big enough net of associations clicks together and you have a false episodic memory that fits all of those associations.

Expand full comment

It's hard to rule out that an episodic memory could be formed implicitly like this, and this touches points where the boundary between episodic and procedural memory becomes fuzzy.

As you suggest, my guess would also be that the test subject imagines the event during manipulation and forms an episodic memory of this imagination. Probably at this point they know that it is only an imagination. But our memory can change over time, and the info "this was just an imagination, not a real event" can get lost. I would guess that the manipulation tries to push the subject in that direction. If you are told that you have been on this balloon trip, and you recover an old episodic memory of an imagination of the trip, then you might re-store it falsely as a memory of a trip. That's why the memory should be activated several times: activation is always an opportunity to change and adapt the memory.

This all sounds consistent to me, but admittedly it is more speculation than knowledge.

Expand full comment

I don’t know but I can easily think of the opposite case - anterograde amnesia in which a person acts completely normally on the timescale of a minute or so but they have no ability to form memories. Look up Clive Wearing for example.

Expand full comment

Great review. It's made me want to read the book, which is one measure of success. And the sentence, "Dehaene phrases this in a way that ACX readers will love" made me feel like a very special flower (among a whole field of other special flowers here, I know!). Having book reviews written just for us is surprisingly complimentary.

Expand full comment

Consciousness changes the universe from a complex, interesting, but morally neutral system of physical forces and particles to a place where suffering is possible. It's the difference between a machine of metal screeching because its mechanisms are jammed, and a person being tortured. It's a question of enormous weight, the biggest weight of all, and to hear advances in neural sciences lead people to dodge the hard problem or say that they're now more bored of it - rather than excited and more interested - is both baffling and worrying for me.

Expand full comment

Also joy and delight. Do they weigh less than suffering?

Expand full comment

I strongly endorse this. This idea has long seemed very significant to me, and it is here formulated very well.

Expand full comment

It is quite a frightening thought really, perhaps that is why people try to dodge it.

Expand full comment

I don't think this is quite right - the issue to me seems to me to be about *desire*, in that the person *desires* not to be tortured, while the machine (if it's like most current machines) doesn't desire anything. But there are better prospects for naturalizing this concept of desire than there are for "consciousness" - and I would also put moral weight on the desires people have that are unconscious, suggesting that desire is more relevant than consciousness.

Expand full comment
May 16, 2022·edited May 16, 2022

Is that so? I can imagine a sentience designed to be so imbecilic that it is capable of suffering but not capable of outcome-relevant thoughts, and therefore of anything we'd call desire. I imagine agonium - if I understand correctly that that word is meant to refer to a hypothetical arrangement of matter designed to contain as much suffering as physically possible - would be like that.

But I'm going purely off my intuitions here, which perhaps means nothing.

Expand full comment

My intuitive concern about this case is that it's not clear what would make the state "suffering" unless there is something outcome-directed about some of the states of the creature.

Expand full comment

Agonium experiences pain, but might not even be able to imagine a different state of affairs, let alone desire it.

Pain doesn't seem, in itself, outcome-directed. Sure, if you imagine touching a hot object, you flinch away, but what does a toothache motivate you to do, aside from seeking out a dentist, which requires the existence and knowledge of dentists?

Expand full comment

I question the conceptual possibility of agonium. I think a state has to be unwanted in some way to count as pain. I don't think there has to be specific activity that is motivated by the pain, but there does have to be enough structure to count as not wanting it (particularly if this pain is supposed to have negative normative significance - after all, people sometimes *want* some pain, and in those cases it would be prima facie wrong to prevent it).

If you just have a collection of neurons signaling in a way that counts as pain for humans, but unconnected to the rest of a brain or mind, I don't think the resulting state would be real pain any more than a description of pain in a work of fiction is real pain.

Expand full comment
founding

Did Scott pick a good first review on purpose, to keep us hooked? Because it worked. It's the first time in decades that I really updated on what I understand by "consciousness".

Expand full comment

The notion that schizophrenia and conscious deficiency are associated has obvious parallels with Julian Jaynes. Does the author explore that at all?

Expand full comment

I was thinking about Jaynes while reading throughout, though in a different context. I've always thought that Jaynes was really exploring the origin of self and thus self-consciousness. Often people are shocked by Jaynes' notion of an "origin of consciousness."

My sense is that Dehaene's version of consciousness has, indeed, been around for a long time, evolutionarily speaking

"For mammals, it looks like a universal Yes. We have pretty clear evidence of consciousness from apes, monkeys, dolphins, rats and mice, some of which came after Dehaene's book."

But the ongoing idea and experience of an individual self largely arose after "the breakdown of the bicameral mind." Daniel Dennett's supportive interpretation of Jaynes, in "Jaynes Software Archeology," is focused on this aspect,

"The project is, in one sense, very simple and very familiar. It is bridging what he calls the “awesome chasm” between mere inert matter and the inwardness, as he puts it, of a conscious being."

The dissatisfaction expressed by some readers of Dehaene is due to the fact that they are interested in "consciousness as experienced by a self," not consciousness per se. The sense of self arose when we could no longer reliably trust authorities, when our minds needed to adjudicate regularly between authorities (to come to "its own" judgments, decisions, and conclusions). This process of routinely needing to come to our own individual understanding rather than simply hearing the "voice" of god, authority, or community, this increased need for communication between the two hemispheres across the corpus callosum, then led to the gradual cultural development of a sense of "self."

On a quick Google, I found many articles noting various deficiencies in the corpus callosum associated with schizophrenia, perhaps leading to the conscious deficiency you note. Here is one,

"Patients with schizophrenia also showed thinner corpora callosa than controls but effects were confined to the isthmus and the anterior part of the splenium. "

https://pubmed.ncbi.nlm.nih.gov/34095833/

If the role of the corpus callosum is interhemispheric communication, and schizophrenia is characterized by a conscious deficiency, it would be consistent with a Jaynesian construction of identity to find a smaller corpus callosum associated with a more weakly integrated consciousness.

Expand full comment
May 14, 2022·edited May 14, 2022

More of a university class than a book review, but really interesting and well written. I really am unconvinced by the tie of Dehaene's idea of consciousness to what most people picture when they think about, say, unique properties of sapient beings. I know the review tried to bridge the gap, but it failed miserably.

My favorite review so far!111

Expand full comment

Fascinating!

Slight tangent about binocular rivalry:

"Binocular rivalry occurs if your two eyes are presented with different images. In this case, most of the time you don't see a weird overlay of the two images, but instead your conscious perception flips between seeing either one or the other." ... "binocular rivalry is different in autistic people."

That's really interesting, as it suggests it could be used as an objective test for autism, rather than observing behaviour. The link is paywalled, but I found some other links suggesting the speed of flipping is different in autistic people or something.

I also found some links about binocular rivalry and aphantasia: most people can prime themselves to see one image rather than the other by visualising red or blue before looking at it, but people without visual imagination can't do this.

I just tried some binocular rivalry tests on myself using a cheap pair of 3D glasses. I did the ones on https://en.m.wikipedia.org/wiki/Binocular_rivalry and https://aphantasia.com/binocular-rivalry/ . I mainly did just see a superposition of both images, not the alternation you're supposed to see. (The only one where the alternation worked was the "warp and weft" one on the Wikipedia page, which worked like a Necker cube or the spinning dancer illusion for me. But with the words on Wikipedia, or the animals on aphantasia.com, I just saw both superimposed.)

I am probably on the autistic spectrum and very likely aphantasic, but just seeing the images superimposed doesn't seem to correspond to either of those, and I wonder what it does correspond to (except maybe having inadequately colour-filtering 3D glasses?)

I'm also confused how 3D glasses in general work as intended for anyone if binocular rivalry is a thing and works as described. Isn't the 3D image caused by the superposition of the two images? If people actually see the two images alternating, how can they see the 3D image?

Expand full comment

Technically your eyes are always seeing two different images, even without 3D glasses, since the perspective differs. I'm guessing the difference between seeing the two combined, vs alternating, has something to do with how similar they are/how well your brain can combine them into a coherent world-model. 3D stuff is designed to cohere.

Expand full comment

Sorry, I didn't notice the paywall, as I wrote the review from my university account. The summary on binocular rivalry from the linked article is:

" Individuals with autism show weaker binocular rivalry. Here, two images, one presented to each of an individual's eyes, alternate back and forth in perception as each is suppressed in turn by competitive interactions in visual cortex. In autism, individuals report (via button press) fewer perceptual switches between the inputs to their left and right eyes, as well as a reduced strength of perceptual suppression (when one image is fully suppressed from visual awareness). This replicated behavioural signature of autism in vision is predictive of the severity of social cognition symptoms measured using the Autism Diagnostic Observation Schedule (ADOS)127,132,133."

And as orthonormalbasis already said, the competing images in these tests are chosen to be incompatible with each other, for example a face for one eye and a house for the other eye. Then most people (though not all, and not always) see only one of the two images at a time: the house for a few seconds, then the face for a few seconds, then the house again, ....

If both images are compatible, then we usually perceive them as a single scene, and most of the time we are not even aware that the left and the right eye get slightly different input.

Expand full comment

Thanks!

That makes a lot of sense, since (AIUI) autistic people tend to do more bottom-up processing, more conscious perception of what's "really there" with less of the filtering or preprocessing that the brain usually does.

Expand full comment

OT: DeepMind are hiring for alignment research (https://www.lesswrong.com/posts/nzmCvRvPm4xJuqztv/deepmind-is-hiring-for-the-scalable-alignment-and-alignment) and that post contains a link to a paper (https://arxiv.org/abs/2201.02177) describing work around the topic of "grokking*", that is, deeply understanding a concept and how ML systems achieve that from the data that they are trained with.

Anyway, the quick thought is.... Is it possible to take a trained model (say a large language model) and then calculate the optimum training set for creating that model? Think of it as asking "what is the smallest amount of training data that could have turned a random network into this one, given the training algorithm, and what should that training data consist of?"

* From Heinlein's 'Stranger in a Strange Land', since you asked.

Expand full comment

Thank you so much for this wonderful review of a great book on a fascinating subject!

I read the book a few months ago (many thanks to the ACX reader who insisted on recommending it to me during an ACX meeting!) and found it a wonderfully clear synthesis of a fascinating and very complex subject. I find your review a great presentation of the book with some very interesting additions, and the synthetic way you described the book clarified some points for me. Thank you!

Expand full comment

Does anyone else start salivating when they read about Pavlov's dogs?

Expand full comment

I kept stumbling over the term "unconscious". To me, someone is unconscious if they get clubbed over the head. Is "subconscious" not, or less correct?

Expand full comment

"Subconscious" is a fuzzy psychoanalytic term. "Unconscious" simply describes that which is not conscious. You can refer to an individual observation as unconscious, or you can refer to a person who is currently not showing coordinated brain activity as unconscious. It just depends on the context.

Expand full comment

Apparently even Freud came to dislike the word "subconscious" in favor of "unconscious".

Expand full comment

Not very related to the review, but I just had a crazy experience with the rotating mask illusion: when I now first watched the video, I didn't know what the illusion is about - and I normally saw the hollow, back side of the mask. Then I read the description of what people usually see - and now I can't see the hollow side any more!

Expand full comment

> “However, if the image enters consciousness, then after 120-140 ms all neurons in the lower layers suddenly start to encode "diagonally". Now they agree on the same interpretation of the world.”

This concept of unconscious processing possibly followed by conscious processing reminded me of the description of two distinct sensations for every perception in the book “Mastering the Core Teachings of the Buddha” (presented in this blog some time ago). I refer to the following text passage (Part I, Chapter 5, on Impermanence):

> “We are typically quite sloppy about distinguishing between physical and mental sensations (memories, mental images, and mental impressions of other physical or mental sensations). These two kinds of sensations alternate, one arising and passing and then the other arising and passing, in a quick but perceptible fashion.

[...]

This habit of creating a mental impression following any of the physical sensations is the standard way the mind operates on phenomena that are no longer actually there, even mental sensations such as seemingly auditory thoughts, that is, mental talk (our inner “voice”), intentions, and mental images. It is like an echo, a resonance. The mind forms a general impression of the object, and that is what we can think about, remember, and process.

[...]

Each one of these sensations (the physical sensation and the mental impression) arises and vanishes completely before another begins, so it is possible to sort out which is which [...]”

I assume that the mental “echo” corresponds somewhat to consciousness as defined by Dehaene. The indicated possibility of consciously observing the preprocesses of consciousness seems fascinating.

Expand full comment

If both of these sensations can be observed by a meditator, then does this not suggest that the actual consciousness that's doing the observing is something else from the mental sensations which seem to map to what the book calls consciousness?

One can easily imagine a being experiencing only physical sensations.

Maybe the direct physical sensations are not easily observable to an untrained mind - it would be interesting to get a meditator to do some of the unconscious task experiments to see if they experience anything different.

Expand full comment

I loved this review! Extremely interesting. I will definitely be telling friends about it. And now I want to nitpick:

"The brain is very good at decomposing the world into units that make sense." tripped my passive voice sensor, and I asked, "Make sense to who?" Then it tripped my tautology sensor, because "who" is "A person with a human brain".

Does "make sense" mean anything more than "has been decomposed into units by the brain"? Because, if not, this sentence can be transposed into, "The brain is very good at decomposing the world into units that the brain is very good at decomposing the world into."

I don't think this nitpick represents a problem with the argument itself, which introduces an alien observer to help guard against exactly this sort of human-brains-evaluating-human-brains tautology trap. Maybe the first sentence just needs some slight tweak to say more precisely what it means.

Expand full comment

I wonder if this approach to consciousness resolves some of the arguments about free will. For a while, it seems like people have used the timing difference between actions and our conscious thoughts about those actions, which appear later, to show that the actions are not the result of free will. Hence free will is an illusion. Sure, pulling your hand off a hot stove is automatic rather than conscious, at least until after the fact. But in perception the conscious brain also rewrites the activity of the base level sensory neurons to correspond to the conscious interpretation. I wonder if a similar process works for decisions and actions, where the conscious brain makes the decision and then sends signals to the motor neurons, and then the action happens. I don't know, maybe the empirical results don't fit that, but I find it appealing as a mental model.

Expand full comment

Suggestion: in the brackets link 'This is a finalist' to a Google Doc with the list of all finalists. So that the ambitious readers can read them back to back if they'd like to.

Expand full comment

Perhaps Pycea can do us another favor and create a link to the finalists?

Expand full comment

This may be telling us that intelligence is not what we think it is. It seems like a good overview of the mechanical basis of consciousness, but this review makes me think it would be shorter to read the book.

Expand full comment

As I believe Scott himself once also related, I find that the main thing that happens to me after discussing the hard problem of consciousness with people who think it has been solved or reduced is that I start worrying that half of the people around me actually are p-zombies.

Expand full comment

I've become convinced that we all are!

Expand full comment
May 14, 2022·edited May 14, 2022

Review-of-the-review: 9/10

This is a very strong early contender! Super clear, surprisingly concise, thought-provoking. My favorite features were the motivating introduction which was compelling and very "Scott-ish", and the two appendices addressing _exactly the questions I had_ after reading the main body of the review. My least favorite parts were the paragraphs on memory vs consciousness and "Are We Smart Enough To Know How Smart Animals Are"; the injection of author's own knowledge and commentary disrupted the flow of the review for me.

Substantively, the review persuaded / informed me that the threshold of "consciousness" studied by Dehaene is a real phenomenon with important effects on how we perceive, experience, and model the world. It reminds me of Kahneman's "System 1 / System 2" distinction except with even smaller thresholds of latency and intentionality. On the other hand I'm even less convinced than the author is that this version of "consciousness" is useful for answering philosophical questions about the mind or subjectivity. If you did a masking experiment on me and then took me through the results, showing me the subconscious cues and that my responses were more accurate than chance, I might affirm that I didn't _notice_ the cues but I wouldn't deny that I _saw_ them. In other words I wouldn't disavow the "I-ness" of my subconscious responses. That I can't report on experiences that never "reached fixation" in my brain is vacuously true in a way that makes me suspicious of it as a philosophical argument. It's like reducing the mind-body problem to "if the brain is destroyed there's no longer an observable mind"; yes, of course you can observe an effect in that direction, but it doesn't demonstrate the inverse!

As always, many thanks for contributing!

Expand full comment

I also loved the review.

Regarding the usefulness of this version of "consciousness" for answering philosophical questions about the mind: personally, I am forever flipping between the opinion that the hard problem of consciousness is a bad question, like a modern version of vitalism, and the opinion that the existence of consciousness is indeed a mysterious and fascinating phenomenon that needs to be explained.

Expand full comment

"from my internal perspective, I don't think neurons account for what I'm experiencing."

I said nothing of the sort. I am making no assertion about the causal relation between neural activity and experience.

Expand full comment

I have not said anything about experience being “mysterious.” (You keep saying that.)

Expand full comment

There's a large logical gap in your account of what one "has to accept."

I fully accept the bulk of modern neuroscience. (The only studies in neuroscience of which I tend to be skeptical are the ones of which Scott also tends to be skeptical.)

I have mentioned no "demon." (You have.)

Expand full comment

I'm not a Cartesian. Not even close.

There are many cases in science in which X causes Y, and yet X and Y are distinct phenomena. Your argument here is not valid.

Expand full comment

The much slower rate of conscious analysis is why martial artists train repeatedly. Even when people know a move, the point is to practice it enough that it starts before they notice the attack coming.

Expand full comment

I don't understand why people are confused by 'qualia'. To me, it very clearly seems to be that the ability to notice that I am experiencing {whatever is happening at the moment}, is just a special case of consciousness, being conscious about being conscious. When I notice my own consciousness and me being a person in a real world right now, it also evokes feelings of awe, excitement and solemnity, among other less discernible ones.

But neither the noticing or the feelings are special, they just apply to higher-level concepts. Also, they are really complex/'high-bandwidth', which makes them feel vivid and special.

If anyone is confused about 'qualia', I would love to try and answer concrete questions, because the 'hard problem' just doesn't qualify as a problem _at all_ to me, if stripped of all the superfluous words around it.

Expand full comment
May 15, 2022·edited May 15, 2022

The way you've formulated this makes me think you're talking about something different than what is usually meant when discussing qualia / HPoC.

That is, it's nothing to do with specialness, vividness, or feelings of awe; not necessarily high-bandwidth or complex; not related to being conscious of being conscious of an experience. These are not only not salient features of the phenomenon, they aren't necessarily features at all.

E.g., an experience of "redness" is not necessarily vivid, awesome, complex, reflected upon, or consciously noticed. The question of what it is and why it should be so applies regardless, unanswered by "well the experience of redness is like that because it's a type of consciousness."

My question would be, then, how it is that you think "noticing a subjective experience is just a special case of being conscious of being conscious" answers any questions about what subjective experience or consciousness actually are or how and why they are produced!

"It's a pattern of neurons" or "it's a type of feeling" or "it's a particular type of awareness" no more answers how and why this wavelength of light should feel like redness than saying "it's a series of numbers" answers to a caveman why and how a computer works — it's not wrong, but it's not enough.

Expand full comment

I see, it is indeed different, thanks for pointing that out. I was confused because apparently there are many formulations to HPoC, some of which do not involve the word 'qualia', and _those_ were the ones I referred to, in the process inventing my own meaning to 'qualia', as I was sure I understood what was being discussed. Sorry about the confusion. But I think my worldview also explains what people actually mean by qualia.

The multiple formulations are confusing and indicate to me there's actually no one Hard Problem. Rather, different people are perplexed by different questions, and I think it's better to tackle them directly.

Could you formulate yours, using only observations about the world and about your personal perception of it, without any ambiguous terms?

Expand full comment

I see a lot of ideas here that are reminiscent of, or straight-up the same as, those in the book "A Thousand Brains: A New Theory of Intelligence".

Expand full comment

I loved this discussion, but I think your treatment of the hard problem of consciousness leaves much to be desired.

In particular, the primary philosophical argument for taking the hard problem seriously isn't Searle but Chalmers's work on the conceivability of philosophical zombies: that is, people who act exactly like us (including claiming they are conscious) but lack experiences. Indeed, Searle-style arguments such as the Chinese room etc. are relatively disfavored lately (though, based on my discussions with Searle, many of the views he actually takes are different from the way his views are often summarized in the literature).

What makes the hard problem hard isn't explaining why experience would have structure but why it would be there at all. Moreover, if you can't explain why such and such neural firings give rise to any kind of experience you can't explain why that experience has a feeling that in some sense reflects the representational work that neural activity is doing in the brain.

Expand full comment

One issue I have with this review that I have not seen mentioned is that you make a lot of claims of the form "science shows..." without explaining the study setup. For example, you say "babies are conscious" without explaining what exact experiment was done to supposedly show this. This is a big problem, because there's no conceivable experiment I can think of that can show a 2-week old is conscious -- 2-week olds are extremely hard to study!

Even for, say, 4-month olds, the main way people study them is by measuring how long they spend looking at different stimuli. How do you get from that to consciousness, i.e. to self-awareness of their own thought processes? Perhaps there is a way to do this, but the fact that you don't explain what it is makes me skeptical. It makes me think you're trying to pull a fast one on me.

This keeps happening in the review, not just in the appendix. For example:

"In dream phases (REM sleep), external stimulation usually does not spark consciousness. However, the brain does react like a conscious brain if the stimulus is directly implanted into the brain via magnetic stimulation (TMS)"

Oh yeah? And what were the experiments that demonstrated that people do, or do not, have self-awareness of their own thoughts while they were dreaming? I am almost certain that any such experiment is subject to critiques of the form "this doesn't show what you say it does". Just because you have a paragraph saying "oh but researchers were very very careful" doesn't exempt you from explaining *how* they were careful -- you have to show, not tell.

In other words, I accuse you (and perhaps the author of the book) of editorializing: instead of presenting the scientific findings ("experiment X showed Y"), you present a story you claim has been proven by these findings, without nearly sufficient justification.

(I come across as bitter because this review annoyed me, but I should also mention that I found it to be well-written and I did learn some things from it, so thank you for writing it.)

Expand full comment

This accusation should go to me, not to the book. The longest chapter of the book is on the question of how to recognize consciousness from EEG signals. The point is that the signal looks totally different for unconscious and conscious perceptions, and it has a very characteristic shape for conscious perceptions. Dehaene takes this as a fingerprint of consciousness, so he concludes that when we see this characteristic shape, conscious processing is going on.

Now, this doesn't solve your problem, because whether such an argument is convincing depends on the technical details of how specific this fingerprint is, how much exactly the unconscious and conscious signals differ, and so on. This is impossible to discuss without going fully into the technical details, which is why I did not include it into the review.

Personally, I found the fingerprint convincing after reading a book, but if you want to form your own opinion, I don't think there is a shortcut to reading the book (or even the research papers).

To give you some idea, there are four major criteria that Dehaene requires for his definition of this fingerprint of consciousness. All of them are there during conscious perception, but not during unconscious perception:

1) global ignition during the P3 wave, including parietal and prefrontal areas;

2) a characteristic timeline of activity (a sequence of P1, N1, N2, P3a and P3b waves): at first (at ~100ms) only activity in the sensory region. Then (at ~280ms) a drop of activity in sensory regions, but an increase of activity in parietal/prefrontal areas. Finally (at ~300-430ms) global activity and re-activation of sensory regions. For unconscious perceptions, the timeline is roughly the same until 200-280ms, but the global ignition and re-activation are missing. (I'm simplifying.)

3) a late and abrupt increase of high-frequency oscillations;

4) high Granger causality between many and distant areas.

The point is that it is a pretty complex and characteristic signature. Since we observe this signature in babies as well, Dehaene concludes that they are also conscious. (Point 2 is stretched a little bit because the timeline is slower; but all phases are still there, and in the right order.)
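Purely as illustration, the four criteria amount to a conjunction test over features extracted from the EEG signal. Here is a toy sketch of that logic; all feature names, the feature values, and the 0.8 Granger cutoff are invented for the example, not taken from Dehaene's actual pipeline:

```python
# Hypothetical sketch: classify a trial as a "conscious perception" event
# by checking all four fingerprint criteria at once. In a real pipeline,
# these booleans/values would come from ERP component detection,
# time-frequency analysis, and Granger causality estimation.

def is_conscious_trial(features: dict) -> bool:
    """Return True only if all four fingerprint criteria are met."""
    return (
        features["p3_global_ignition"]           # 1) global ignition during P3 (parietal + prefrontal)
        and features["canonical_wave_sequence"]  # 2) P1-N1-N2-P3a-P3b timeline with late re-activation
        and features["late_high_freq_burst"]     # 3) late, abrupt increase in high-frequency power
        and features["granger_causality"] > 0.8  # 4) high long-distance Granger causality (cutoff invented)
    )

# A masked (unconscious) trial typically loses criteria 1, 3 and 4:
unconscious = {
    "p3_global_ignition": False,
    "canonical_wave_sequence": False,
    "late_high_freq_burst": False,
    "granger_causality": 0.2,
}
conscious = {
    "p3_global_ignition": True,
    "canonical_wave_sequence": True,
    "late_high_freq_burst": True,
    "granger_causality": 0.9,
}
print(is_conscious_trial(unconscious))  # False
print(is_conscious_trial(conscious))    # True
```

The conjunction is the point: any single criterion can misfire, but the combined signature is what makes the fingerprint specific.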

Expand full comment

In order to have a proven EEG signature, you presumably need a known positive and known negative to test against; i.e. you need a person you know is conscious and one you know isn't conscious, and then you can check that the former has the EEG signature and the latter doesn't.

What's the known negative here? That is, what or who is the person we know to not be conscious, so that we can verify that the EEG signature is absent?

If there's no known negative, you are just saying "this EEG signature always occurs in all awake humans," which is hardly convincing as evidence for consciousness of sleeping people or babies or whatever.

You mention "unconscious perception", so presumably the known negative is some type of unconscious perception? But when a person perceives something subconsciously, they are still awake, and hence still consciously perceiving other things. So what does it mean to scan the brainwaves of the unconscious perception but not the conscious one?

Expand full comment
May 16, 2022·edited May 16, 2022

Yes, the positive and negative cases are conscious and unconscious perceptions (in the same test subjects, but also across different test subjects). So the two cases are discriminated by whether the test subject says "I have seen the word 'range'" or "What word?"

But you bring up an interesting question: in what sense is being awake the same as being conscious? Essentially, Dehaene does not really study "being" conscious at all, not in the sense of an ongoing steady-state process. Instead, he studies "having a conscious perception": a discrete, clear-cut event that lasts about 500ms and has a very specific neural signature. It turns out that test subjects can report on the perception if this event happens, and can't report if it does not.

I confess that I am a bit guilty of abusing terminology when I say "someone is conscious" in the review. A more correct phrase would be "someone has conscious events". I don't know how often a normal awake person has this type of conscious event, but there can be some breaks in between. (At least a few hundred ms; but I am not sure how long the breaks can get. One second? 5 seconds? More?)

Expand full comment

I don't understand what it means to have a conscious event. I consciously notice things every single second, do I not?

If you show me some blurry screen and I say "what word?", well, I at least consciously observed the *screen*, right? So I consciously observed *something*. So what you are measuring is not consciousness so much as "answering yes to the question"-ness.

I remain incredibly doubtful that this supposed brainwave signature is at all related to conscious experience. Equally likely it signals the thought "this experiment is stupid", or something along these lines; it's something that's more likely to be thought when you see the word than when you don't, granted, but in all cases you see the *screen*, so in all cases you are conscious.

Expand full comment

"I don't understand what it means to have a conscious event. I consciously notice things every single second, do I not?"

I am honestly not sure whether this is true or not. It feels a bit like it, but there are so many weird things with our introspection that it could also just be that the brain fills in gaps with plausible speculations whenever we try to remember something.

With the alternative that the signature measures "answering yes to the question"-ness: I think a more sophisticated phrasing of that is indeed the strongest possible way to criticize Dehaene's interpretation. The standard experiments typically do not just require conscious perception, but also "conscious access": you are supposed to do something, based on your perception (e.g., decide whether to press the left or the right button). That could be different from "conscious perception" only. This paper explains the criticism in more detail:

https://www.nyu.edu/gsas/dept/philo/faculty/block/papers/consc.BRAIN.8.pdf

It's not easy to disentangle conscious perception from conscious access, but the paper above suggests some experiments for this, where you are not told in advance what to do with your observation. But I don't know whether this has actually been done, or what the results look like.

Expand full comment

1. Minor error: the city of Glarus has a population of 12k, but the canton of Glarus has a population of 40k. All the pictures look awesome and I want to go there. Maybe split an Airbnb with somebody. I have nothing better to do.

2. I just did a self-experiment with binocular rivalry. I held my phone up to one eye as close as possible while looking at my desktop monitor with the other eye. There was no alternation. I saw both complete images simultaneously, blended together without obscuring each other. I can read text on the screen with my left eye at the same time as I consciously notice which apps are moving around in my right eye. It sounds like this is far from the normal result, but I've never been diagnosed with autism.

Expand full comment

Very strong start to the book review contest! I've decided to take notes this time around. I'll just plop those in below, might edit into a slightly more directed form later, after I've had a chance to read through others' comments.

I don't think I experience binocular rivalry as described? Curious on link with autism, don't know if I am or not, never cared.

Procedural vs Episodic memory: I seem to be vastly better at the former than the latter. Mental quirk? Affected by diet?

On response time and consciousness: I not infrequently respond in seemingly unconscious ways (jumps, starts, vocalizations, etc.) with an odd delay (sometimes even over a second)

> Some cool experiments show that when we are shown a surprising image, the time we believe the image to appear is 300ms after it actually appears. But if we can predict the image, there is no such delay, and we perceive the timing correctly.

When watching people play games, if they very quickly open and close a menu (to check a number, say), I typically can't perceive the number even as existing, unless I am watching intently for it. This gives a plausible and interesting explanation.

Musing: what we decide, we get more of.

>since we never "observe" different parts of our mind to be incoherent or even independent

Hmm. I agree, given "observe" as "have a conscious perception" with "conscious" defined as it's being used here. Otherwise, I strongly disagree, at least for my personal experience. I regularly notice my own internal disparate nature.

Expand full comment

I thought this post was quite excellent

Expand full comment

If you want to see what something being visible for only a few tens of milliseconds looks like, Paul Christiano made an anagram-solving game with an option to have the letters only show up for a certain amount of time. You can set it as low as 0.02s:

https://paulfchristiano.github.io/anagrams/

Depending on the monitor, they'll generally show up for a single frame. Makes for a fun challenge.

(of course, this is different from masking experiments since it all takes place on a white background, but still, gives a sense of the time scales involved)
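For a rough sense of why a 0.02 s setting amounts to "a single frame" on most displays, here is some throwaway arithmetic (the refresh rates are generic monitor values, nothing specific to the linked game):

```python
# How does the game's 0.02 s (20 ms) minimum compare to one display frame?
# A display can only show whole frames, so ~1.2 frames in practice means
# the letters are visible for a single frame (or occasionally two).

SETTING_MS = 20.0  # the 0.02 s minimum mentioned above

for hz in (60, 120, 144):
    frame_ms = 1000.0 / hz
    print(f"{hz} Hz: one frame = {frame_ms:.1f} ms, "
          f"20 ms = {SETTING_MS / frame_ms:.1f} frames")
```

At 60 Hz a frame lasts about 16.7 ms, so 20 ms is just barely more than one frame; only on high-refresh monitors does the stimulus span several frames.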

It's a great game in general - oddly rewarding.

Expand full comment
May 16, 2022·edited May 16, 2022

I am seeing a lot of people arguing that there is no Hard Problem of Consciousness. In this TED talk I will attempt to convey why there is such a problem, and what properties a valid solution to it would need to have by analogy with another Capital-H-Hard Problem.

Specifically, I claim that the Hard Problem of Consciousness is "Hard" in precisely the same way as what I will call the Hard Problem of Metaphysics, that being "Why is there something rather than nothing?".

Suppose that we perfected the Standard Model of particle physics, i.e. we found a set of equations that predict all natural phenomena to arbitrary levels of precision. Someone might then ask the question "Why is there something rather than nothing?" and receive the response "Because of this set of equations". I claim that this response does not answer the question at all, and the hypothetical person delivering the response has made a category error.

The equations of the final draft of the Standard Model might describe the behavior of matter and its interaction with spacetime perfectly. They would not in any way address why matter and the spacetime containing it exist at all.

I am not aware of any candidate solutions to the Hard Problem of Consciousness, but I am aware of one valid solution to the Hard Problem of Metaphysics, that being the Mathematical Universe Hypothesis.

The MUH actually answers the question. There is something rather than nothing because Mathematical Platonism is true and the universe we perceive is precisely a mathematical structure. Why is there Mathematical Platonism rather than no Mathematical Platonism? Because the absence of Mathematical Platonism is a logical impossibility; mathematical truths are logically necessary.

Proof: The statement "2+2=4" is not logically necessary. However, the statement "Given the axioms and definitions of number theory, it follows that 2+2=4" is. It is as unavoidably true as any syllogism. All mathematical truths are statements of the second type.
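In proof-assistant terms, that second kind of statement is exactly what gets machine-checked. For instance, in Lean (any recent version), relative to its encoding of the natural numbers:

```lean
-- Within the axioms/definitions of arithmetic as encoded in Lean's
-- type theory, "2 + 2 = 4" holds by definitional computation.
example : 2 + 2 = 4 := rfl
```

The proof `rfl` works precisely because the claim is conditional on the definitions in scope, not an unconditional fact about the world.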

The equations of the Grand Unified Theory solve the Easy Problem of Physics. The Mathematical Universe Hypothesis, if true, solves the Hard Problem of Metaphysics.

Appendix A of this review addresses the Easy Problem of Consciousness. It does not interact with the Hard Problem.

Expand full comment

Why must the axioms of mathematics be true, though? And that's even before going into the weeds of incompleteness and stuff like the axiom of choice.

Expand full comment
May 16, 2022·edited May 16, 2022

The axioms don't need to be true, any more than it needs to be true that "all men are penguins" for it to be necessarily true that "if Socrates is a man, and all men are penguins, then Socrates is a penguin."

I don't think that incompleteness matters, since even the necessary truth of statements within an incomplete but consistent formal system (although actually, we could generate truths about infinitely many formal systems) is enough for us to have successfully done a Mathematical Platonism.

And then of course the axiom of choice needn't be taken to be true or false for statements like "given the axiom of choice, the following holds" to be true.

Expand full comment
May 16, 2022·edited May 16, 2022

>The axioms don't need to be true

Therefore MUH simply moves the question of why there must be something rather than nothing one level further.

In fact, the existence of anything is unexplainable. There are two alternatives: either there originally was an uncaused cause, or an infinite sequence of causes has occurred up to that moment. Both are obvious logical absurdities.

Expand full comment
May 16, 2022·edited May 16, 2022

That's totally fair.

I suppose my position is that deductive logic is valid/exists without cause, and Mathematical Platonism follows from there. Obviously this is not something I can prove, as the act of doing so would assume the conclusion.

I'm personally willing to believe it as a matter of religion, however.

Incidentally, the question of whether the MUH is true then depends on the Hard Problem of Consciousness. I.e. we need to know whether or not abstract mathematical structures can be conscious (whatever that means).

Expand full comment

Well, I'll grant that the cosmological argument isn't worse than any other attempt to resolve this, but this only gives you deism. I haven't seen a reasonable case for any particular religion beyond that.

Expand full comment

There's nothing in formal logic that prevents infinite sequences.

Expand full comment

"Because the absence of Mathematical Platonism is a logical impossibility; mathematical truths are logically necessary."

That's a confusion of levels. The truths of mathematics don't assert the ontological existence of any mathematical entities; that would be a metamathematical claim.

Expand full comment

I suppose I'm defining Mathematical Platonism as the doctrine that mathematical truths are valid in a sense that is independent of conscious observers and indeed the physical universe.

You seem to be making a distinction between mathematical truths and mathematical entities?

So for example, we can conceive of various truths related to the number/entity pi, but that is distinct from saying that pi "exists" in some Platonic sense?

I think that if the truths hold, then the truths automatically have an overall structure to them that I conceive of as being the essence of pi, but it's possible I'm playing games with semantics. :)

Expand full comment

Well, as far as I understand it, Tegmark's Level IV Mathematical Universe is *already* metamathematical, including other "universes" with sets of axioms other than the usual ZFC?

https://space.mit.edu/home/tegmark/crazy.html

I guess one could try to go one meta level even above that... but would that even make any sense?

Expand full comment

I'm not defining metamathematical as alternative axiom sets. No axiom can prove something actually exists.

Expand full comment
May 22, 2022·edited May 22, 2022

Not even in a mathematical universe where everything that can exist, does ? (Though I guess it's a moot point then...)

Expand full comment
May 16, 2022·edited May 16, 2022

Chalmers once joked that Dennett might be a p-zombie himself. When I read works by the "explain away consciousness" crowd, I get the feeling that they perhaps don't really grok what people like Chalmers and Nagel mean by qualia or consciousness. Lots of the arguments talk past each other. Are some of us… zombies? And does that draw the line between those who recognize qualia and those who genuinely seem to be confused about what we're even talking about? (Kinda like that mental imagery study that showed some of us lack it completely, but never knew!)

Expand full comment

This is a critical mistake I keep seeing, which Chalmers himself commits in his more off-the-cuff remarks: p-zombies talk about qualia *exactly* as much as non-p-zombies do, otherwise there'd be a physical, causal difference in how that speech was produced that could be examined. The fact that you talk about qualia *cannot in any way* be caused by that subjective experience, whether or not you have it.

Compare a universe physically identical to ours, either with or without qualia: Zombie Chalmers is making the same arguments as Qualia Chalmers, as are Zombie Dennett and Qualia Dennett. It is supposedly just an astonishing coincidence that Zombie Chalmers' arguments are correct about a counterfactual universe he has no experience with, and that Qualia Chalmers was accordingly correct by similar luck.

Expand full comment
May 16, 2022·edited May 16, 2022

Great review, much appreciated. Now let me go off in a linguistic tangent :)

I find it quite confusing and unsettling how English seems to lack a general word for what the mind does, in the broadest sense. (Same goes for other languages as far as I know).

We have words for perceiving, thinking, giving attention, awareness, self-awareness, having feelings, emotions, memories, etc. And of course we have the word consciousness, which is hugely overloaded — this review makes a good point that the phenomenon it chooses to call "consciousness" is indeed a worthy candidate for the name. Then we have words like "know", which can refer to a broad range of mental events, but often has extraneous implications such as the cognition being correct or somehow justified.

I was kind of hoping that we could use "cognition" as such a general term, i.e. that we "cognize" a thought, a perception, a memory, a sensation, etc. But this article then goes and reserves the word "cognition" for the super narrow sense of abstract reasoning!

Then there is "qualia", but that is again super technical, specifically refers only to the subjective experience, and is hardly ever used outside of philosophical wrangling and thought experiments.

Is our experience of the mind's operation so scattered that we forgot to make a general verb for it?

Then again, people often take the word "mind" itself in all sorts of restricted senses, equating it with thought, and contrasting it with other types of cognitive events like emotions and intuitions. Looks like we don't have either a good verb *or* a good noun for the whole thing!

Expand full comment

But the whole thing is pretty vague. Does "mind" encompass both conscious and unconscious processes? If so, which of the unconscious processes do we include? All of them? Then "nervous system" would be good enough, but probably not what you had in... mind...

Expand full comment

Well, in actual practice, if I talk about having "cognized" something, for lack of a better word, it's probably because I was conscious of it :)

Expand full comment

What about "experiencing"?

Expand full comment

I've started to use "qualia" in the context of talking about colors: I find it kind of incredible that one of Newton's biggest discoveries, that colors aren't "real", has seemingly been largely forgotten (in the deep sense) by non-specialists?!

Expand full comment

Complete tangent, but what does the acronym "UNO" refer to with regards to the United Nations?

Expand full comment

A translation mistake, it should just be "UN".

The acronym in my native language (German) is UNO, and it is derived from the English term "United Nations Organization". Quite confusing. :-)

Expand full comment

I noticed one other potential translation issue, which is that in English, the word "bank" isn't associated with a bench for sitting at all - just the financial institution, and the side of a river. But otherwise, great review!

Expand full comment

Darn! Yes, this was also a translation issue. Next time I should get a native speaker for proofreading.

Expand full comment

Another one - Pawlow should be Pavlov :)

Expand full comment
May 16, 2022·edited May 16, 2022

I'm thoroughly unconvinced by this review/book's point of view about consciousness. Instead of talking about word definitions, I'd like to express my points of disagreement as empirical questions that I don't think the given paradigm answers:

- What *precisely and in general* (on the maths/physics level) causes some groupings of matter to "feel like they are something" or "feel sensations"? Many animals have different nervous systems, different neuron types for sensing different things (e.g. nociceptors), and so forth. There must be some structural, describable, measurable thing shared by humans and cows and maybe insects and probably fish, but not rocks or probably plants, and maybe some future AIs.

- Are there any systematic differences (on the maths/physics level) between pleasure and pain *within* a type of sentient being?

- Are there any systematic similarities (on the maths/physics level) between pleasure/pain experiences had by *different* types of sentient being? If I stub my toe, how similar is that feeling to the feeling when a tiger stubs his toe? Where does that similarity come from?

- Is there a way to measure anything like utility/hedonic units/whatever, across individuals and species of sentient beings? If not, what if anything can we construct as a "ground truth" for utilitarianism/consequentialism? If we can't use anything for that, what do we do with the obvious resulting problems? (e.g. tradeoffs between two vague types of "betterness", neither of which are actually measurable for some reason. *Moral Uncertainty* is kind of about this, I think.)

- Will QRI (https://www.qualiaresearchinstitute.org/) *ever* come out with good research? Seriously, they seem to be the only group thinking in these general-yet-reductionist (i.e. good) terms about consciousness (especially as laid out in Principia Qualia).

And yet, their newsletter and blog are heavy on speculations and trip reports, but light on brain scans and testable predictions (like "if you grow X cells in Y shape and then poke it, it is mathematically guaranteed to feel pain").

TLDR: knowing that neurons exist doesn't "solve" consciousness any more than ELIZA "solved" AI. We should expect a real explanation to be "meaty" and good, in the sense that it would pass smell tests like those from The Sequences. An example of this kind of preliminary explanation (which may or may not be true) in AI would be https://www.gwern.net/Scaling-hypothesis#blessings-of-scale .

Expand full comment

> The insulation just doesn't fit into the baby skull

That answers that question. It always bothered me: if we don't create new neurons after birth, why do our skulls grow? Making space for the myelination makes absolute sense!

Expand full comment

How does the theory about schizophrenia fit in with the autism-is-reverse-schizophrenia idea? (Not that I think that idea is bulletproof.)

I don't think I understand in any depth, but it kind of sounds like we'd expect a reverse schizophrenic to consciously notice something faster than average? Which seems to track with sensory sensitivity.

Expand full comment
May 18, 2022·edited May 18, 2022

Yes, some low-level signatures of autism/schizophrenia are opposed to each other. Overall, the picture is still confusing. From the review "Sensory perception in autism":

"First, individuals with autism often show faster detection of single details (targets) embedded in cluttered visual displays (that is, among distractors) and a relative insensitivity to the number of distractors in the display [31]. This visual search superiority in autism has been widely replicated [31-37] and extended as a promising early marker in toddlers through eye-tracking [38,39].

...

However, perplexingly, basic measures of visual sensitivity such as visual acuity [37,42], contrast discrimination [43,44], orientation processing, crowding [45,46] and flicker detection [47,48] have all been shown to be typical in autism."

https://docs.autismresearchcentre.com/papers/2017_Robertson_Sensory-perception-in-autism.pdf

(I hope the link is not paywalled, but it's hard to check from my university account.)

Speculating, that might suggest that the standard test of detecting a masked image should be easier for autistic people (while it's much harder for schizophrenics), but tests with contrasts would not be. But I don't know of first-hand experiments. If true, this would be a careful "Yes, probably, sometimes" to your prediction.

It would also fit the framework that Scott described in his book review on Surfing Uncertainty, that autistic people have too high confidence in their top-down predictions. The overly confident predictions should cause perceptions to propagate to the highest levels (i.e., reaching consciousness) all the time, even when not necessary.

https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/

Needless to say, all this is very speculative. And Dehaene does not address autism in his book at all.

Expand full comment

Great review. Point of contention: does the brain truly break everything down into discrete units, or does the left-brain break everything down into discrete units?

Expand full comment

It occurs to me in reading Appendix A that in the classic formulation due to Descartes ("cogito ergo sum"), the "I" is smuggled in grammatically. Perhaps writing in Latin, Descartes simply failed to notice that his supposed first-principles logic had introduced a first-person.

Expand full comment

This was a great book review. Very high information content, very clearly communicated, on a difficult and profound topic. Kudos!

Expand full comment

My question about the topic of qualia as touched on above revolves more or less around the following: is it substrate-specific or algorithm-specific? That is, if you use a biomimetic approach to building an AI, taking some liberties to avoid doing a 1:1 molecular simulation, just implementing the basic necessities, would it still have qualia, or could those be just a byproduct of how organic cognition works? Could you get a p-zombie still able to mimic organic behavior by not having it be comprised of entirely independent computational elements like neurons? Could the conscious subjective individual experience as we experience it be a coincidental 'luxury' organic systems get as an outcome of their general composition? I guess, to sum these up: does having qualia for a sentient mind require embodied cognition? Maybe I'm asking nonsensical questions however...

Expand full comment

I wonder why Scott still isn't gathering book review ratings at the end of each review rather than ... much later. Seems like the delayed approach not only annoys readers, but will introduce some kind of bias (most likely a recency bias for the last reviews in the series).

Expand full comment

> So perhaps we are conscious in dreams, and we are only cut off from outside perception. But it is too early to be certain.

I am so confused when people say things like this. How is it not completely tautologically obvious that we are conscious during dreams? If Dehaene's neural-activity criteria for consciousness don't happen to capture what is happening during dreams then, BY DEFINITION, he hasn't captured all of human consciousness. If we think we are conscious and a scientific experiment tells us we are wrong, then it has to be that the experiment is wrong, not us. As Searle says: when it comes to consciousness, the appearance IS the reality.

I might be grossly overgeneralising, but it seems to me that it is only people who deny the hard problem who say things like this. It's a really fascinating subject area where very intelligent, good-faith proponents on both sides seem to be talking past each other and have a whole different fundamental conception of what the problem is about.

Expand full comment

This was a fantastic read, and has done more than anything I've seen before to convince me that the hard problem of consciousness simply doesn't exist. Bravo!

Expand full comment
founding

Ahh, yes – questions like "Why am I 'me' and not someone else" and "Why is there something instead of nothing" are indeed exactly the kind of 'confusing mysteries' that I think can't be answered directly. I strongly suspect they need to be 'dissolved'.

But it's still not clear whether you're claiming that qualia are things that are "inherently unanswerable" – EVER – by any possible future neuroscientist. It sure _seems_ like you're claiming that.

I just can't understand how something could be NOT-independent of the brain but somehow also NOT observable at all, by "externally observed neurological processes". Are qualia some kind of non-physical thing? How do they 'interface' with the brain?

To me, the strongest evidence of the 'existence' of qualia is, besides introspection, communication. I'm very sure that both introspection and communication are neurological and, in principle, 'externally observable neurological processes'. It seems impossible for people to be able to communicate about qualia if they're not also, fundamentally, a neurological process too (and thus one that _could_ be 'externally observable').

I think the review describes the best 'dissolution of the mystery' of qualia – they're just a kind of special 'conscious memory'.

I can't figure out what _other_ 'framework' you or anyone else could possibly have in mind that provides testable predictions. It sure _seems_ like you all are claiming that qualia are 'magic', i.e. non-physical.

Expand full comment

When I try to distill the hard problem of consciousness, to argue that it is real, here's what I come up with. In theory, you could systematically replace every cell in my brain with a machine, large or small, that would receive and transmit signals to the other cells the same way the organic cell it replaced did.

As long as all of these machines operate in the same time scale relative to each other, they should be functionally equivalent to my brain. And it shouldn't matter what they're made out of. Maybe each one is the size of a swimming pool and rolls billiard balls around, passing some balls to other swimming pools nearby. Say these balls move around at 1 inch per hour. In theory, every aspect of how my brain works could be captured accurately by this system.

The question, then, is: when the swimming pool billiard system remembers a mistake it made 30 years ago, will it really feel the onset of crushing shame the way I feel it? As these trillions of balls all slowly roll at a maximum speed of 1 inch per hour? If you believe there is no hard problem of consciousness, I think your answer has to be yes.
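The functional-equivalence premise here can be sketched in code (a toy illustration, not anything from the book; the tiny network and its weights are made up): two implementations of the same update rule, one compact and one laboriously element-by-element, trace exactly the same state sequence, which is all "functionally equivalent to my brain" is claiming.

```python
# Toy illustration of substrate independence: the same abstract
# computation implemented two different ways yields identical
# state trajectories. The network and weights are arbitrary.

def step_fast(state, weights, threshold=1):
    # "Substrate A": compute every neuron's next value in one pass.
    return tuple(
        1 if sum(w * s for w, s in zip(row, state)) >= threshold else 0
        for row in weights
    )

def step_slow(state, weights, threshold=1):
    # "Substrate B": one neuron at a time, like billiard balls
    # rolling through swimming pools -- slower, same input/output map.
    new_state = []
    for row in weights:
        total = 0
        for w, s in zip(row, state):
            total += w * s
        new_state.append(1 if total >= threshold else 0)
    return tuple(new_state)

weights = [(0, 1, 1), (1, 0, 0), (1, 1, 0)]  # arbitrary 3-neuron net
state_a = state_b = (1, 0, 1)
for _ in range(10):
    state_a = step_fast(state_a, weights)
    state_b = step_slow(state_b, weights)
    assert state_a == state_b  # identical trajectories, step by step
```

The hard-problem question is whether anything over and above this shared trajectory (the felt shame) comes along for free.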

Expand full comment

> 1) "I have seen the word range", or 2) "What word? There was no word!?"

This isn't necessarily measuring consciousness; it's measuring memory? Unless I'm missing something, we could consciously see the word "range" but not commit it to memory, and thus be unable to report on it afterwards?

Expand full comment

Sorry to break the "420 Comments" -- too good.

But I just don't understand people who dismiss the "hard problem" and talk about the experience of red. That is not it at all. Haven't you ever felt lust? Or deep hunger? Or overwhelming pain? It FEELS like something. It isn't some academic "qualia" and it isn't a sense of "I" -- matter and energy have arranged themselves such that it freakin' FEELS like something. This is never-endingly amazing to me, no matter how many things like this I've read.

Expand full comment

Thanks for this review; it sounds like my kind of book. I have long favored an account of consciousness offered by William Powers in his 1973 book, Behavior: The Control of Perception. Unfortunately it's not a view that's easily stated, but Dehaene's view seems consistent with it. Powers, however, offered his account on the basis of an abstract theoretical model of the mind and had little to no empirical evidence for it, though he supplemented the model with informal observations. It sounds like Dehaene has plenty of empirical evidence.

BTW, Scott reviewed Powers's book back in 2017, https://slatestarcodex.com/2017/03/06/book-review-behavior-the-control-of-perception/, but had nothing to say about Powers's account of consciousness and, on the whole, gave the book a mixed review. I've got a recent post where I quote Powers on consciousness at some length, https://new-savanna.blogspot.com/2022/08/consciousness-reorganization-and.html.

Finally, I agree with you on the so-called hard problem of consciousness: There's nothing there.

Expand full comment