Well put. I meditate not so often, and by instinct, and I don't think I've hit jhanas (unless feeling particularly joyful and smiling widely during insight meditation counts), but I have no difficulty believing they're real, given the surprising and wonderful experiences I've already had in meditation. I'm surprised at how unbelievable people seem to find them.
On the “leaning into pain lessens the suffering of it” point, I did accidentally discover it outside of meditation and also found that applying it to mild anxiety has the same effect, really any discomfort. I think it’s something known amongst psychologists, because when I told my therapist about this weird way I reduced anxiousness, she knew exactly what I was talking about.
This is not helpful in clarifying what you meant XD
Mostly that I keep getting correlated with INTP on Myers-Briggs, who are introverts (and I used to be very shy, which traditionally seems to be a "girly" characteristic?), and also (in)famously very bad at analyzing/expressing their feelings...
This seems like one of those philosophical questions where it's probably possible to conjure up a clever counterexample if you think on it hard enough, but in general the proposition tends to hold. And in these kinds of cases, I guess my question is: how does finding a counterexample meaningfully change our understanding of the human experience?
Raymond Smullyan's dialogue "An Epistemological Nightmare" is a very well-done exploration of this question. It features an "experimental epistemologist" who uses a brain-reading machine to contradict a patient who claims that a book seems red to him. According to the epistemologist, the machine can tell, objectively, that the book doesn't seem red to the patient. It spirals hilariously outward from there.
That’s a great story! Very mid century analytic philosophy. It’s clearly engaging with the Ryle/Sellars discussion on “the myth of the given”, whether there is anything people can be infallible about. It has a good chunk of Smullyan’s own primary interests in logic. But it makes the mid-century move of assuming that epistemology and ethics are just totally separate things, and that there is no real “should” of belief.
I think the story is wrong. At least sometimes. For example: "The machine never claimed to be untrustworthy, it only claimed that the epistemologist would be better off not trusting it. And the machine was right." The machine didn't claim that the epistemologist would be better off not trusting it. The machine claimed that the epistemologist thinks he could save himself from insanity if he stopped trusting the machine. The machine doesn't think; it just displays psychological states.
I mean, it's kind of important, because the epistemologist was mistaken to believe that there was a paradox. It would go away if he assumed that not only the machine could be wrong, but he himself could be too. He thinks that it would be best not to trust his machine, but that doesn't mean the machine is untrustworthy; it doesn't even mean that it would be best not to trust it (though it probably would be). It means exactly this: "he thinks that it would be best to distrust the machine".
Logic can be a bitch sometimes, at least when applied to psychological states. When I was in love for the first time, I constantly thought that my head was going to explode. "It doesn't compute." Yeah... I managed to reduce the problem to a circular reasoning loop of three statements, and decided that the only way out was to stop pursuing my love. It does seem sad now. I wish I could go back in time and try my current mind on the same problem. Without the memory of the first try, of course, or it would be a different problem. I believe I'd crack it if I got the chance.
I love Raymond Smullyan. He is one of my favorite people - I just love the way his mind worked. His essays are insightful and often funny, and if you like mathematical and logical puzzles, his writings can keep you busy for years. One of my favorite pieces of his is this musing on free will: https://www.mit.edu/people/dpolicar/writing/prose/text/godTaoist.html
When schizophrenics describe thoughts being inserted into their minds are they lying or reporting their internal state faithfully? We label those as delusions (lying) since it can't be true. When someone genuinely believes in an internal state which is not believable, are they temporarily suffering schizophrenia? There seems to be no firm ground to stand on, in this discussion. Would diagnostic neurological data (sometime in the future) help disentangle this confusion? Clearly not according to this! Thanks for the cool share.
This feels related to the claim I once heard that “polar bears aren’t actually white, their fur is clear, and it just happens to appear white”.
Guess what? If something appears white, it’s white. If your clear fur happens to reflect photons across a balanced spectrum of wavelengths, then your fur is white.
I think the point is that it doesn't reflect photons, it scatters them.
And the fur only scatters light in this way when lots of transparent hairs are near each other, scattering the light repeatedly between themselves. If you looked at a single hair, it would look transparent.
Then I believe the fur is white, a single hair is transparent and the incorrect implicit assumption is that "an object which is composed of multiple smaller, identical (to each other) objects is the same color as its component objects".
But Robert said the bear's fur is clear, which is not the same as transparent.
"Transparent" describes an object that light passes through unaltered. "Clear", on the other hand, describes something lacking color. Something lacking color, e.g. water, can refract and reflect light, thus changing the appearance of the object.
A clear case of needing to sharpen the terms we use. Does a red ball (one we all agree is red under normal lighting conditions) become a black ball in the dark, or is it a red ball that appears black? It quickly becomes obvious that both ways of talking have a point, and now you can talk more clearly by knowing which one you mean (apparent colour right now, or colour under typical lighting conditions).
Is this the experiment: to see if, with Scott providing the model (i.e. the Epistemologist), we all become miniature tyrant epistemologists? [In the tone of "The First Rule of Epistemology Club is We Don't Talk About Epistemology Club."]
Another way of arguing with the "polar bears aren't white" point: if you shave a polar bear, their skin is black. If you put something transparent over something black, it should look black. Given that polar bears don't look black, their fur is not transparent, polar bears look like the colour of their fur.
That's kind of true, in the way that clouds are white. The water droplets themselves are clear, but the way they scatter light makes them appear white.
It's not the "standard" wavelength absorbing/emitting property, but the way humans determine (guesstimate?) color still shows it as being that color.
That said, that's more of a semantics issue, and there's plenty of that.
Even with just color, there are things like how white/black/pink aren't colors, or why the sky is/isn't blue and the sun is/isn't yellow/white.
It does NOT seem to be related to the difference between one's conscious experience and what is likely to be the reality of the situation.
In these cases, it's a clash between one's conscious thought and what is happening to an outside observer. It's obvious that one's opinion need not match the objective truth, but here, even with things of a purely subjective nature, it still might not "match".
However, do we want to consider these "false beliefs"/"lies", or what?
I feel it would be easiest if we treat them the same as non-subjective items but drop the connotation of lying. You can hold the thought/idea that the moon is made of cheese without "lying", even if it doesn't match the outside objective view.
"Experience" can be complex and mean a lot of things. But if we're a bit more precise and say "qualia," then Scott's position is absolutely correct. You can't be wrong about your own qualia, because qualia just are the immediate experiences. You can lie about them, you can subsequently forget and misreport them in all kinds of complicated ways, but you can't be wrong about them in the moment.
So if someone says they see a 7D object on DMT, do you agree this must be true, or do you think that dimensionality of objects is not a quale? What if someone says they saw an impossible color (i.e. one as different from any color we know as green is from red) on DMT?
First, I'd like to factor out the issue of third-person reporting by assuming that I myself have taken the DMT. That way, concerns about language and the possible distortion of reporting can be put aside.
If I perceived a 7D object or a novel color, then, yes, it is true that I did. This seems perfectly coherent and possible, at least in principle. We know that 7D spaces are mathematically well-formed, so there's no contradiction or incoherence involved. It seems pretty clear, as a general matter, that our neurophysiology can enter into unusual states in which perception is greatly altered or extended. That's a common experience on psychedelics (unlike the more extreme examples here). So, yes.
Is there room for incorrect attribution? We can say "I had an experience that was like this and this and this" but to characterize it as "perceiving a 7D object" is a kind of assessment/appraisal that we may not be equipped to make.
Like the woman claiming enlightenment, the person on DMT is an expert on their own true experience to the extent they stick to their experience. At the point that they characterize it as "enlightenment" or "jhana" or "7D object perception", it seems like we need other tools to assess the truth of that claim.
I see that as part of the language/description problem that I'm trying to factor out. It's really not interesting to quibble about whether the words someone uses to describe their qualia are the right words. It doesn't get to the core question of whether someone can be wrong about their experience. (Wrong because they couldn't find quite the right words isn't the relevant kind of wrong.)
Take Scott's color example. I might have no idea what to call the novel color. I might just say "some novel color for which I have no name." That's perfectly fine.
If I say something as precise as "a 7D object," that would only be because I could count the dimensions, say, by repeatedly turning a rod at right angles. If I do that carefully with my DMT brain, and keep coming up with 7, then it's a 7D object! (Mind you, I find this example quite implausible, but that's irrelevant to Scott's question.)
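For what it's worth, the rod-turning procedure has a precise linear-algebra analogue: a space is at least n-dimensional exactly when it admits n mutually perpendicular directions. A toy sketch in plain Python, using the standard basis of R^7 as hypothetical "rod" directions (this is just the formal check, not a claim about what DMT perception could actually do):

```python
# The "turn a rod at right angles" test, formalized: counting mutually
# perpendicular directions. Two directions are perpendicular when their
# dot product is zero.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n = 7
# Hypothetical "rod" directions: the standard basis vectors of R^7.
rods = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

# Every distinct pair of rods is perpendicular, so the space holding
# them must have at least 7 dimensions.
perpendicular = all(
    dot(rods[i], rods[j]) == 0 for i in range(n) for j in range(i + 1, n)
)
print(perpendicular)  # True
```

So "keep coming up with 7" corresponds to finding seven pairwise-orthogonal directions and failing to find an eighth.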
What happens if you have a memory recorder? You watch your memory again the next day when sober. Then you notice that there were only 6 dimensions, but they moved around a lot. Is that another sense in which you can be wrong about your own experiences?
When you start to factor things, it seems like we can be wrong in a lot of different ways.
OK, then you really saw a 6D object, which works perfectly well for what Scott is trying to get at. It isn't that 7 is a magic number that you have to get right. It's "did I really see that weird-ass thing?" Answer: yes.
Oh that's interesting about trying to factor the language/description part out. I think I'm having trouble separating the language/description from the knowing/experiencing because we humans seem to know/experience a lot by telling ourselves stories about our experience. The story-telling (to oneself or others) is very often baked into and reinforcing of the experience.
The example "I see a color I haven't seen before" and it not mattering what we call the color makes sense to me. The story part of this seems to be that it's a color "that I haven't seen before" and assessing the truth of that statement will depend on the same kinds of ambiguities of memory, perception, and attribution that the other examples do -- is this color "new"? is this a "jhana" I'm experiencing? Am I "angry"? I don't really know how we get free of language/description even if we dispense with the specific word mattering -- it seems to me we're still dealing with the uncertainties of attribution, which may be mistaken in all kinds of ways.
If we ask the person who took the DMT a week later what they think about the novel color they saw, might that attribution change over time? "I'm not sure anymore that it was really a new color or just an idea my head cooked up while I was high."
The Buddhist idea of "no self" is coming up for me here, the idea being that we do experience a somewhat solid sense of self in everyday life, but on further examination, that self turns out to be much more provisional than we often credit. It's not that the self has no reality whatsoever and that the self we experience is a total lie -- but also it's not as "really real" as we often experience it to be. The stories we tell ourselves about our experience -- as we are experiencing it and later -- seem likewise provisional in all kinds of ways. It's good enough most of the time, but once we start to ask "is this thing I experienced really true?" I think the whole structure starts to break down.
We do have a variety of examples of this sort of thing, either people writing down their miraculous insights, or people making claims as to, eg, where things are (because they can see through walls or back in time, or whatever).
The actual validation after the fact is uniformly disappointing. The scribbled words are meaningless, the lost key remains lost...
If I were raised a certain way and were, say, a closeted gay, when I see a man who excites me, I might claim the experience as something like "a personal encounter with satan".
If I were raised a different way, still gay (but now happily married and faithful), and see a man who excites me, I might claim the experience as something like "evidence for the non-unitary nature of the self".
Same experience, very different framings (and everything else) around the experience.
Even in that case, might there not be two very different subjective experiences? In the initial moment perhaps things are the same, but the subsequent factors might very well have physiological impacts too. The sick-to-your-stomach feeling of shame feels different than coy delight. So much so that I'm not sure most people could actually separate the instigating experiences from the second-order experiences, let alone the interpretations placed upon them after the fact.
I guess I'll throw in my too-little-sleep experiences of seeing subtitles under people in real life. Never could get the text to outpace their speaking speed, though.
Were they accurate renditions of the things the people were saying? I've never heard of this before, but I wonder if it could be a new weird type of synaesthesia.
Yeah, they were accurate. I don't remember if they were being typed as people spoke, or if they appeared as a finished block of text like in movies, but I could only read the parts of the block they'd said already. Definitely some kind of sleep deprivation effect, it's happened a couple of times.
Are you capable of actually answering questions about 7D objects? For example, if I asked you about optimal sphere packing in 7D, or whether a knot could be untied, or whether one 6D shape could be rotated into a second 6D shape, would you be able to answer?
C'mon. Someone who's really bad at geometry, or just dumb, could see triangles and not be able to answer questions about them. This has nothing to do with whether they really saw triangles.
I'm very sceptical of the claim that our neurophysiology can enter a state capable of visualising, let alone perceiving, 7D objects. It sounds like a very fantastical claim to accept without solid evidence. Is it not much more likely that you are perceiving some exotic combination of lines or whatever, and are interpreting it as a 7D object, wrongly?
It's not worth over-indexing on the 7D example. I don't think the details of this example get at the real point. Even if we could do experiments that strongly suggested that someone really could visualize a 7D object on DMT, one could just turn around and say, "OK, well how about a 100D object?"
Do people in fact sometimes see 7D objects on DMT? I have no clue, and for the purpose of what Scott was discussing, it really does not matter.
The novel color example is much more plausible and illustrates the underlying point well.
> We know that 7D spaces are mathematically well-formed, so there's no contradiction or incoherence involved.
So what if someone told you they saw a square triangle? No fancy semantic or non-euclidean trickery, just a square triangle. Is *that* report wrong? (And if so, why would anything seen during the same trip, generated by the same process, be any more trustworthy?)
Yeah, that was my first reaction as well. Why would the "mathematically well-formed" property of a shape have any impact on whether or not your drug-fueled brain is perceiving that shape?
As I've said several times now, I really don't like the 7D example because it leads people down rabbit holes that aren't actually relevant to the underlying question.
The novel color example is much better and less distracting.
But to answer your question directly, the reason I said "mathematically well-formed" is simply to note that 7D objects really are a thing, as opposed to "square triangles", which is a nonsense phrase literally referring to nothing. I'm personally highly skeptical that DMT would actually let one see a 7D object but, once again, that's irrelevant to the underlying question.
If someone reported seeing a "square triangle," I would certainly conclude that the report was false, because we know that there can be no such object as a "square triangle" that one could possibly see.
But the question here has nothing to do with reports! That's why I began by saying "I'd like to factor out the issue of third-person reporting [so that] concerns about language and the possible distortion of reporting can be put aside."
If someone is bad at description, or giving bad reports for any other reason, that's simply not relevant to the question at hand.
Assume you took the DMT yourself and had weird experience X. The question is: did you really have experience X? Forget what you might say about it afterwards; that doesn't matter. Just: did you really have experience X?
What about someone describing optical illusions, like a devil's fork or an endless stair? Those are mathematically impossible, but my experience in seeing them is very real and doesn't even involve any mind-altering substances.
What about it? There's no problem here. You see what you see, as you see it. Whether it corresponds to something in the world isn't part of the question. (It's obvious that an experience can correspond to nothing in the world. The mention of DMT was supposed to prompt that understanding.)
I've spent some time thinking about this and now I can visualise a square triangle.
Try this: Visualise a wireframe square in three dimensions. Keep the bottom edge fixed, and rotate the top edge ninety degrees clockwise around the Y axis, so you're looking down along that edge and it turns into a point.* Now the shape appears to be an upward-pointing triangle. Spin the whole shape 45 degrees counterclockwise around the Y axis, and it appears to be a square (well, rectangle). Keep spinning, it becomes a downward pointing triangle, then a sideways bowtie shape, then back to upwards pointing triangle.
Now simultaneously perceive the shape from the original angle and 45 degree rotation, voila, a square triangle.
* Sunnyboy Iceblock shape for those who grew up in Australia
I would argue that perceiving a 7D object is an interpretation of some experience. It is a story you told yourself about some collection of visual impressions. You did not perceive a 7D object, because such a thing does not exist in 3D space.
In the case of a 7D object, that could well be. But that simply speaks to why the 7D object is not a good example for getting at the underlying question, which isn't about the stories one tells oneself in language, but about the experience itself.
The novel color is a better example. I see no reason to suppose that one could not see a novel color when on DMT.
> But on the weekends I often experimented with drugs. I recall vividly one episode in which a magical color appeared to me. I had been taught, as a child, that there were seven colors in the spectrum, including indigo. (Newton had chosen these, somewhat arbitrarily, by analogy with the seven notes of the musical scale.) But few people agree on what “indigo” is.
>
> I had long wanted to see “true” indigo, and thought that drugs might be the way to do this. So one sunny Saturday in 1964 I developed a pharmacologic launchpad consisting of a base of amphetamine (for general arousal), LSD (for hallucinogenic intensity), and a touch of cannabis (for a little added delirium). About twenty minutes after taking this, I faced a white wall and exclaimed, “I want to see indigo now—now!”
>
> And then, as if thrown by a giant paintbrush, there appeared a huge, trembling, pear-shaped blob of the purest indigo. Luminous, numinous, it filled me with rapture: it was the color of heaven, the color, I thought, that Giotto spent a lifetime trying to get but never achieved—never achieved, perhaps, because the color of heaven is not to be seen on earth. But it existed once, I thought—it was the color of the Paleozoic sea, the color the ocean used to be. I leaned toward it in a sort of ecstasy. And then it suddenly disappeared, leaving me with an overwhelming sense of loss and sadness that it had been snatched away. But I consoled myself: yes, indigo exists, and it can be conjured up in the brain.
>
> For months afterward, I searched for indigo. I turned over little stones and rocks near my house. I looked at specimens of azurite in the natural-history museum—but even that was infinitely far from the color I had seen. And then, in 1965, when I had moved to New York, I went to a concert at the Metropolitan Museum of Art. In the first half, a Monteverdi piece was performed, and I was transported. I had taken no drugs, but I felt a glorious river of music, hundreds of years long, flowing from Monteverdi’s mind into my own. In this ecstatic mood, I wandered out during the intermission and looked at the objects on display in the Egyptian galleries—lapis-lazuli amulets, jewelry, and so forth—and I was enchanted to see glints of indigo. I thought, Thank God, it really exists!
>
> During the second half of the concert, I got a bit bored and restless, but I consoled myself, knowing that I could go out and take a “sip” of indigo afterward. It would be there, waiting for me. But, when I went out to look at the gallery after the concert was finished, I could see only blue and purple and mauve and puce—no indigo. That was forty-seven years ago, and I have never seen indigo again.
It's hard to say what he meant exactly by "indigo":
1.) Could have just been a regular color, the magicalness of which had been enhanced by his altered state (see other reports of "heightened" senses, "vibrant" colors...),
and/or:
2.) a color with more lightness than physically possible
3.) a color with more chroma than physically possible
Note that colors are very complex, and changes to lightness, chroma, and hue are often nonlinear, so they tend to be, and historically have been, confused,
and the words we tend to use for them often confuse matters. You have several examples in this very discussion of how "white, black, pink" are "not colors", even though white and black are definitely colors as visual qualia, and pink does correspond to a hue.
4.) finally, it *might* be possible (but IMHO unlikely) that he saw a completely new (to him) hue
I'm not sure how it's even possible to imagine something like 4.) if we haven't experienced it, like if we were colorblind. And even the lack of color qualia, as in red/green colorblindness, might be hard to imagine *correctly*, since it's probably not just the absence of the "red" or "green" quale, but the difficulty of discriminating between them?
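On the "pink does correspond to a hue" point, this is easy to check numerically: in HSV terms, pink is just a red-family hue at reduced saturation. A quick sketch with Python's standard colorsys module (the RGB triple for "pink" below is an assumed typical pastel value, not a canonical definition):

```python
import colorsys

# A typical pastel pink, as RGB fractions in [0, 1] (assumed example values).
r, g, b = 1.0, 0.75, 0.8

h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Hue lies on a 0-1 wheel; ~0.97 sits in the red/magenta region, so pink
# differs from red mainly in saturation (0.25 here), not in hue.
print(round(h, 2), round(s, 2), round(v, 2))  # 0.97 0.25 1.0
```

The same decomposition is why "a color with more chroma than physically possible" is at least a coherent description: chroma/saturation is a separate axis from hue.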
Always feel vaguely embarrassed to admit to it because I’m a weird guy already but I had a strange/religious experience where I saw what I perceived to be God. Not trying to convert anyone but sharing because it is of interest to poke at.
I’ll strip out the personal stuff and just relate how it felt phenomenologically.
I was having an early morning jog and suddenly my dead grandfather was walking next to me. I was twenty-six years old. Somehow my grandfather walking and my jogging didn't cause us to become separated. I didn't have any emotional reaction to him being there. As soon as I noticed that we were going different speeds but not moving apart, it felt like space sort of unzippered or unfolded or what have you. Like when you are unrolling a carpet and you can see it spin around an axis, except it kept shifting 90 degrees. I don't know exactly how many times it did that, but I do sometimes wonder if I experienced it because it seems like the sort of thing you should experience when you're having that kind of moment, and my mind just sort of filled in the details.
There’s more to it but however interesting it is to me personally it’s pretty standard to what I’ve read after this happened (I also sometimes wonder if things I’ve seen in popular media played a part in my experience). Seems very standard but I sort of felt like I was in a “world” outside time and space but it was more like I was in the “place” that ideas or forms or qualia come from and that I was there as the idea of myself.
I’d been having a very hard time at that time in my life and felt immediately better although I still felt quite bad and immediately found a therapist to start seeing because seemed like the thing to do. Never happened again.
So I guess I experienced what I think n dimensional space feels like. But it’s not like I could immediately solve any higher dimensional geometry problems because I felt like I experienced something once. I guess your brains world model can just divorce itself from reality sometimes.
Don't feel embarrassed about it! The standard model of reality is about keeping everything neat and tidy in its own little box.
Sometimes things don't, can't, or shouldn't fit. God, a spirit, hallucination, or just you breaking yourself out of a bad cycle. Whatever it was, keep moving forward and appreciate the bit of magical headroom.
It did cause me to question a lot of what I believed, as I was an atheist/empiricist prior to the experience, and I guess now I'd maybe describe myself as a panpsychist, but even that isn't quite correct. I have had dream states where things felt unreal, but that was the first time the reality dial went above the way being awake feels. It sounds like privileging a hallucination, and I do question it quite a lot, but I have trouble honestly dismissing it.
[epistemic status: Afternoons spent pondering higher dimensional spaces instead of doing my discrete math homework]
True and false do not apply. The standard for whether you see something is:
Does it minimize prediction error? Can you predict what comes next?
Often that's just: will it prevent you from bumping into things?
Human vision creates 1-d objects/lines and 2-d mental objects/surfaces from processing ~100 million rods+cones. But each individual receptor is 0-d. 1d perception is stacked 0d. 2d is stacked 1d. 3d vision is a bonkers way of talking about depth perception.
Depth perception is just heuristics applied to your 2d surfaces and their relations generating useful metadata (for not bumping into things).
If you can visualize an onion, mentally spin it around, pan a mental camera, flip it, you are still only animating a set of 2d frames. The picture "has" depth, but that depth perception is just metadata.
A truer form of 3d visualization of an onion would be being able to see all the layers at once: how each fibre connects to the next in 3 perpendicular planes. If you cut onions a lot, looked deeply within, and practiced generating the associated memories on demand extremely quickly and accurately, you might claim that you can visualize an onion in 3d.
Higher than 0d vision is all about filling-in-the-blanks and using heuristics to minimize prediction error. This applies to dimensions beyond 2d as well.
It gets very difficult to extend "vision" to more than 3d.
Because each dimension multiplies the maximum potential info of a mental object. At some point the objects become so complex, that when we grasp the fifth sub-component of it, we've already forgotten the first four. To claim that you "see" something higher-dimensional, you would need to solve that problem.
The trick is to study something so deeply that you become extremely familiar with it. Something that's not too complicated. Start using discrete coordinate systems.
Learn how to visualize the 5-prime field in 3 dimensions. Only 5^3 = 125 points to consider.
Mentally project it into space and walk around it, make yourself see all the diagonals.
It's like a see-thru box.
Now for adding a fourth dimension:
You cannot mentally project it into 3d-space anymore. Not directly.
But you can still project any 3d-subspace. 4 subspaces with 125 points each.
And you CAN project all those subspace into 3d space next to each other.
(visualize them as 4 different "boxes")
Do this super fast, you can say that you're seeing it in 4d, even if a line (or some other shape) passes thru 4 dimensions at once. Use a bigger prime-field and you can simulate something closer to continuous lines and circles. If you mastered "seeing" 4 dimensions, you can extend to 5d in the same vein. Visualize a 5d space as 5 different 4d spaces. So 5 rows of 4 boxes.
It's a headache, but with enough coffee and sheer bloody mindedness... I'll let someone else volunteer. The standard for "seeing" is the same as for normal vision. Does it minimize prediction error? If you can quickly answer questions/navigate that space, then we may agree that you "see" it. If you cannot fluidly play TicTacToe (or chess) in it, you don't really see the space.
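The scheme above (a 4D grid over the 5-element field, viewed through its four 3D coordinate subspaces, the four "boxes") can be sketched as a toy enumeration. This only counts points; it makes no claim about the visualization itself:

```python
from itertools import product

P = 5    # size of the prime field
DIM = 4  # total dimensions

# All 5^4 = 625 points of the 4D grid over GF(5).
points = list(product(range(P), repeat=DIM))

# The four 3D coordinate subspaces: drop one axis at a time.
# Each projection is one of the 5^3 = 125-point "boxes" described above.
boxes = [
    {tuple(c for i, c in enumerate(pt) if i != axis) for pt in points}
    for axis in range(DIM)
]

print(len(points), [len(b) for b in boxes])  # 625 [125, 125, 125, 125]
```

The same pattern extends the next step up: a 5D grid has five 4D coordinate subspaces, each of which decomposes into four boxes, hence "5 rows of 4 boxes".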
Now, Emilsson said that he could see something complicated... He helpfully provided diagrams. Seems like he can usefully apply the concept of dimensionality to an object that he's extremely familiar with. Eh... sure, why not. Many objects can be usefully compressed/abstracted into orthogonal dimensions, allowing for reasonably compressive abstraction in that way. Does not sound too crazy.
Much like the "JOY=LOVE*JUSTICE" person experiences the emotion of profundity in a sort of inappropriate, contentless, drug-induced way, the 7d guy is experiencing "galaxy-brain intellectual euphoria" from drugs, probably because they induce trippy visuals so he's going to think about that stuff. But no, he didn't have a 7d quale.
Disagree, but I think we're going to quibble about what 'wrong' means.
I don't think you can say someone is 'wrong' about something unless they have formed a specific conscious belief about that thing.
The process of forming such a belief is very different from the process of experiencing a quale, and takes much longer. There is plenty of room for error or intervention in that process, which could cause your specific conscious belief about what you experienced to differ from what you experienced.
True, people experience the qualia they experience, by definition. But experiencing a quale is not something that can be either 'correct' or 'wrong'; it's not a belief or a statement that can be evaluated in such a way. You need to actually form a belief before it can be evaluated for truth, and that's a whole other fallible process.
I think the thing Scott is trying to probe is just the part in the last paragraph, which you nicely summarize as "people experience the qualia they experience, by definition." I think that's exactly what many are balking at, and it's the essence of the point I'm emphasizing.
The fact that one can be wrong about *the beliefs one forms* about one's qualia, I take to be obvious, and not what the discussion is about.
I don't think anyone claims people don't experience their qualia. Wouldn't that just be disputing the definition of qualia? But for someone to discount your qualia, it needs to be reported to them. And if they don't believe your report, they can say "You're wrong about that." I think the question being debated is: can you honestly misreport your own experience, even to yourself? It definitely seems possible to me.
It's trivially obvious that people can misreport experiences, in that the mapping from experience to language is imperfect and fallible. This simply is not interesting.
But Scott makes it clear in his second paragraph, using the example of hunger, that the real issue here is not misreporting in language, but the experience itself: "if someone says they don’t feel hungry, maybe they’re telling the truth, and they don’t feel hungry. Or maybe they’re lying: saying they don’t feel hungry even though they know they really do (eg they’re fasting, and they want to impress their friends with how easy it is for them). But there isn’t some third option, where they honestly think they’re not experiencing hunger, but really they are."
"But there isn’t some third option, where they honestly think they’re not experiencing hunger, but really they are."
Sure there is. Their stomach hurts and they think it's nerves, but then they eat and realize they were just hungry. The feeling was there, but they misinterpreted it. This is pre-language, but a misinterpretation of the feeling. It doesn't mean the "feeling" was wrong somehow... I don't know what that would mean. But their conscious interpretation of the feeling was incorrect.
Scott discusses precisely this objection in the following paragraphs. TL;DR "hungry" is the name of the experience, not of the causal basis of the experience.
Indeed. Qualia are immediate experiences.... and yet many people have memories of their past where details have become altered, sometimes radically, from what factually happened. For instance many people 'remember' events from their early childhood - but it was other people who told them about the events.
In which case 'red' will be 'red' but it may prime many other associations for cognition, and some of those associations may be wrong or disproportionate.
This is all correct. Memory is complex and is far from infallible in relation to the external world.
BUT ALSO: What Scott was discussing has nothing to do with veridicality in relation to the external world. He was specifically discussing present internal states.
If you define qualia as the things people can’t be wrong about, then sure they can’t be wrong about them. But then they may well not exist! It isn’t trivial that there exists something you can’t be wrong about.
There are probably a bunch of predicate issues you need to get past about the nature and existence of qualia that are going to determine one’s answers to these types of questions. For instance, if I’m a behaviorist who simply defines hunger as a probability of engaging in eating behavior when presented with food, I’ll get an easy answer to that one hypothetical.
There’s no logical problem with the idea that one is in error about one’s own subjective experiences. Start with the harder case about being wrong about what one believes. The following two statements are logically compatible with each other:
1. X believes proposition p.
2. X believes that X does not believe proposition p.
This is the foundational observation of some of the literature on self-deception. One can believe something while believing that he does not believe it, because the objects of belief in statements 1 and 2 above are different kinds of things. The first relates to a proposition and the second relates to X’s mental state. And if the two statements are logically consistent with each other then there are possible worlds where both are true at the same time.
It’s all the easier for 3 and 4 to both be true:
3. X is experiencing sensation s.
4. X believes that X is not experiencing sensation s.
Unlike the first pair of statements, there isn’t even the appearance of any sort of logical contradiction there. So unless we’re going to go with synthetic a priori I think we have to admit the possibility of being wrong about one’s own qualitative experiences, whether it actually happens or not.
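The compatibility claim above can be made precise in standard doxastic-logic notation (a sketch, with $B_X$ the belief operator and the usual axiom names; this formalization is my gloss, not the commenter's):

```latex
% Pair (1,2): satisfiable so long as positive introspection fails.
% In KD (beliefs consistent and closed under logical consequence):
B_X\, p \;\wedge\; B_X \neg B_X\, p \quad \text{is satisfiable.}
% But adding introspection axiom 4, B_X q \rightarrow B_X B_X q,
% yields B_X B_X p alongside B_X \neg B_X p, hence B_X \bot,
% contradicting consistency (axiom D). So self-deception of this
% kind is coherent exactly when perfect introspection is denied.

% Pair (3,4): the first conjunct sits outside any belief operator,
% so no doxastic axiom even connects the two conjuncts:
s \;\wedge\; B_X \neg s \quad \text{is satisfiable in any normal doxastic logic}
% (beliefs need not be true: there is no truth axiom T for belief).
```

This matches the comment's point: the belief pair only looks paradoxical under an idealized introspection assumption, while the sensation pair never generates even an apparent clash.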
I don't know about this. I do see both pairs [1,2] and [3,4] as being contradictory. I think this boils down to the idea of a unitary conscious entity being an illusion. From a neuroscience perspective it seems more likely to me that there are multiple agentive entities within a single human brain conspiring to create an illusion of a single entity that has beliefs.
I have been reading your blog for about 8 years now. Of all the times you've pulled the double-'the' thing, I've never caught it on the first read. I feel like Charlie Brown going to kick the football.
nah, he didn't have a line break between the thes. Change the screen size you view it on, or zoom in to change the text size, and it won't always be on a line break. I miss them either way.
What about cases of Sartrean bad faith, or when people reappraise past experiences and claim that they didn't actually feel what they claimed they felt at the time? Aella made the claim that she's seen a lot of sex workers who "liked" doing the sex work while they were doing it, but later claimed that this was some sort of false consciousness and they actually hated it, they just didn't know it. Are they "lying", or is this two separate selves making two independent appraisals?
I intuitively would go with the two separate selves hypothesis. To make up an absurd example, someone who's never seen the sun might say moonlight is incredibly bright and believe it, but upon seeing the sun they might say "nevermind, my notion of brightness wasn't well calibrated and now I don't think I did experience brightness when I was under the moon".
This example is perhaps a bad analogy, since presumably a sex worker can have known what it is like to like something before or during their time as a sex worker, so it's not simply a matter of having uninformed priors / expectations / calibration.
An alternative hypothesis is that it's a defense mechanism: you believe you are happy because you don't see a way to improve the circumstances, and so might as well not suffer your own unhappiness on top of that (is this even remotely what is going on with Stockholm Syndrome?)
Anyway, I've done this thing (i.e., described my reactions as positive and much later realized they were actually not positive), and these are some of my hypotheses for what leads to this.
It's easy if you think of perspectives/attitudes as (unconscious) strategies that fit the current situation.
Take a similar example, dumpster diving. Is it gross and shameful to touch garbage, or awesome to find free food and stuff? Depends on a lot of factors (need, ecological attitude, getting over the initial hurdle), that can change over time.
Or the male refractory period. Am I lying when sex goes from the most to the least important thing in 30 seconds?
It's possible that they're "lying" in the form of blurring together two concepts. Last night I played videogames until 1 AM and at the time I felt like I was having fun. This morning I woke up at 7 AM and hated the fact that I hadn't gone to bed until 1. If I were a little sloppy with language, "I hate the fact that I stayed up until 1 AM playing videogames" might turn into "I hated staying up until 1 AM playing videogames", even though I'm not reassessing how much fun I had.
I think that type of experience isn't uncommon. An example: You go to your office holiday party and spend a few hours talking and smiling and laughing. You are so caught up in performing having fun that you may think to yourself in the moment "this is fun", but immediately upon leaving you're thinking about how boring the conversation was and how you resent having to attend such events.
Or: how many people deny to themselves that they have some type of sexual attraction, and then later realize that they had been deceiving themselves?
For the final happiness example, what would be the equivalent of asking the woman to tell the listeners when she next had a thought? I'm not sure there is one...
Being "wrong about one's experiences" is an ambiguous phrase.
It's possible for someone to be wrong in the sense that they *misclassify* their perception, so they can say "I'm not hungry" even though they're experiencing hunger. That also seems to apply to the person who says that he perceives a 7 dimensional figure when under psychedelics; he's misclassifying his perception.
Note that one comment brought up qualia, but qualia can't be magically communicated. You need to decide what categories the qualia fit into, and communicate information about those categories; this decision process can be wrong.
This is basically what I was thinking. There's a whole additional messy layer of language here. When I hear one commenter say "I've experienced jhana, and it's the most intense pleasure conceivable" and another commenter say "I've experienced jhana and it's nothing special", I'm deeply skeptical that they're actually describing the same thing. Not necessarily because either is lying, or even mistaken, but perhaps because they're using the same word in different ways.
Sure, but we have no way to analyze the thoughts of anybody outside ourselves without communicating in language. In the absence of a brain-swapping machine, if the words to describe something are ambiguous, then we have no way to determine if someone else's thoughts are ambiguous. We also have no way to communicate whether or not our thoughts are ambiguous to others.
Or maybe different mind just have different pleasures?
Plenty of people claim that sex is the greatest pleasure ever and OK, whatever, to quote Woody Allen, as meaningless experiences go it's one of the best. But I've been far happier with more cerebral achievements, whether constructing abstract things (a program, a book, ...) or understanding something that's taken twenty years to finally figure out.
This may be a liking vs wanting thing; the body wants sex, but the mind doesn't have to especially like it and may prefer to just get it out the way so it can get back to more mind-ly pleasures.
I don’t know, I am a pretty cerebral guy into a lot of pretty cerebral pleasures. Fucking playing video games that are just big databases where you run a car company, boring shit like that. Also some pretty awesome academic accomplishments at times, cool professional projects and achievements.
None of it is a patch on good sex, which is only really devalued by how comparatively easy it is to have. But on an absolute scale it absolutely trounces anything else (though I haven’t experimented much with hard drugs).
If someone asks me what a labradoodle is, I can not only talk about it, but point to one the next time we see one. The labradoodle exists outside us both, and we can both see it, and agree that we are looking at the same thing.
But no-one can show their mind to another. The words that we use to talk about our inner experiences point to places that no-one but the speaker can see. Given how different people are on the outside, I see no reason to think they are any less different on the inside. Following someone else's description of how they learned the jhanas may be like trying to find one's way round Berlin with a map of Paris.
I have not experienced anything like the jhanas, despite my customary method of meditation happening to be quite similar to what someone reported as producing them very easily.
You're absolutely correct that they're not describing the same things.
I checked it out, managed to verify that yes, indeed, First Jhana is a thing that exists, and also found out that depending on precisely what you're initiating the pleasure feedback loop on, the nice feelings can behave quite differently. To give one example among many, there are more and less sexual variants of the state that you can get to show up.
The various different pleasure-feedback-loop states *do* have prominent aspects in common (nice physical body feelings, high energy levels, muscle tension, intense focus needed or else it falls apart, the happiness is the "eeeeee!!" sort and not a contentment sort). This is what justifies lumping them together and denoting them as "First Jhana", but one could just as easily say "this phrase is uselessly vague" and start inventing a whole bunch of new terminology to classify the variants of the pleasure feedback-loop states.
And this isn't even getting into intensity! Apparently there are a bunch of different depths you can go to during First Jhana, and I've only been able to figure out the easy version that's about as good as a pan of fresh oven brownies. The "I was filled with luminous joy for an hour and it was 10x better than sex" person is describing First Jhana at a very, very different intensity level than the person (me) saying "I got a few full-body chills and random giggle fits and a prominent sort of lifting helium feeling for about two minutes and walked downstairs with a sex-like afterglow", and the overall situation is much like describing both a racecar and a go-kart as "car".
And before you ask, yes, this does cause a whole lot of arguments between Buddhists who are like "no, only this particular type/intensity counts as Jhana, the rest are just fake approximations"
So, yeah, "Jhana" is a real thing that can happen, but it's a pretty vague word, and the people describing it differently are indeed having very different experiences, and it's always a good idea to clarify exactly what experience people are claiming to have pulled off. But the elements of (nice physical body feelings, high energy levels, muscle tension, intense focus, "eeeee!!!"-like happiness instead of contentment happiness) do tend to be commonalities in the experiences.
Yeah, exactly this. If we call back to the predictive processing/perceptual control model, most of us (without pretty extreme meditation) do not experience pure sensory experience. We are experiencing (an interpretation of (an interpretation of (an interpretation of (... (sensory experience)...)))), with the nesting not being recursion but non-recursive processing layers, and some of the top layers possibly being extremely sophisticated concepts usually shared with others.
In this model there are then at least two distinct ways to be perceiving something "wrongly". First, where a higher layer in some sense overwrites the output of some lower layer in the opposite direction. For example, a fear output is clumped into some sort of more abstract "not afraid" stance, even though all other bodily fear and fight/flight mechanisms are firing. Claiming you are "not experiencing fear" in this state would be both true (evaluated at the top layer reporting into your consciousness) and false (evaluated on your overall state, including the outputs of lower layers that you are not conscious of in the moment). A longer-term, ossified form of this error is often referred to as Layering in postrat meditator circles. I can believe this might result in the anecdote about the "enlightened" woman, who merely refactored out the idea of thinking without refactoring out actual thinking.
And the second way, mentioned by others several times already, is a failure of communication of characterisations: communicating as if your characterisation is some shared concept when it is (unbeknownst to you) not accurate to the shared understanding of it.
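The first failure mode can be sketched as a toy two-layer model. Everything here is illustrative invention (layer names, the arousal threshold, the "self-image" override), not code from any real predictive-processing library:

```python
# Toy sketch of the "layering" error: a higher interpretive layer
# overwrites a lower layer's output, and the verbal report only
# ever sees the top layer.

def sensory_layer(arousal):
    """Lower layer: raw bodily signal, read as fear above a threshold."""
    return "fear" if arousal > 0.7 else "calm"

def conceptual_layer(lower_output, self_image=None):
    """Higher layer: an abstract stance that can overwrite the layer below."""
    if self_image == "not afraid":
        return "calm"  # overwrite: the lower output never reaches the report
    return lower_output

def report(arousal, self_image=None):
    """The conscious report reflects only the top layer's output."""
    lower = sensory_layer(arousal)
    return {"lower_layer": lower, "reported": conceptual_layer(lower, self_image)}

# With a strong "not afraid" self-image, the report says "calm" even though
# the lower layer is outputting "fear": true at the top layer, false about
# the overall state.
print(report(0.9, self_image="not afraid"))
```

The point of the sketch is just that "am I afraid?" can get two different honest answers depending on which layer's output you treat as the experience.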
> It's possible for someone to be wrong in the sense that they *misclassify* their perception, so they can say "I'm not hungry" even though they're experiencing hunger.
Furthermore, people can be actively *unsure* of whether their perceptions are what they think they might be—the example of hunger reminded me of someone the other day who reported not being sure whether they were hungry or whether their GERD was acting up. They associated the same experience with both causes, and the only available labels they had for the experience also carried information about the assumed cause. A less uncertain version of themselves could have chosen wrongly and thought or acted accordingly.
I feel like I am constantly wrong and or unsure about my own experiences, so I guess it depends on what you mean by 'wrong'. I also feel like other people in similar situations are very confident they understand their experiences, which could be true, but I am suspicious given my own experience.
When I think of being 'wrong' about an experience, I am not thinking that I am wrong in thinking I am upset when I am upset. I think I am often wrong in understanding why I am upset, and it is very easy to falsely attribute my feelings to some causal idea that I later doubt or strongly believe to have been wrong. This happens both with mood and with (maybe) simpler things like why my stomach hurts or why I have gas, etc etc. So my explanation for the woman example is that she felt good in a vague meditation related way, and she interpreted that as thoughtless enlightenment, because that was a readily available framework, then later learned that she interpreted her own internal state incorrectly, and something else was going on.
I got some sleep, and now I will try to clarify my thoughts here a bit. I think a core disconnect for me is that I do not think of myself as entirely the same thing as my body. My personal experience of consciousness feels limited. I can control my body, to an extent, but there is a lot I can't really do, like turning off my senses in response to a negative stimulus. Eventually my nose will 'get used to' a bad smell, but I seem to have zero conscious control over that, some shadowy corner of my brain that I can't touch is responsible for deciding when a bad smell can start being ignored. My, subconscious if you will.
My general model of this would be that my consciousness is a sort of learning/pattern matching thing that has significant but not total control and understanding of a more complex and larger organism that is the whole me. It feels like I get a lot of noisy signal from my subconscious systems, and as such, miscommunication is common. I would imagine that like most aspects of the human condition, some people get more noise and some get less, and on the other side, some are quicker/more confident in their diagnosis of the signal. I think being 'wrong' about experience is basically when the conscious self mistakes noise for signal from the less conscious bits.
When I read the original "I achieve jhana" bits, and then when I started to read this post, I thought about the Sam Harris example of "I am enlightened." I think it is the perfect example for all those jhana-ites out there. Thanks for citing it.
But I feel like you can fail to notice having thoughts. I don't know what it would mean to not notice that you're not really blissful.
With the Sam Harris example, I wonder whether she had some real spiritual experience that changed the quality of her thoughts somewhat, so that she didn't have the thing she recognized as thoughts, but was having something that the master could point out.
The lady in the Sam Harris example was doing most of the same thinking as everyone else, but somewhere between her having the thoughts and being able to talk about them, she was doing something different than normal. This could look like not forming short-term representations of thoughts, not forming memories of representations of thoughts, forming memories of thoughts but not trying to notice them, trying to notice memories of thoughts but not being able to, or recalling thoughts but discarding them due to conflict with another activated worldview. Or, more realistically, some of several of these at the same time.
The example lady seems clearly mistaken because we have a model of how this lady works, and "thoughts" are things in this model that do things like occupy short-term memory, or represent plans for later recall, and the model says the thoughts are still there doing their thing (or else she'd be doing a lot more shambling and drooling), there's just this noticing-and-reporting problem. But even absent such a model of the thing itself, if you suspect that the noticing-and-reporting system is behaving badly, that should be a tipoff.
Could you report that you were massively blissed out, but actually be mistaken in a similar sense to above? Well, you'd have to have mentally represented your state as being blissful even in the absence of bliss, or had accurate short-term representations but formed a long-term memory of bliss anyhow, or had accurate long-term memory but altered it when recalling it, or recalled the memory accurately but overwritten it due to conflict with another activated worldview.
I'm deeply curious what your experience treating addiction is like, as a psychologist. Because my gut reaction was that of course people can honestly be wrong about their own internal experiences, of course their own mind can lie to them; addicts in withdrawal experience this all the time.
From personal experience, when you're trying to quit a pack-a-day cigarette habit, your mind lies to you constantly. You would not believe how many excellent reasons there were, from bills to upcoming finals to job applications, for me to start smoking again. And it never felt irrational, it never felt like a "the the" moment; it always felt internally like the logical and rational thing to do, until I'd quit and relapsed a few times and got that feeling that, like, my logical brain was lying to me. I always conceptualized it as my inner will versus my logical mind, and every time I would get strong cravings, my logical mind would generate a ton of good, rational, sensible arguments for smoking again, and eventually rejecting them irrationally was the only thing that proved effective. I'm not even sure how I would characterize my internal experience at that time.
But, rather than go off my personal experience, I think the right place to look for further insight is addicts. You seem to be struggling with issues of how people deceive themselves about their own internal experiences, and addicts are a wide class of people with deep internal conflicts and struggles of self-perception that do not pattern match to mental illusions or "the the" tricks. And you've got excellent access to them.
Yes this. Also true of a person in the grips of an anxiety attack or a depressive episode. And of course to a lesser degree true of all of us all the time because our level of self-awareness rises and falls constantly and we are almost always telling ourselves stories of one kind or another.
Your rationalizations for why you should smoke were metacognition. The impulse to smoke and the brief instant of stress relief were simple cognition that exists and hence “true” (albeit at a cost of greater stress later.)
According to the reframing from my group therapist, working on replacing what an addiction provides is much more effective than focusing on how addiction will hurt us: in the case of stress, working alone or with a group to develop stress tolerance skills which would be much healthier, more rewarding, and more sustainable than cigarettes-- so addressing the original cognitions to make tackling them with metacognition easier.
Wow, I think discussing this untangled the thought for me suddenly. Cognition is always inherently “true” because it simply is while metacognition can be true or false, and language to represent metacognition can be true, false, accurate, inaccurate, etc.
I see a couple of cracks in the idea that people can't be wrong about their subjective experiences.
First, while I grant the "I didn't realize I was hungry" example of not being aware of their inner reality, it feels like there should be a distinction between being unaware of something and being unwilling to accept something. Example: suppose a child is clearly afraid of a dog, but is ashamed of being fearful. However if you ask them they say they aren't afraid and you cannot get them to admit it. Some children in that situation might be lying in the sense that they know that they are afraid, but they don't want you to know. However, I'm pretty sure that there would be other children who will not admit to themselves that they are afraid. Maybe they're repeating to themselves in their head "I'm not scared, I'm not scared, I'm not scared". It seems to me that this child is choosing to believe something factually false about their inner experience, in a way that is different than simply being unaware.
Second, vaguer objection. People have incorrect memories all the time, including of their own actions, attitudes, and experiences. Is there a reason there should be a time limit for that memory shift? Can someone remember their experiences of a few seconds ago incorrectly? I would assume so, given the evidence that we can misremember our recent actions or motivations. (Why did I come in to this room? Where did I set my keys?) If we can have those incorrect beliefs about such recent experiences, what is so different about the present?
I agree "the present" is not usefully distinguishable from the recent past. The time it takes to speak a sentence about "the present," and in which time the present has become the recent past, is on the same scale of time that it takes for a "what was I doing?" lapse of attention to occur. By the time I can report it, it is a memory and not present experience.
There's another aspect to this - when someone says any variant on "you are not feeling the way you claim", it has in my experience generally been about power. What they are saying is "I am so much more important than you that what I say about your internal state is true". Except not quite - sometimes it's "my immense superiority over you allows me to perceive you more clearly than you do yourself" with the speaker honestly believing in their immensely superior perception.
This is the kind of thing I expect to see from the kind of therapists who participate in forcible deprogramming - their job as they understand it is to make the defective patient behave a little bit more like acceptably normal people. It also sometimes comes up in situations presently labelled "rapey" - "you really ARE attracted to me, I know it" and variants.
It's also true that quite often outsiders can better predict someone's behaviour than that person can themselves. And the behaviours involved are generally considered to go along with internal state. Many parents predict correctly that their cranky child will be less cranky when fed, and summarize their insight with "the child is hungry" and even "the child is behaving this way because they are hungry". Ditto, sometimes, when the child in question is overdue for sleep.
Indeed, part of raising children may include teaching them to recognize situations where their body needs something, and correctly realize what is needed; saying "I'm hungry" or "I'm tired" is optional, but often comes along with acquiring this self knowledge.
But I still don't think it's OK to describe someone else's internal state in contradiction to their self description, if only because their state is quite likely to change to "angry at you", whatever it was previously.
I agree that it's usually about power. The issue with "never contradict a person's lived claims" is that while it seems like the more compassionate option no one is able to actually sustain it. It too breaks down into a power game where some people cannot be contradicted and others can.
I think the 'trick' Scott describes here is a pretty good way of viewing most examples of 'experience being wrong', mostly because it demonstrates that feeling a specific way often doesn't lead to effective outcomes. If you take some drug and report "wow! Time was moving so slowly!" then you, or even more likely, a listener who believes you, may think that taking this drug makes you more productive. I think it's fair to call a statement false, if there are many interpretations and implications of the statement that are not true, and by that criteria I'd say 'time feels like it moves slowly' is mostly false.
Trying to judge the plausibility of jhanas using this framework does lead to many obvious questions. 'is someone experiencing jhana visibly more active?' for example. There is no way you could get a satisfactory answer from just one external criteria, but combining many together... Maybe.
Just looking at people's testimonies, jhana does feel suspicious under this framework; for example, "experiencing jhana is better than sex, but I'm not addicted to it, or feel a strong desire to constantly do it". This implies that jhana is substantially different from a form of happiness like sex, which means that there may be other non-obvious ways in which sex!happiness and jhana!happiness differ.
>This implies that jhana is substantially different from a form of happiness like sex, which means that there may be other non-obvious ways in which sex!happiness and jhana!happiness differ.
I think we could find counterexamples where that conclusion doesn't follow. Good sex is very pleasurable, yet I believe there's a subset of asexuals who can derive pleasure from it, but don't actively pursue it, or find it particularly addicting.
There's a common meme where people (possibly most commonly teens?) joke they don't feel like taking a shower at first, then don't feel like getting out. Warm water does feel good and relaxing.
Somehow, repeating the experience of having a shower daily doesn't immediately self-correct the reluctance to get in.
I think separating the qualia of happiness (the various kinds of it, if you want) from the other parts of the experience that accompany the feeling is probably high yield.
It's not unreasonable to think it is some other part of the experience that causes the difference, not necessarily just sex!happiness being distinct from jhana!happiness (which may still be true)
One dimension to consider - claiming an absence of experience is different than claiming a positive experience. So if I say “I am experiencing an absence of thought” or “... absence of hunger”, it’s entirely possible that thought or hunger is going on in my mind but outside my field of awareness or cone of attention. However if I have a positive subjective experience of bliss, I don’t see how that qualia can be “mistaken”.
I think it’s also possible to lie to oneself after the fact. See: false memories. But this doesn’t really fit the specific scenario of sitting down and doing a repeatable thing that produces bliss; it’s more about long-term memory being mutable/unreliable.
Suppose someone is simultaneously receiving stimuli that should cause both bliss and anxiety. Perhaps too much caffeine at the same time as the other stimulus.
(This assumes contradictory feelings can exist, that anxiety is not a sort of anti-bliss)
Could this person's anxiety be outside of their cone of attention, in such a way that they can describe feeling blissful due to the other stimuli, but would report discomfort otherwise?
If anxiety is a bad example, also substitute with tinnitus (which is easy to forget, but annoying when noticed), back discomfort, or hunger.
We can still answer that this does not matter, because tautologically the overall end-result feeling is still what was faithfully reported. But the overall feeling may have components that are not reported faithfully, and that could be a useful way to talk about things like experiencing 7D objects.
Perhaps if we could conceive of snapshotting just the visual experience, without the other impairments and drunkenness, so that you could examine the visual qualia sober and at length, it would still look like intricate psychedelic geometry but not evoke confusing 7D descriptions.
Back to the bliss with filtered anxiety: if we could find neural correlates for two contradictory feelings, and show that a person's cone of attention is selectively filtering one of them, isn't it somewhat ambiguous at what level of perception the filter really sits? In my experience, the malaise or discomfort that goes unnoticed in the moment can be noticed in retrospect ("oh, I've been uncomfortable because of X for the last minute, and I'm only paying attention/noticing now!")
There may be a use in separating the "available" or "input" qualia vs. the noticed, or highlighted qualia. (And acknowledging again it doesn't exactly mean someone is wrong about their own experience. They're only reporting the noticed qualia, while I also want to call input qualia real and meaningful)
Possibly I'd be tempted to model Sam Harris' woman as having 'actually experienced the thoughts' (for the definition of experienced I'm using here), but not having noticed them.
Then, we could both say she's faithfully reporting what she noticed about her internal state, while still mistakenly overlooking qualia that she discarded as unimportant.
If someone has a good time while successfully ignoring back discomfort or tinnitus, that isn't exactly a memory issue, and it doesn't argue against the tautology, but can't we still usefully point at some other real internal 'feeling' that went unreported due to inattention?
I see no puzzle here at all. We are bombarded with all kinds of sensory experiences and can only focus our attention on very few at a time. If you are not used to attending to your hunger sensor, you will not report a hunger experience.
In addition, our brain/mind is not well modeled by a single agent, and the one answering the question may not be the one experiencing the qualia in question. I see it all the time in people who tend to dissociate severely, and I assume you have seen plenty of those in your practice.
There is a homunculus. And it itself has a homunculus inside of it. And that one too, and the regression is infinite, because it's actually two mirrors pointed at each other. This is a necessary feature of self-consciousness; what Kierkegaard would call spirit and Heidegger would call Dasein.
So, as long as you're a reflective spirit / Dasein, both sides of this argument are always, necessarily, true. You can always apply the tautological solution, since there's always a higher reflection, being tricked into immediately experiencing whatever it is by a lower reflection. Also, there's always some level at which a person is wrong about their experience, because there's always a lower reflection, doing the tricking.
I think this whole post is reaching for the concept of mediated versus immediate experience. Scott is insisting on the existence of immediate experience, at some level. The commenters are insisting that experience is always mediated by various reflective processes. I think both are right.
Thanks for this. This clarifies what seems to me to be my internal impression of the situation. It really does seem like a hall of mirrors, which is not actually infinite but just two mirrors that together create an illusion of infinity. I guess I should make a note to read about Kierkegaard's notion of spirit and Heidegger's notion of Dasein at some point.
Edit: after more reflection I think a better analogy is that it feels like just one mirror, a single reflective entity, which is sufficiently flexible to circle around into a cylinder and reflect upon itself.
That's a very interesting and IMO important insight into consciousness. I think that definitely having the qualities of being a quine, of self-reference, self-reflection, or however you'd like to label it, is a necessary process for what we call consciousness, and I think I understand it a little bit better now.
In my mental model of people, there are multiple different conceptualizations of "self". Think of it like the Freudian ego and id concept - the ego may legitimately believe that the person is not hungry, although the id would describe its current state as hungry.
This neatly solves the paradox - people have multiple states of being simultaneously, so it's entirely possible that they could be accurately reporting that they are not hungry one minute while they're most attuned to their ego self but then end up identifying that they actually were hungry when they are instead attuned to their id self.
It's a little woo, but I hope you can grasp the underlying concept - this paradox arises due to the simplifying assumptions made that people are coherent and singular, and a deeper evaluation of that assumption resolves the paradox.
Two things come to mind. Not saying they're counterexamples, just food for thought.
First, I've seen people (and had the experience of) being woken up, immediately insisting that they weren't asleep, then usually realizing that yes, they were asleep.
Second, often when I doubt someone's subjective report of their own state, it's not that I think they're wrong or lying, but more that I think they have some control over it when they act like it's something that happened to them. Like if someone is getting angry about something, I don't doubt that they're angry, but I do think that they could choose to calm themselves down and instead choose to stew in their anger. They aren't wrong about being angry, but they're wrong about whether (and to what degree) they're choosing to be angry.
A person first waking up making inaccurate reports is experiencing hypnopompia, and hence their observations about consciousness are expected to be less accurate than those of a person who is fully conscious. Technically this is important to note for data about meditation, as a meditator in hypnopompia (or hypnagogia) will be making less accurate reports about meditation than a meditator who is fully conscious, but this is really only a point that comes after concluding “qualitative reports on meditation can yield meaningful data.”
My ex would never be hungry but would experience the side effects of hunger: light-headedness, headache, irritability, etc. And I’d say, “Are you sure you aren’t just hungry? Let me get you a snack.” “You don’t know me! You think you know everything!” Hand them a snack… two minutes later, “La la la… did I tell you how well my presentation went today…”
I'm inclined to agree with the proposed tautological solution to the title question: if you define "internal experience" as exactly the subjective component of your internal state that you can't be wrong about, then I think that's perfectly reasonable. Of course this might differ from any objective physical state (even one that we don't fully understand), which you certainly can be wrong about.
This feels mostly like debating definitions though, IMO the more interesting part is, where does the subjective and objective diverge, and why? The post gives some interesting examples of such cases, but I'm left feeling a bit unresolved about the dynamics.
Likewise regarding lack of resolution; I’m almost tempted to reframe the whole thing in purely consequentialist terms: “if the goal is objective, then objectivity needs to be combined with subjective accounts. If the goal is subjective, then subjective accounts are most relevant.” This works in practical situations, but accomplishes little for advancing philosophical thought. (That almost feels like a tautology too, haha.)
This seems relevant, with other examples of ways people seem to be wrong about basic features of their own visual experiences: https://philpapers.org/rec/SCHHWD-2
And supposing that it's true that there are certain kinds of features of our own experiences we couldn't (or practically couldn't) be wrong about, we could still ask whether the claims under discussion in the jhana debate are of that kind. And at least some of them clearly aren't. Claims like "this experience is ten times as intense as orgasm" are the sort one could be wrong about even if we think our epistemic access to our experiences is pretty good, because it involves using memory and comparing experiences of different kinds and at different times, among other things. "This hunger is twice as intense as the thirst I had yesterday" can be mistaken even if "I feel hungry right now" can't.
Thank you, this seems to align with my view. It seems pretty obvious that a claim like "this experience is ten times as intense as orgasm" is just made up nonsense. Experiences are ordinal, not cardinal. All you can honestly say is that "this experience is more intense than that experience." There is no way to assign any cardinality to it.
> Without assuming veridical interpretation of numbers, (Narens 1996) formulated another property that, if sustained, meant that respondents could make ratio scaled judgments, namely, if y is judged p times x, z is judged q times y, and if y' is judged q times x, z' is judged p times y', then z should equal z'. This property has been sustained in a variety of situations (Ellermeier & Faulhammer 2000, Zimmer 2005).
> Activation of neurons by sensory stimuli in many parts of the brain is by a proportional law: neurons change their spike rate by about 10–30%, when a stimulus (e.g. a natural scene for vision) has been applied. However, as Scheler (2017)[23] showed, the population distribution of the intrinsic excitability or gain of a neuron is a heavy tail distribution, more precisely a lognormal shape, which is equivalent to a logarithmic coding scheme. Neurons may therefore spike with 5–10 fold different mean rates. Obviously, this increases the dynamic range of a neuronal population, while stimulus-derived changes remain small and linear proportional.
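The commutativity property quoted above can be sketched numerically. Here is a minimal Python illustration of my own (not from Narens' paper), assuming one standard psychophysical model, a power-law response ψ(s) = s^β, with β chosen arbitrarily:

```python
# "y is judged p times x" is modeled as psi(y) == p * psi(x) for a
# power-law response psi(s) = s**beta (beta = 0.6 is an arbitrary choice).
# Narens' property then reduces to p*q == q*p: judging "p times, then q
# times" lands on the same stimulus as "q times, then p times".

def judged_times(p, x, beta=0.6):
    """Return the stimulus y whose judged magnitude is p times that of x."""
    return (p ** (1 / beta)) * x

x = 10.0
p, q = 2.0, 3.0

z = judged_times(q, judged_times(p, x))        # y judged p times x, then z judged q times y
z_prime = judged_times(p, judged_times(q, x))  # y' judged q times x, then z' judged p times y'

assert abs(z - z_prime) < 1e-9  # the property holds for any beta
```

The point of the experiments cited in the quote is that real respondents satisfy this consistency check, which is evidence their judgments behave like a ratio scale rather than a merely ordinal one.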
That's interesting, thanks. However, it seems to me that Stevens' power law compares perceived magnitudes of different intensities of the same stimulus. There is no way to compare the perceived magnitudes of different stimuli, let alone complex experiences. So I stand by my comment.
It depends, then, on how literal they were with that orgasm comparison (indeed, probably not very?)
Like, if you were hallucinating a sound, you could try comparing it to your general experience of sounds?
Maybe not to the point of being able to do science, though, since that requires an "all else kept equal" setup, which seems very hard to pull off for a state where you hallucinate being "somewhere else" and/or are more or less cut off from outside stimuli?
In my experience most of the problems go away if the sentences are rephrased without an "I" (i.e. pronoun plus form of to-be) and instead describe what goes on as a noticing.
In the first example, someone doesn't notice the feeling of hunger. Done. They may lie about it or not, but the noticing makes the process clear and doesn't attribute it to some inherent quality.
The time example is more difficult because multiple different things get lumped into time perception: A) Density of action or experiences: times without action can be boring and feel long, and get described as time going slow.
B) Felt measures of biological clocks: there can be a distinct sense of urgency, or of time having passed, without explicitly checking a clock and independent of the amount of things going on. The same type of clock that allows some people to wake up at a planned time.
Additionally, for A, there is a distinction between noticing actions in short-term memory versus long-term or episodic memory.
This mix leads to the feeling one may have sometimes that time seems to go slow and fast at the same time depending on how you look at it. A bit like an optical illusion with two readings as in the bearded face/woman under tree example.
Depending on these cases the example would be rephrased as
- noticing there is a lot going on right now (maybe interpreted as time going fast)
- noticing a memory of an episode with many activities (maybe interpreted as time going slow)
- noticing a feeling of urgency, e.g., of getting a train (maybe interpreted as time going fast)
I suspect there's something along these lines which is akin to the placebo effect. Any time there is a trial of a new medication which is ultimately ineffective, there are always a small number of people who feel "much better" from something which doesn't work. Indeed, they may become staunch proponents of the treatment. Either this needs to be a very niche positive effect (only works on people with, e.g., rare mutations), or people have tricked themselves into believing that the medication is doing something positive.
No, that's just regression to the mean: some number of people would randomly have felt better anyway and happened to have undergone treatment around the time they started recovering naturally.
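A toy simulation (my own; all the numbers are arbitrary) shows how regression to the mean produces apparent improvement with zero treatment effect: model symptom severity as a stable trait plus transient noise, enroll people only when they feel unusually bad, and remeasure later.

```python
import random

random.seed(1)

# stable per-person trait severity, plus transient day-to-day noise
people = [random.gauss(10, 1) for _ in range(50_000)]
baseline = [(t, t + random.gauss(0, 2)) for t in people]

# people enroll in the (useless) trial only when they feel unusually bad
enrolled = [(t, s) for t, s in baseline if s > 13]

# at follow-up the transient noise is redrawn; there is no treatment effect
followup = [t + random.gauss(0, 2) for t, _ in enrolled]

mean_baseline = sum(s for _, s in enrolled) / len(enrolled)
mean_followup = sum(followup) / len(followup)

# the enrolled group looks substantially "improved" anyway,
# because the unusually-bad noise that got them enrolled doesn't recur
assert mean_followup < mean_baseline - 1.0
```

This is exactly why trials need a control arm: both arms regress toward the mean equally, so only a difference between arms counts as a treatment effect.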
Well, might as well take this to the controversial places it wants to go.
If someone says, "I experience that I am a *different gender than my physical gender*," can they possibly be experiencing the feelings of that other gender? Have they somehow mastered complete telepathic empathy and understanding?
What they mean can only be, "I am experiencing that I am feeling what I believe that other gender feels."
(To be clear, I don't actually care whether gender revolves around feeling a _correct_ feeling or not.) But aren't people clearly wrong about their own experiences when they claim to experience something whose correct experience they cannot know?
So while there are very few externally objectively correct experiences, this is an example of one of them. And once we've demonstrated that some experiences are objectively experienced wrongly, where should we draw the line?
Considering the intro (“might as well take this to the controversial places”) acknowledging how upsetting the topic is for many folks, and the conclusion (“objectively correct”) asserted without any evidence, I am not confident in good-faith discussion, but I will go ahead and offer some anecdotal data, as the topic is pretty close to my heart.
First, a number of my best friends are trans, and they all had to spend YEARS developing any confidence that they share any experiences with other trans people of the same gender as them, let alone cis people of the same gender. Transitioning is not about perfect emulation of a cisgendered person’s experience--and in fact, many trans people tend to feel they have more in common with other trans people of various genders than cis people of the same gender. No one is making the claim of identical qualia REGARDLESS of whether self reports of qualia are reliable.
Second, primary goals for transitioning are usually a combination of alleviating anxiety & alienation, increasing confidence & self-acceptance, and saving lives. None of these goals are “wrong” or need any “lines drawn” to me, as I approach the topic with basic humanist consequentialism. If you think otherwise, I highly recommend including some notes on what moral perspective you are approaching the topic to better facilitate discussion and transparency.
Third: physical sex is expressed in bone and fat density, hair, genitalia, vocalization, etc., along scales with no clear line between one and the other, only a pair of normal curves that meet in the middle. Physical gender, on the other hand, is expressed in neuron structures, with a striking amount of structural similarity between cis women attracted to men and trans women attracted to men, followed by a high similarity between trans men attracted to women and cis men attracted to women, and finally either medium-high similarities or a lack of data for various other genders, sexualities, and sexes. (As there are ten times more neurons in a human brain than there are humans, with 2^100,000,000,000 possible states even without counting connectivity states and tubulin data, there are of course no identical human brains: not among cis people of the same gender, nor even identical twins; only patterns and divergences.)
I'm reminded of a line from Dennett's "Quining Qualia" where he quotes Wittgenstein:
"Imagine someone saying: 'But I know how tall I am!' and laying his hand on top of his head to prove it." (Wittgenstein, 1958, p.96) By diminishing one's claim until there is nothing left to be right or wrong about, one can achieve a certain empty invincibility..."
Basically, the only sense in which we're guaranteed to be right about what we're experiencing is a sense in which claims about what we're experiencing are pretty much devoid of content. If feeling hunger is a matter of being in a state with typical causes/effects, you can be wrong about whether you're feeling hunger. But if feeling hunger is just being inclined to call whatever state you're in "hunger", then sure you can't be wrong when you say you're feeling hunger, but that's not because there's some substantive fact about yourself that you're reliably tracking.
And to continue on the Wittgenstein thing, as soon as anything is reported (and probably before, since there's no private language in Wittgenstein), language gets into it and it's a whole mess. Even *if* we somehow grant that you can't be mistaken about your sensation of hunger, once you start to *report* it, any notion of it being private goes out the window.
Or rather: Quine did indeed argue that you can never be *certain* about a translation into a different language (the famous Gavagai thought experiment).
Wittgenstein rather argues that you cannot have a private language, as you have nothing to check it against.
What counts as something valid for being checked against? If a group of speakers only speak a single language together, it doesn't seem like they have anything external to check their language against, and everything that they "say" would be self-referential and contentless.
Yes, this is Wittgenstein's point (apart from it being contentless, that is). The only point to language is communication, so you have nothing to stand on except if you seem to be managing to communicate with people. There is no external "meaning" to language, nor any rules, outside of this. Language is a "game".
'The language is meant to serve for communication between a builder A and an assistant B. A is building with building-stones: there are blocks, pillars, slabs and beams. B has to pass the stones, in the order in which A needs them. For this purpose they use a language consisting of the words "block", "pillar" "slab", "beam". A calls them out; — B brings the stone which he has learnt to bring at such-and-such a call. Conceive this as a complete primitive language.'
It doesn't seem like the workers are communicating anything that is heterogeneous between them. They may as well be parts of a single machine using nerve impulses or a serial bus. Physically, this seems fine. We can treat them as a single entity. But then it's no longer clear why their activity is different from a "private language".
The ideas in Wittgenstein is that since a language only has any meaning as communication, a private language doesn't make sense. You can test your regular language by talking to people and see if it works, but how would you test your private language?
The question of whether a person experienced a jhana is unlike the hunger or happiness subjectivity question -- in my mind -- because jhanas are a thing described with reference to an external tradition of practice and expertise. Like the woman claiming enlightenment, the question is in reference to an outside body of wisdom accessed through teachers who presumably are further down the road than the person reporting the subjective jhana or recent enlightenment experience.
Picture a new-ish yoga student who has mainly learned yoga from YouTube videos and books. After some diligent practice, they feel they've nailed Half Moon pose. They're like "yep, it looks right, it feels right; I've got it." But then some weeks later they have the chance to take an in-person yoga class with an experienced teacher who says, "actually, your hips do this in half moon pose, your leg goes here instead, and the whole pose should feel more like X than Y." The yoga student wasn't lying about their experience before, but they lacked sufficient background and context to accurately assess their experience relative to the tradition in which they were practicing.
Many comments in the jhana discussion seemed to argue that people were intentionally lying about their jhana experience in order to seem special. Many of these people seemed to dismiss jhanas as a real experience because it seems to them supernatural like levitating or mind reading. Once you step inside the Buddhist tradition, it becomes clear jhanas are not a magical supernatural kind of thing. But also, one could see how people reading books and practicing at home without working with a teacher might also be guessing about things they don't know a whole lot about -- and might also be bragging to get attention. I don't imagine that's what most people are doing, but you could see how some people might. How do we describe that yoga student's experience relative to half moon pose -- mistaken, I guess we would say, right? Not mistaken about how it felt to be in what they thought was half moon pose, but mistaken that they had accomplished what the yoga field calls half moon pose.
There's a whole spectrum of ambiguity that also exists because different yoga (or meditation) teachers might disagree somewhat about whether the thing being described or performed (whether jhana or half moon pose) constitutes an accurate instance of that thing or not.
Another example might be whether a person had a manic episode or not -- a huge amount of psychiatric diagnosis falls into this realm of question. There's the person's self-described experience; there's an external body of expert information (the DSM, research, etc); and there's someone with more experience (one or more clinicians) assessing it. There exists room for error -- lying, mistakes, confusion, inaccuracy, expert disagreement -- at all of three levels.
Delusion is a word that gets used a lot in both Buddhism and psychology to describe the situation in which a person claims to not be having an experience (like anger, say) even though their behavior strongly suggests they are having that experience. The whole parade of psychological defenses exists in this same weird territory where people at one level are having an experience (or part of them is) and at another level they are disowning or unaware that they are having it. Because we aren't these unitary selves, I think it's often possible to be deluded about our own subjective experience. But the assessment of delusion is made (often controversially) by someone from the outside who brings some deeper or wider expertise about how to identify delusion.
If you tell someone what they're going to experience if they do something right, that's going to dramatically increase the chance that they experience - or think they experience, or report experiencing - this.
> One might think that the blind [...] would be immune to such errors, but that is not the case. For example, one of the two blind subjects [...] believed that his ability to avoid collisions with objects was supported by cutaneous sensations in his forehead and that sound was irrelevant and "distracted" him. [...] [I]t was only after a long series of experiments, with and without auditory information, and several resultant collisions, that he was finally convinced that his judgments were based on auditory perception. Similarly, Philip Worchel and Karl M. Dallenbach report a nearly blind subject convinced that he detected the presence of objects by pressures on his face. [...] In fact, so common was it [...] for blind people to think that they detected objects by feeling pressures on their faces, rather than by echolocation, that their ability was originally called "facial vision."
On a quick perusal, I don't see the method they used to rule out the face stuff. My initial thought is: sound hits the body everywhere, not just the ears, so maybe their faces actually are sensitive enough to pick up some sounds.
What if, after many vision tests (and a few hundred terrible car accidents), I know that when I subjectively see “red” I could be seeing red, or I could be seeing what others call green. I see both as “red.” Knowing this, I say, “I believe I see red.” This is subjectively true, but also highly equivocal. For example, if I say “I believe I see red” while driving toward a traffic light, I’m probably bracing for a *possible* collision. Does this equivocal state fit neatly into your two categories of subjective experience?
The thing is that if this were true, you wouldn't have car accidents, and no vision test would detect this. In fact, assuming that this had been the case for you since birth, you would never know or have any way of suspecting. All your knowledge of color names comes from people describing things that you see. If people describe a certain wavelength of light to you as red when you are first learning language, then that's red to you, full stop. The traffic light on the bottom would still be green to you, and the one on top would be red, and you would stop or go as appropriate. You would never have known anything different. All of which gets into the issues several people have raised about the potential disconnect between subjective experience and the language used to describe that experience.
No, that's not right. As a toddler he would have been confused by people saying "this is red" and "this is green" when pointing to what looked like the same thing to him. Later, he would probably have been diagnosed with some form of red-green colour blindness, and then he would either never drive, or drive and learn a workaround like "the red-looking light at the top means stop, the red-looking light at the bottom means go."
I've taken to using the distinction of signal privilege vs representational privilege to talk about this. (https://tis.so/the-limits-of-signal-privilege) In those terms, I would say that yes, you can represent your own signals incorrectly even if you can't be wrong about the signals themselves. If someone else describes different affordances that will interact with your future signal than you do, they can certainly be more correct than you ("I'll ice my knee and feel better!" "You literally don't have a knee."). So ultimately whether you want to call it "wrong" or not depends on whether you're talking in a signal sense or a representational sense.
This is basically another angle on Wittgenstein's idea about a "private language": if you rephrase your question as "can people ever fail to be fluent in their private language?", you can see that you've already gone too far by assuming the private language must exist.
Very interesting stuff. I'm curious, though, what it would mean to see a 7-dimensional object. Consider this situation: I tell you that I perceived a 7-dimensional object while in an altered state, but I only have a popular conception of dimensionality. When you explain rectilinear dimensionality, i.e. that the count of dimensions is a measure of how many lines you can draw which are at right angles to all other lines in the set, I consider it and say: I did not understand what I was saying; I had an impression that seemed to me 7-dimensional, but it was not actually a 7-dimensional object by this definition, a definition which I accept as more true and meaningful than my previous concept of dimensionality. Then I have gained a sort of enlightenment which shows my previous error. Or have I misunderstood?
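For what it's worth, the rectilinear definition can be made concrete in a few lines (a sketch of my own, not from the comment): the dimension of the ambient space caps how many mutually perpendicular nonzero directions fit in it, so seven such axes cannot exist in ordinary 3-space.

```python
# The rectilinear notion of dimension: a space's dimension bounds how many
# mutually orthogonal (pairwise-perpendicular) nonzero vectors it can hold.
# R^3 admits 3, never 7, so "seeing a 7D object" can't literally mean seven
# mutually perpendicular axes in ordinary visual space.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def mutually_orthogonal(vectors):
    """True if every distinct pair of vectors is perpendicular."""
    return all(
        dot(vectors[i], vectors[j]) == 0
        for i in range(len(vectors))
        for j in range(i + 1, len(vectors))
    )

axes_3d = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
assert mutually_orthogonal(axes_3d)

# any additional nonzero vector in R^3 must have a nonzero component along
# some axis, breaking orthogonality with that axis
assert not mutually_orthogonal(axes_3d + [(1, 1, 0)])
```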
Yeah, there's a lot of stuff going on here, tough to sort it all into neat categories. As a general rule, honesty is important to keep distinct from truth, since there's a definite tendency to conflate the two. But when talking about personal experiences, honesty and truth are more entangled than usual, making it very difficult to keep the two apart.
I think it's worthwhile to bring up the idea that people interpret their own experiences. Eyeballs send the image, but you decide what you're looking at. This interpretation is something that you learn, sort of like walking. It is a skill that you build and expand upon, until the interpretation is so baked into your everyday existence that it feels like you aren't doing anything. Memories are formed, then they are interpreted, then the initial memory fades away, while the interpretation sticks around. In this sense, people can be simultaneously honest and wrong about their own experiences, correctly stating the interpretation, while completely failing to account for the initial, less filtered experience.
If you carry this model through to its natural conclusion, you'll notice that it allows for a sort of roundabout dishonesty. If I build a system of interpretation which biases my memories towards a dishonest interpretation of reality, then I use that system so frequently that it becomes second nature (like telling my body to walk), then I've effectively created a way to be honestly dishonest. My interpretations will always be genuine, even if I was acting dishonestly when I created the system which creates my interpretations.
I hope some of that is insightful. This is a pretty neat topic to think about!
Is this what happens with a Delusional Disorder? The interpretation filters are broken and the memories are then malformed. But the paths through the filters become trails then roads etc.
My other question is about the woman who said she was not thinking while the mystic revealed her to have been thinking. You say if she was lying she could just continue the deception but this seems to presume that she knew the truth herself but tried to deceive others. What about the case where she deceives herself? Where she experiences the qualia of thinking but 'talks herself out of it'? You seem to presume a simple and undivided conscious self, is that how you experience yourself?
So it's probably pointless to discuss this because the real answer is "I'll never be in another brain so who knows?"
But I think it's much much more complicated than you're making it here. For instance, you discuss people talking about time slowing down on salvia and explain that there are three "levels" on which they can be wrong, and while they probably didn't really have "more time" in some objective sense, it's absurd to say they were wrong about their subjective perceptions.
But then you add in a parenthetical that literally says just that - the person you're speaking to, the person who is not currently on salvia but who is talking about their memory of a subjective perception, was wrong about that memory.
This happens *all the time.* Constantly I'll remember hating a movie that the internet tells me was bad, and then get corrected that during the movie I actually was really into it, or vice versa. For years, psychiatrists thought they could uncover repressed memories of deep trauma, and the patients legitimately thought they'd had the subjective experience of that trauma. Job interviewers are more likely to hire the first or last person they interviewed, because they have a stronger recall of their subjective perception of them.
People online who want to hear voices or have multiple personalities badly enough genuinely think those things are happening - more likely they're choosing to remember a stray thought as audible, and to give it more power in their memories than it had in real life than they're legitimately lying. And people who claim to have experienced an orgasmic state of bliss from meditation probably didn't but probably genuinely remember having done so.
I don't understand what "wrong" means in this context. I've only used salvia once because it scared the shit out of me. I understand that gravity's vector did not objectively shift 90 degrees and that my body was not being cut into infinitesimally thin slices by a razor sharp filament. To an external observer, I would have appeared to be laying down on my friends' couch for 15 minutes, pressed back hard into the cushions. But I sure as shit experienced those things at the time.
You sure as shit experienced something corresponding to your comment, but I think it very likely that you have processed the experience to give it a coherence which it lacked in the moment.
Robert Jones has it. Basically, our perceiving and remembering selves have two different goals. Our perceiving self experiences sensation, and our remembering self tries to put that sensation into some kind of context. That includes changing our memories of subjective experiences if they don't align with the story our remembering self wants to tell.
Something that seems to be missing here is the concept of effective communication and interpretation. It may be that the thesis here is correct, that a person cannot be wrong about their internal experience. However, some context is required to interpret many of the statements given as examples as being about internal experience. For the person who claims to no longer be angry about their father, there is some true thing about their perception of their internal experience of their emotions about their father. There is also some false meaning that is indicated by their behavior when their father is brought up. Depending on the context, the statement they choose to make may indicate one or the other of these meanings. In many such cases the meaning more likely to be interpreted is the false one, which would make the person wrong in having chosen those words to convey something about which they are not wrong.
3 dots on 3 planes = 6 degrees of freedom: I guess so? That's assuming the planes are unrelated to each other, e.g., you don't know if they're parallel or how far they are from each other.
3 dots on 3 planes = 3 degrees of freedom plus a new dimension: Um, what? And what's this about oscillators, & what does it have to do with the pictures of dots following curves? The dots on the different planes can't be coupled oscillators, because we would need more degrees of freedom to know the relationships between the planes (or between the dots). My guess is that he's drawing a single dot traversing 3-space and projected onto 3 different planar slices, but you have to add a lot more information than the "3 planes, 3 dots" scenario gave you.
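The guess in that last sentence can be made concrete with a toy sketch (my own construction, not taken from the original diagram): a single point tracing a curve through 3-space, projected onto three coordinate planes. Each projection looks like an independent dot following its own planar curve, yet all three share the same three underlying degrees of freedom.

```python
import numpy as np

# One point tracing a curve in 3-space: 3 degrees of freedom total.
t = np.linspace(0, 2 * np.pi, 100)
trajectory = np.stack([np.cos(t), np.sin(2 * t), t / np.pi], axis=1)  # shape (100, 3)

# Project onto the xy-, yz-, and xz-planes by dropping one coordinate each.
xy = trajectory[:, [0, 1]]
yz = trajectory[:, [1, 2]]
xz = trajectory[:, [0, 2]]

# The three planar dots are perfectly correlated: the full 3D point
# determines all of them, so no extra degrees of freedom are needed -
# unlike the "3 unrelated dots on 3 unrelated planes" reading, which would.
print(xy.shape, yz.shape, xz.shape)
```

If the diagram really is this kind of projection, the "coupling" between the dots is just the shared trajectory, not any oscillator dynamics.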
I liked this a lot, probably because it confirms some of my own thinking on this, where you have some sort of world modeler in your head that generates and experiences qualia, the sensory apparatus hooked up to that world model, and the external reality feeding the sensors.
I can give myself goosebumps at will by convincing myself that I am cold. I can even make it happen on just particular parts of my body. I experience the qualia of cold and my guess is the wiring that normally goes from my skin to my brain is at least slightly two way. Of course I never actually change my environment but the model in my head can exert control over things that it is wired to.
So, I think I'd agree with what you're saying, that people can't be wrong about their internal experience to some extent. I may not be understanding correctly what you're trying to reconcile/justify at the end with the homunculus fallacy. I know that even if I convince myself I'm happy, my mental model can't stay out of sync with my own internal state and external reality forever, or at least not optimally. But I don't think that requires there to be some separate experiencer that exists on its own outside of the rest of me; then again, I think I might be missing a piece there.
I always think of all three pieces of this model as being parts of an agent rather than one specific component being the “real” agent. The same way intelligence isn’t a single magical something but a series of specific inputs and information transformations. That stuff functions interdependently. Does that resolve the need for a tiny person watching a tv in our heads?
It seems clear that the brain is always doing a million different things at once.
To me, it would be strange if only a tiny handful of those things produced conscious experiences. The idea that the auditory system stops producing conscious experiences when we 'tune out' a noise, then starts producing them again when we 'tune in' to that noise, despite the system doing almost all the same things in either case, seems absurd.
It makes much more sense to me that the brain is producing all kinds of different conscious experiences at any moment with all of its different processes. And that what we understand as 'our' conscious experience mostly has to do with which of those get encoded as memories, and which of those get control of areas relating to reporting those experiences.
Split-brain patients are a good example here. It seems very clear that both hemispheres of the brain continue to 'think', at a level where it would be surprising if they aren't both producing conscious experience. Yet the conscious experiences that split brain patients verbally report are those with access to the verbal centers of the brain. It takes very precise manipulations to get reports from the parts that don't have access to the speech centers, but when you do they seem to report entirely different thought processes (that I assume produce qualia based on their complexity).
Another good example is sleep and 'unconscious' states. Evidence from twilight drugs is suggestive to me that we are never 'unconscious' so long as the brain is working, we merely have times when conscious experiences aren't encoded as memories and therefore fail to become part of our self-reported history or self-concept. Similarly, while some people report dreaming often and some don't, it seems clear that everyone does dream (the brain is doing the same types of things in each case) and the main difference is again whether those experiences are encoded to memory.
Thus, I think the simple ways people can be 'wrong' about their conscious experiences are cases where significant conscious experiences do not have access to reporting mechanisms or encoding mechanisms. People may be wrong in their memories of what they experienced, even with a .1 second delay in some cases. And they may be having experiences that are not being 'noticed' by the part of the brain that is talking to you, as with the 'not having any thoughts' example above.
As others have mentioned some puzzling experiences with perception, the famous experiments by Gibson showing that the sense organs participate in data collection should be referenced. I find it interesting that we can reflect on our experience at other cognitive levels, like the experience of interpreting an idea or meaning (about an optical illusion like the Fraser spiral), or the experience of verifying (or in this case falsifying!) in judgment the fact that the spiral actually is composed of circles. The experience of experience itself is unique compared to other levels in the scientific method (i.e., methods that result in knowledge, or *scientia*, the Latin root). In any case, the border areas where we might be wrong about our own experiences are dealt with in metaphysics, and worth studying if you are concerned with reality, existence and being.
I think the issue comes down to the fact that, in a structured phrase "I am experiencing [...]", everything after "I am experiencing" is an attempt to communicate some internal experience; observing that, "You're not experiencing X, you're experiencing Y" is an attempt to correct not your internal experience, but rather the language you use to attempt to communicate that experience.
Scott please read my narrative of my phenomenology of a psychotic episode found in the blog post "Yoko Taro is a Dragon from the Future" and give me a brief inventory of which things you think have ontology to them, which are confabulation, and which are whatever else. I have been trying to get any such analysis done forever and mostly people just become agitated and hostile when I try
"I'm not angry!", said by someone very clearly angry.
I don't think this person is somehow not experiencing the qualia of anger. I just think they (like most people most of the time) are not introspecting on their current emotional state. They are genuinely mistaken about it.
At least one aspect of this is that words are slippery and experiences are described within a cultural context. I'm from a British culture rather than an American culture, and view American emotions as basically "performing" what they have been told to do via movies and TV for over a hundred years.
So, to take one example, grief. The combination of my personality and my culture mean that I don't "perform" grief and, more than that, I think I mostly don't feel grief the way others act it out. An American psychiatrist (perhaps not our host!) might insist that I have some sort of PTSD, that I am so upset by the loss of a loved one that I simply refuse to process the experience. Well, you can insist that all you like, but I think it's BS and an example of US cultural projection.
So that's one example of how things can get lost in communication, and how something like jhana can be described differently by different people.
Another example is how I interpret what was experienced. Like Tolstoy, in my teens I suffered from complex partial seizures which are a form of epilepsy that, in my case, never took over my body, but did have my mind experiencing something like a very pleasant waking dream.
If I were a religious person, I'd probably claim this as some sort of religious experience, interpret in that light, and very soon I'd be remembering it that way, not just a pleasant waking dream but angels, god talking to me, etc etc.
If I were a junkie, I'd probably claim this as some sort of drug experience and use it to justify my on-going habit; I'd remember it in the context of whatever drug I used and would claim the two as parallel experiences that both make me a superior human being to the rest of you squares.
But I am a boring, science-minded square, so I remember these as pleasant waking dreams, as devoid of meaning as the other random experiences of dreaming. And because I don't see any reason to project meaning into them, I don't project meaning into them, and they don't grow to something beyond what they initially were.
I recently watched this video by MachineLearningStreetTalk [1] which had clips that might be useful, so I'll summarize them below.
In the intro, John Searle gives a lecture on consciousness in AI where he describes the categories of subjective vs objective epistemology (knowledge), and subjective vs objective ontology (existence). He says that "lots of phenomena that are ontologically subjective admit of an account which is epistemically objective", and explains how this is crucial for developing a science of consciousness. He draws a distinction between phenomena that are observer-dependent and observer-relative, and gives an example of how money is observer-relative since its value depends on the user. Since all observer-relative phenomena are created by human consciousness, they contain an element of ontological subjectivity. Yet we can still have an epistemically objective science of a domain that is observer-relative, i.e. an objective science of economics. Perhaps that isn't the best example, but I think the point still stands. (Full lecture [2])
In a later section [3], Karl Friston gives a computationalist perspective on how feelings or phenomenology may emerge from an in silico replica, as hypotheses generated by a separate model which takes as input data from all underlying models involved in planning, exteroception, interoception, etc. From this perspective it seems easy to reason about how a person's subjective interpretation of ontologically objective information can be incorrect or inconclusive. He also touches on chronic pain, which can be psychologically driven. Another interesting example is alexithymia, in which an individual is unable to identify and describe the emotions they experience, and which is associated with impaired interoception.
+1 I was waiting for someone to mention the words "interoception" and "alexithymia". In Lisa Feldman Barrett's account of emotion, emotion is an interpretation of one's physical sensations and context. It's not that you ARE angry, it's that your body has a series of physical reactions which you interpret as anger and which you then experience as anger. But under a different set of circumstances, that same reaction could feel like love. This type of thing happens all the time. As someone with moderate alexithymia, I am often wrong about what emotion I am experiencing (though I'm getting better). So yes, you can be wrong about the physiological reaction you are experiencing, but I don't think you can be wrong about the qualia.
> So yes, you can be wrong about the physiological reaction you are experiencing, but I don't think you can be wrong about the qualia
Really well said. Shameless plug, but I'm actually working on an app that aims to improve alexithymia. We just released an MVP and are investigating new tools; feel free to join the Discord if you have anything to share or suggest! https://www.animiapp.com/
While the topic itself is incredibly complicated to answer, and the best answer I could provide would fall short, I feel the topic hasn’t been done justice until we address the sheer amount of cognitive bias towards normative information that has been expressed recently in discussions. In other words, people are more likely to believe a description of familiar experiences and disbelieve an unfamiliar experience EVEN IF the unfamiliar experience is calculably MORE LIKELY than the familiar ones--- I have been the subject of this dozens of times.
The best example was an appointment with a nurse I had many years ago. I was very unwell but had no idea what was happening to me and thus couldn’t describe my experiences well.
The nurse tried to ask “Did you pass out?”
I insisted, “Well, I described what symptoms I could. I’m not sure if I passed out.”
She got really angry at me, and said, word-for-word, “How could you not know if you passed out?!”
She was not the only medical professional to assume I must be able to distinguish consciousness from unconsciousness, despite that requiring a level of metacognitive skill that has to be developed and maintained, while the inverse requires none, making it the more likely case if no priors are applied.
The cause was eventually determined to be narcolepsy: experiencing a mix of consciousness levels simultaneously is a defining symptom of the condition. There is a little vindictive joy in being able to hold up the numbers and say “Ha, I was right!” but the deeper issue here was having my description of my subjective experience rejected in the first place, in spite of probability for no identifiable reason.
Reading the enlightenment example enlightened me. I legit stopped thinking thoughts right when I read that. Got 10% of my brainpower back for reading with an internal voice and looking at stuff.
I mentioned in the comments of the article on Jhana that I believe for every 1 person actually experiencing Jhana, there were many many more who believed they had reached it but actually didn’t. People have a good time meditating and see the positive benefits, so they convince themselves they’ve achieved Jhana. Without a baseline experience for the bliss of enlightenment, there’s no way for them to know the difference. I’ve had friends who, in response to me talking about the enlightenment of a mushroom trip, say something like “I already have that without shrooms.” When these people finally try mushrooms, they realize how completely wrong they were. They obviously weren’t lying, their mental models of the experience just sucked.
Another example I see of this is with elite endurance athletes. They will often say things like "when you think you've reached a wall, that's actually about 40% of what you're really capable of". There was a time when I was lifting, and a friend with far more experience put significantly more weight on the bench than I was used to doing. I told him it was way past my limits, but he insisted I was capable based on what he'd seen. He turned out to be right. Moral of the story is that, subjectively speaking, I had been pushing myself to the absolute limit, but my experience turned out to be wrong. I simply had no concept of what actually pushing myself really felt like.
What does this entail about depression and anxiety being so prevalent in a world of more material comfort than ever before? Perhaps people could benefit from engaging with the less fortunate to build a sense of gratitude.
I don't think it's that. I think it's that there's a whole bunch of different intensity levels for what's called "Jhana", which can smoothly blend into each other, and so the overall situation is kind of like someone from the Appalachians and someone from the Himalayas talking about their mountain-climbing experiences, and realizing that they mean something very different by "mountain-climbing".
It's not that the Himalayas are "true mountains" and the Appalachians are "false mountains", it's that "mountain" is just a super-vague term that points towards what they have in common instead of how they're (very) different.
Same thing with Jhana. The term desperately needs extra qualifiers added to it to slice things up more finely and make clear what's actually being claimed, it's denoting a pretty big chunk of mental territory.
Or you could just take the path of going "only things over 20,000 feet are real mountains! Almost nobody has ever climbed a real mountain!" and in the meantime Joe is still getting benefits and a nice time out of hiking in his local backyard hills.
Ahhh interesting analogy, that makes sense if the definition of Jhana encompasses a broader range of experience than I had thought. The way its often portrayed on here and elsewhere makes it seem like it’s on the extreme end of possible human experiences. I suppose the people being interviewed are something of “Jhana experts” who you could expect to have a more intense experience than casuals, just like professional climbers will have a more demanding experience hiking in the Himalayas than I do in the Appalachians, even if we’re both climbing mountains.
That said, even if they're both literally "mountain climbing", the 2 experiences are *not even close* to the same thing. I can climb the Appalachians for years and I would not have the slightest clue about the experience of climbing Everest, and nothing I've done before would prepare me. If the difference between "pro" and "casual" Jhana is as big as the difference between the Himalayas and Appalachians, then I feel like the already fuzzy and imprecise term loses all meaning. If that's the case, then I experience Jhana fairly often when I meditate for 10-15 minutes before bed. I feel pleasantly connected to my body and the universe during and after the sessions. There is usually a specific moment where I feel my consciousness shift into a deeper meditative state. Maybe this is a mild form of Jhana, but I am still a long way from the life-changing spiritual enlightenment people report having at the top levels.
This is why I’m really interested in hearing from people with considerable experience with *both* “Himalayan” Jhana and high dose psychedelics, I want to know how these experiences relate to one another. From the descriptions they sound very similar, and it would be useful to be able to use enlightenment of a mushroom trip as a “baseline” for what you can expect out of Jhana. With psychedelics, there is a considerable range of intensity depending on dosage and other factors. Still, there is a threshold you cross which serves as the dividing line between tripping and not. It’s fuzzy and tough to describe, but I know from experience it’s there, and you know when you’ve crossed it. Would be curious to see if it’s the same with Jhana.
I wonder if this is in any way related to the fundamental attribution error? We categorize the experience of others differently than we categorize our own? So someone else's mind includes the subconscious and their bodily states. Our own mind experiences the subconscious as foreign and part of the environment that the mind navigates.
So... it all depends on where we draw the circle that delineates 'self' from 'non-self.' Scott is describing people as they would describe themselves, with 'self' being only their conscious, holistic thoughts. Other people are including things like 'bodily states' and 'subconscious thoughts' in the definition of the self. I would guess that Scott's model allows more granularity and is therefore a bit more useful, provided that someone had the time and energy to sit and ponder.
In any case, as you mention, memories can be constructed after the fact. The past, then, is a foreign mind that we tend to describe as a non-foreign mind. If your memories of a past mind thinking in seven dimensions don't actually allow you to do useful computation in seven dimensions, then your memories of the past mind are false.
I'm not sure if I've said anything here that Scott didn't say. If this post is redundant to Scott or the thread (I haven't read the whole thread,) then I offer my sincere apologies.
The thing about qualia is that they're kind of irreducible things-in-themselves, that cannot actually be transmitted to other people.
You can see that something is red, and be absolutely certain of it. But if you say "red" to someone else, that's just a word you've learned, that doesn't inherently have any relation to the quale you experienced. We assume that different people have somehow similar qualia when they look at a red thing, but this is fundamentally an assumption, and not checkable.
This seems like a difficulty in resolving this question, because if someone is describing their qualia to someone else, they're passing it through a messy translation filter that might very well communicate something completely different from what they actually experienced; whereas if they just think about it to themselves, then the whole process is sufficiently self-referential as to be of questionable interest. If someone says that they saw a seven-dimensional object while on DMT -- what does that even mean? What is the "real", "true" quale of seeing a seven-dimensional object? Either you know that quale, or you don't; and in either case, you can't judge whether someone else is having it.
If someone is describing a quale, their description may very well be wrong. E.g. someone thinks they are having the quale "enlightenment", and says so, when they're really just a naturally content and happy person. But it's hard to say that this is being "wrong about their own experience", because it's fundamentally an interpretive claim _about_ their experience, not a direct transmission of it (which is impossible). I'm not sure what "being wrong about your own experience" would even mean.
Are we to assume there is one single step in brain computation in which data hits our "consciousness"? Clearly there is processing on the data before it hits our consciousness, and there is processing after.
Example of processing before consciousness: Your optic nerve gets pixel input, which is then translated to colors and shapes, which is then translated into the concept of a polar bear by checking against your world model and other objects you've seen before. You don't have a conscious experience of seeing lines and shapes, though; your first conscious experience is already seeing a polar bear.
Example of processing after consciousness: Let's say you squint harder and realise your first conscious impression was wrong, maybe it's actually a new kind of white bear but not a polar bear.
There may be ways to define what are right/wrong/good/bad ways of this processing occurring. So on DMT, the processing might be converting a drawing of waves into the concept of 7 dimensional objects because it wrongly decides to check and match against your knowledge of high school geometry. Whereas if you're not on DMT you're more likely to check against simple drawings of waves and be like "oh yeah that's just a wave".
But the even bigger question for me is whether there exists a single step at which things hit consciousness. If I write down the brain algorithm as a program with subroutines, will I be seeing an infinite loop* with exactly one "hit consciousness" step? Or, for instance, can I simultaneously be conscious of different parts of my brain outputting different things after different amounts of processing? Or can I sometimes skip the hit-consciousness step altogether, or sometimes hit it way too often inside a single loop iteration?
*technically it will terminate on death, and to some extent while sleeping but in a short time interval its practically an infinite loop.
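To make the question concrete, here's a purely hypothetical toy sketch (my own construction; the function names correspond to nothing in real neuroscience) of the architecture imagined above, with exactly one "hit consciousness" step per loop iteration:

```python
def make_brain_loop(subsystems, broadcast):
    """Toy architecture: subsystems run in parallel each tick, and a single
    broadcast step decides what becomes 'conscious' on that tick."""
    def brain_loop(stimuli):
        for stimulus in stimuli:  # practically an infinite loop in real life
            outputs = [process(stimulus) for process in subsystems]
            yield broadcast(outputs)  # the contested single step
    return brain_loop

# Two toy subsystems doing different amounts of processing.
visual = lambda x: ("saw", x)
auditory = lambda x: ("heard", x.upper())

# One candidate broadcast rule: only the first subsystem's output "wins".
loop = make_brain_loop([visual, auditory], broadcast=lambda outs: outs[0])
print(list(loop(["bear", "growl"])))
```

The alternatives raised above correspond to changing `broadcast`: returning several outputs at once (simultaneous consciousness of different subsystems), sometimes returning nothing (skipping the step), or calling it more than once per iteration.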
A friend introduced me to an Emerson quote this evening: "A foolish consistency is the hobgoblin of little minds".
I love that you're questioning and analyzing your own assertions here. At least speaking personally, it makes me upweight your thoughts and opinions as I assume they're all constantly being subjected to the same rigor (although I do have to ignore the whole coming-to-the-AI-X-risk-conclusion thing).
I think the fallacy is not the homunculus, but the excluded middle.
"This is also how I interpret people who say “I’m not still angry about my father”, but then every time you mention their father they storm off and won’t talk to you for the rest of the day. Clearly they still have some trauma about their father that they have to deal with. But it doesn’t manifest itself as a conscious feeling of anger."
In this situation, the way out is surely to say: this person both feels angry and doesn't feel angry. There are lots of ways this might happen. For example, they might have two different parts of their mind (e.g. conscious and subconscious), one of which is angry, and the other of which is not. Or they might have an emotion that they don't remember, e.g. they act angrily, but literally do not remember the emotion, so it doesn't feel to them in retrospect (sometimes very brief retrospect) as though they are angry. Or they might be feeling emotions that share some features with anger, and some features with not-anger: perhaps they feel happy, and don't realise that anger is compatible with happiness; and experience an undirected tension, which is like anger but doesn't have a target.
So it seems like there are lots of mechanisms by which one could feel both angry and not-angry; but the folk psychology theory of both the individual and the psychologist is ruling out that possibility for both, so they're stuck in a dichotomy of "either I'm not angry or I'm lying."
Well, I wouldn't no-way. We certainly seem to agree on a great many things, but the borders are interesting areas to investigate! Minsky's point about thoughts being ambiguous is that it's a feature, not a bug.
I used to experiment with lucid dreaming. I think of this as sort of being able to flip different qualia switches with my conscious mind. I’ve come to believe these switches do as little work as possible.
For example, I might flip a switch that says I’m listening to a symphony. My conscious mind really feels, “Wow, I am listening to a symphony with a perfect reproduction of all notes…isn’t it amazing that my mind can do this?” But, after long reflection, I don’t think my mind is actually reproducing the same qualia associated with listening to a symphony in an actual music hall. It’s just flipping the switch that makes me believe I am.
It’s a bit like asking people if they can picture a penny in their minds. Many honestly think that they can, yet when asked to draw, they can’t remember if Lincoln faces left or right or where the date goes.
There is an obviously vulnerable component in all this... the reporting part. That's the part that compiles what the theoretical inner experiencer feels into an outside report, even if no real "experiencer" part actually exists behind it.
If you subvert reporting in any way - not just by lying, but by denying it the experience to evaluate and transmit, or denying it the memory of having had a certain feeling because lower circuits didn't deem it important enough to encode and filtered it out, or because some part just didn't get enough attention to be included in the report despite being recallable later - you could have every feeling and still be wrong in reporting it.
People wouldn't be wrong in their feelings (as they certainly had them before they were filtered out), but they would be unable to report them.
And then other people could see something obvious reflected on your face as it happened (or read it off instruments), and correctly guess the underlying feeling, yet you would deny it happened, as you would have no memory of it to report.
"The map is not the territory" makes sense, directionally. But better yet, have a third level.
1. Natural language with all its ambiguity
2. Conceptual structure given an agreed upon set of axioms
3. What actually exists exceeds our capacity for modeling or communication
The word "map" often refers to a conflation of L1 and L2, and we have a tendency to overassociate our perspective on the world, what we sense and process, with L3.
When we swallow the bitter pill that we must write code (ie go to L2) to sidestep definitional debates, and give up hope of "being right" or "knowing the truth" except in the trivial academic/synthetic sense that given a rule for rewriting strings of symbols, one may apply the rule correctly or not, we open ourselves up to deeper levels of experience.
I prefer to use English words to those that require a whole cultural and textual tradition to understand, so for me, "Joy" feels more natural than "Jhana". That said, if life feels hard to enjoy, why not choose to find value in suffering? When we observe and study the data that comes thru the channels of pain, fear, anxiety, etc., we treat it as a resource, an asset on our mental balance sheet. And indeed, to suffer for deep purpose sounds better to me than to feel pleasure devoid of meaning.
Most contexts in which I've heard someone say "you're wrong about your experience," they're pretty clearly saying "your experienced qualia is poorly matched with some more objective reality". As applied to the aforementioned example of hunger: the idea isn't "you are wrong about not experiencing the qualia of hunger" it's "your digestive system is objectively asking for food and your perceptual engine is making a prioritization error by not noticing this and providing the qualia of hunger".
As applied to the "false enlightenment" case, the person had manipulated their perception prioritization engine into ignoring categories of experience for attention purposes (in a way apparently different from the intended goal? Most meditative experiences sound to me, from the outside, like increasingly elaborate manipulations of our perceptual engines, but I have no idea).
All to say: we don't usually mean "this qualia is wrong" we mean "this qualia is a poor map to reality". This is an important thing that comes up all the time. I can't think of a case in which being wrong about the qualia itself matters/has any impact distinct from the qualia being a poor match to reality.
I think the point of the "false enlightenment" story was that if it were "true enlightenment", then just waiting a bit and focusing on "do I have any thoughts? how about now?" wouldn't have broken the experience; hence it's largely the same as your first example.
"(though this study suggests a completely different thing is going on; people have normal speed, but retroactively remember things as lasting longer)"
I can very confidently assert that people having a bad trip on psychedelics keep on repeating from one minute to the next "How long is this going to take?" while obviously having a very unpleasant experience of subjective time dilation. There seems to be an element of amnesia to this: they forget that they asked the same question maybe twenty seconds ago, and incorrectly interpret that a very long time has passed. However, this is a very obvious immediate subjective experience while the trip is going on.
As contrasted with ketamine, where a lot of people report amnesia and a subjective experience of time going faster ("wow, where did the last hour or so go?"). So merely the amnesia part doesn't explain it for me.
I've never seen 'never trust a fart' given such philosophical depth.
Re jhanas: I don't know anything about Buddhism. So I don't have any idea what difference a Buddhist would see between 1) meditator A, seeking and finding jhana, and 2) meditator B, seeking and finding Paul Ekman/Darwin 'The Expression of the Emotions in Man and Animals'-style happiness.
One factor that complicates this, and that I never really appreciated until I had a toddler, is that we learn to express these inner states from other people. My son says he's hungry, and he might be hungry, he might be lying, or he might actually not fully understand what hungry means. In that sense I think he actually could be *wrong* about his internal state, because he doesn't fully understand the meaning of the language.
Right! And then they grow into adults with widely varying abilities to be aware of and name their internal experience. Which says to me we are all "wrong" to varying degrees about what our experience is and how best to describe it in a way that communicates well to others. The word "wrong" doesn't quite work for me here; maybe it's more like: at what levels of subtlety are we aware of our experience, and how good is our language for describing it? (ie, what's our basis for calling this experience "jhana" versus "a happy feeling while meditating"?)
I think that last paragraph actually catches a very important distinction. If we take a sort of No-Self as a premise (as contrasted with a permanent Self), the claim "I'm experiencing X" becomes "There is an experience of X", and the counter-claim becomes "No, there is not an experience of X."
It seems to me that the latter one is a harder claim to make than "No, you just think you're experiencing X." To claim there is a complete lack of experience X feels intuitively stronger than claiming someone is wrong about something.
I think this is due to the idea that we think of a person as an observer of their mind states, and that observer might be correct or incorrect in their observations. Whereas if we think of experiences as simply bubbling up, we remove that part of the equation. No mistaken observations occur, as there is no one observing. I guess this is at odds with most people's intuitive view on the matter, but as a meditator it seems very right and natural to me.
I don't think this means that people cannot be wrong about matters related to their cognition, but I do think it does make the claim "No, there is no experience of X" a lot harder to defend.
I think it's useful to distinguish between attention and conscious experience. It seems possible to have the latter without the former, and so by definition it will be possible to be mistaken about one's own experience (if one is not paying sufficient attention to it). For example, I could be in physical pain from sitting in my chair for too long while reading a book, and yet not notice this pain because I am so engrossed in the reading material. Suppose I stop to reflect now and realize that not only am I feeling a pain in my left foot at the moment, but that I have been feeling it for the past five minutes. If you had asked me during my inattentive phase whether I was in pain, I could have honestly reported in the negative. This seems to go beyond the "absent thoughts while meditating" example, since it's not that I made a past mistake in categorizing my thoughts (as being a 'pure conscious stream' or whatever), but that I really missed a conscious experience.
Of course, in normal situations our being prompted by a question would be a sufficient stimulus to activate our attentional mechanisms, but it is at least conceivable that we can fail at this task. It's possible to go so far as to say that our attentional mechanisms are perfect and incapable of missing something, but that seems a really strong (and demonstrably wrong) claim.
It might also be countered that attention = consciousness, and so I wasn’t really consciously aware of the pain until the moment I realized it. But this still commits us to introspective error, since that would mean that my realization that “Aha, I have been in pain for the past five minutes” is itself erroneous. This does leave open the possibility that the more moderate claim of "we can't be wrong about the experiences we are presently aware of" is infallible though (which is perhaps what you had in mind anyways?).
Just to clarify, I envision the remembering of the past pain not as the realization that one is presently in pain and that probably this has gone on for quite some time, but rather as the true remembrance of a past event with the newfound sensation of pain in the left leg being present in the memory. I myself have experienced this on many occasions.
In my very uninformed model of conscious sensation:
feeling of X = some deep neural network spread around my brain has spit out a low-dimensional output that I've been calling "X"
I'm pretty sure I'm just conflating ANN 101 with the vastly more complex brain network, but funnily enough this model seems to reduce the problem of this post to triviality.
If the "hunger" output is produced by the network beyond a certain threshold, I feel hungry, otherwise I don't. And sometimes the many inputs to the network happen to be *almost* right for "hunger" but in an unfamiliar/untrained combination that doesn't trigger the output enough.
So saying "I'm not hungry" is a statement about the output reading, not the inputs.
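A minimal toy sketch of that threshold model (my own illustration; the labels, weights, and threshold are all made-up stand-ins, not anything about real brains):

```python
# Toy model: a "feeling" is a label attached to a network output that
# crosses a threshold. The verbal report is about the outputs, not the inputs.

def interoceptive_report(inputs, weights, threshold=0.5):
    """Return the feeling-labels whose (toy, linear) activation exceeds the
    threshold. Two very different input patterns can yield the same report,
    and a near-threshold activation yields no report at all."""
    reports = []
    for label, ws in weights.items():
        activation = sum(inputs.get(k, 0.0) * w for k, w in ws.items())
        if activation > threshold:
            reports.append(label)
    return reports

# "Hungry" driven by two hypothetical input signals. An unfamiliar mix of
# inputs can sit just under threshold: no "hungry" report, even though the
# inputs are almost the usual hunger pattern.
weights = {"hungry": {"low_blood_sugar": 0.4, "empty_stomach": 0.4}}
print(interoceptive_report({"low_blood_sugar": 1.0, "empty_stomach": 1.0}, weights))  # ['hungry']
print(interoceptive_report({"low_blood_sugar": 1.0, "empty_stomach": 0.2}, weights))  # []
```

On this sketch, "I'm not hungry" is a true statement about the second output reading even when the inputs are almost the hunger pattern, which is the sense in which the report is about outputs rather than inputs.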
This also works alright for the statements "I can see in 7 dimensions" and "I'm enlightened". In this case the error is in what label we give to a new network output.
I'm writing this because it seems to work too well to be right, and I'm hoping to get a good scolding and a solid update to my beliefs.
Sure they can: "I've been experiencing blue sky at 12:34:56" can be easily described as right or wrong. And even if you are saying that you are happy it may be because cosmic rays activated speech-related neurons without your brain actually being in a happy state.
You can construct your theory of knowledge with a "you are always right about your experiences" axiom, but nothing forces you to, and you get a more confusing and contradictory view that way.
If you ask a two year old what color they see something as, and they answer wrong, I think that might constitute a pretty clear example of someone being mistaken about their experiences in some sense. (I think people who claim to not be hungry may sometimes be doing something similar to that.)
Generally speaking, people attempting to describe their own experiences could mean something very different than what I would mean by the words they say? I would imagine Andres is having some sort of experience which they are choosing to describe as "seeing 7D phenomenal objects", but I suspect I would choose different words if I had the same experience.
I don’t know … Seems like you can have conflicting experiences.
E.g. If a psychotic patient tells you they hear voices telling them that you are actually secretly a lizard person, but they know that’s absurd, and they just want the voices to go away… Both experiences are “real” phenomena, but there are no voices to hear and one part of the brain knows it. The same brain experiences voices, and experiences no voices at the same time.
This also brings to mind the split-brain experiments, that also seem to show conflicting experiences: One part of the brain can be factually incorrect about the nature of reality, and the other part of the brain can be seemingly wrong about what the first part believes and why.
A question then, is what we mean when saying people can’t be wrong about their experiences… Are we taking each experience individually, are we describing a meta-experience of having conflicting experiences, or are we describing the mental process of squaring the experiences…?
Though Serano apparently dances around the definition for "female" without ever saying exactly what "she" means by the term.
In any case, the point is more or less, as you suggested, that the "truth value" of someone claiming to be a member of particular categories -- "male", "female", "messiah", "famous historical generals" -- is contingent on what society deems to be the "necessary and sufficient conditions" for category membership. Individuals claiming membership without evidence of the required "membership dues" can rightly be deemed as mad as hatters.
Hmm… That’s seems like a bit of a non sequitur to what I meant. Definitions of terms, “society deems”, “category membership”… These seem like semantic issues. My point was about how to think about internally conflicting experiences in light of what Scott wrote. Maybe you meant this as a reply to someone else?
Maybe a "bit of a non sequitur", though I had been wondering whether the subtext to Scott's question wasn't the somewhat topical issue of transgenderism, and of the claims of transactivists.
But I don't think it's entirely a non sequitur, since your own "factually incorrect about the nature of reality" seems to hinge on the terms and framework we use to describe that "reality" (someone once argued, with some justification, that such terms should always be put in quotes).
My point was, sort of, that however we describe "reality" -- to ourselves or to others -- it seems always to be the case that we use words to describe it -- "semantic issues" from square one. Though I suppose something like, "I see the spinning dancer turning clockwise" as opposed to "the dancer IS spinning clockwise" is maybe closer to the dichotomy of what you and Scott are getting at:
But, in some cases, we might simply be using a non-standard definition, or be unclear on what are the accepted criteria for category membership, in the case of the sexes, but, in either case, I think transwomen claiming to be female have to be seen as "factually incorrect about 'reality'".
The subtext isn’t so “sub-“, as it is spelled out quite clearly and hyperlinked in the first paragraph: “A tangent of the jhana discussion: I asserted that people can’t be wrong about their own experience.” You should browse through that discussion if you haven’t already.
Reading subtext into things is a dangerous exercise, though. Even if I articulated my point so poorly that misunderstandings are unavoidable, the response still comes across as a bit of a Rorschach test…
Don't see any reference to transgenderism in either of the two articles. Couple of comments other than mine in this one.
But I still think it moot about "can't be wrong about their own experience", since it seems dependent on how one defines "experience" in the first place, and somewhat contingent on how one describes those experiences.
Though agree with you about "dangerous exercise" -- why I tend to phrase any reference to it in terms of questions & hypotheticals.
But your call of course on "not discussing gender here", though I might suggest Scott pick up the cudgels again on the topic. And that particularly since it seems naturally to follow from the current ones, and since he gave a more or less decent thrashing of it some 8 years ago -- even if some of what he said then seems to contradict what he's saying now -- and since it ties in with my point about categories:
Maybe this is already covered by the examples in the main post (but no example matches it 1:1 IMO): a very common experience (almost a trope) while on psychedelics is something like: "Wow, what is this weird feeling I'm feeling? I can't identify it. Wait, maybe it's happiness? Oh, it feels so good to be happy! Wait, it isn't happiness. Ah, now I recognize it: I'm hungry! I better go grab a snack.".
To me, it makes perfect sense to say that this person was hungry all along, but misclassified the hunger due to the altered mental state making it hard to identify emotions. So it makes sense to say that this person was wrong about being happy.
This is not like the example of someone not feeling that they are hungry. The psychonaut knows that they are feeling a feeling; they are just unable to classify it.
Another effect I think is similar is that it reliably takes me a few seconds to differentiate hot from cold.
I'm inclined to think the problem is less the slowed-down mental state, more that the feeling itself (in my experience) is unclear.
I have failed to identify hunger for about 45 minutes on a trip: I felt weak, short of breath and generally unwell, but even consciously looking for it, I found nothing that exactly felt like hunger. (That's probably not exactly the same as what happens when it takes a few seconds to identify which feeling it is, but my hypothesis is that the feelings themselves are weird and unusual, I think more so than the drunk mind being slow to name them.)
I'd make an analogy with myopia. I'll have a hard time reading without glasses because the qualia comes pre-blurred. It can be hard to classify feelings like you said, but I feel like it's a sensory/qualia distortion more than the altered consciousness/drunkenness. (Not sure if the distinction entirely makes sense!)
My problem with this analysis is that it doesn't do much to define what's meant by "honesty". For myself, I find it helpful to think of honesty as dependent on self-knowledge, which I distinguish from what I call "sincerity": the rendition of what one thinks or feels quite apart from any issue of self-knowledge. For example, if someone has never understood that anger can frequently be a manifestation of underlying anxiety, they may sincerely talk or act as if they were driven entirely by anger and subjectively report no anxiety, even if they were quite apparently motivated by anxiety from the perspective of someone who understood them better; but this would not be honest, because they lacked self-knowledge.
I also recoil at making honesty and lying a dichotomy. I only call it "lying" if it's an intended misrepresentation (by commission or omission), or possibly a really blatant departure from what most people are expected to know about themselves in our culture. The opposite of honesty is dishonesty; the opposite of sincerity is insincerity. Thus, the way I look at things, someone can be honest yet at the same time insincere (because what they say isn't what they believe or feel in the moment). The novel The Sympathizer is a great rendition of how this can be.
Of course not everyone draws these distinctions. But I do think honesty is much more difficult than just saying what you think, which is why sincerity is a very important concept. And I think that assuming people are lying if they misapprehend their own thoughts or feelings isn't a very useful way of looking at what a challenge it is to live honestly and authentically in our world.
As someone who studied and was briefly involved in criminal law, I cannot believe this post took more than 1 word: "yes".
One would not believe how many well-meaning and sincere witnesses will remember things that just did not exist or did not happen. This can be as simple as "the man in red pants came from the right and hit him"/"the man in white pants came from the left and hit him" (where only one man came and hit the victim, and there was no one in white pants until a few minutes later, and the court can see this on the video).
And that is without even touching on false confessions.
To show people how we fool ourselves, I tend to ask people to close their eyes and concentrate on picturing the front of their house. After a few seconds I ask if they can see it with all the details. Sometimes people are a bit insulted: of course! Then I ask them to enumerate one of the details that is repeated, e.g. how many shingles there are horizontally. This tends to break down the illusion that they really saw those details.
You don't have to reject the "experiencer fallacy" outright, it's enough to accept that the experiencer isn't a very consistent entity, but somewhat variable depending on circumstances. I like the "boardroom" metaphor for consciousness, where various modules/processess can bring up their issues for attention and participate in reflection, but no "board member" is there all the time. So, for example, you might be completely sincere about not being bothered by your father issues *at that moment*, but as soon as something reminds you of him, the "board" is reshuffled and suddenly you care a whole lot.
I thought this post was going to be about memory. We know people can be wrong about their memories (e.g. contradictory eyewitness testimonies, false memories implanted by others, or people being sure about where they were on 9/11 but then finding their contemporary diary entries that contradict it).
The post is mostly about people's subjective experiences in the immediate present: I am/am not hungry, I do/do not see a 7-dimensional object. But the question that prompted the post is about people reporting their experiences (of jhanas) in the past, so the unreliability of memory could come into play.
(not saying "and therefore jhanas aren't real", just saying "this is a factor to consider")
Phenomenology started out at pretty much exactly the position you're putting forward here: if we think of qualia as the apriori of experience, i.e. the condition for the possibility of experience, then our experience of qualia is direct and unmediated - there is nothing that can get 'between' us and the essential building blocks of perception. If there is a fact of experience, then that experience is a fact.
In this view, we're each 'authorities on our own experiences', and in experiencing ourselves experiencing something we become immanently aware of the given underlying structures of experience that we have always been taking for granted.
Now, the problem with this idea, imo, is that the fabled 'qualia' which constitute directly experienced objective facts seem to be a myth, and perception is mediated by cognition just as cognition is mediated by perception. Rather than "everyone always knows and is right about what they're experiencing", isn't every 'experience' both mediated by categorization (the dull pain I feel in my right arm, which appears in this localized form only because I have mapped my own body and the notion of pain, the citrus yellow which I can perceive in a certain way precisely because it has appeared with a certain stability in a certain context, and where my 'immediate' experience isn't so much that of a color, but includes, is bound up with, the various contexts within which I would expect to encounter that color - in other words all those neurons are firing as well) and something which requires further reflection, interpretation, for us to make sense of it even to ourselves?
We are always already mediated in our experience by our conceptual frame of the world, just as our conceptual frame is always already mediated by experiences, so there's no 'ontological ground' we can refer to when we're talking about an experience. In talking about it, we are, in a sense, reconstructing a process that has 'reflective depth' to it, and what it is and how we should make sense of it isn't by any means a trivial question.
I (also) wondered if the post was going to be about the political idea of "lived experience", which is often thought to be incontrovertible, e.g. people's lived experience of racism or sexism. Can someone be wrong about that, or about a particular instance of that? Maybe, if you think Bob was mean to you because of your race, but actually Bob is a jerk and is mean to everyone.
I'm surprised more people can't see that the obvious answer is 'yes'. Particularly, about being happy. Two quick examples: politics on twitter, people think they like it, they think they're having fun, but actually they're getting angry and outraged and addicted to those feelings, and mistakenly think it makes them happy when they are not. Another example is cocaine. Ask any user, and you'll find that the first few bumps always feel great, but towards the end of the night ask the user, and they mistakenly say they're having fun, but you can see they really aren't, and the next day people can more clearly see that the fun wore off after a few lines and it just became about hitting the need.
Children are often tired and hungry without realizing it, even when they're told/asked. This suggests there's a learning curve, and these kinds of things are often unequally mastered even into adulthood.
To drag this down to grim reality, I am answering "Yes" to "Can people be honestly wrong about their own experiences?" because at the moment I (and others) are dealing with a family member making claims about our shared childhoods.
These claims are wrong (some of them involve me, and when I say "That never happened", they come back with some rationalisation as to how they are right and I am wrong). They are in therapy, and I have the feeling that they are telling the therapist all this, and being believed, and so nothing to challenge their presentation of "the facts of my experiences" is happening, and they remain convinced of their mistaken memories or interpretations.
So yeah - it's perfectly possible for someone to give a report of their experiences, which they honestly believe is true and what really happened, even when what they claim ranges from the 'misinterpreted' to the 'literally impossible to have happened as you describe it'.
Sorry for dragging down a fun discussion of mental spaces, but I have no idea what to do or where to turn right now, especially as any challenging I do is further incorporated into the victimhood narrative this person has going on: "argument over claims and denial of same" becomes "heated discussion and yelling" becomes "you physically assaulted me!" so when they are telling this to a third party, the story goes "and Deiseach hit me when I tried to tell the truth about what happened back when we were kids". The third party is going to believe them, why not, they weren't there and family member doesn't come across as obviously delusional just upset and fearful, as they would be if they had been physically assaulted.
There seems to be some confusion about what "being wrong about your experience" means. It's well known that memory is malleable and fickle, for one thing, and even in the moment people can get very different impressions of the same event. "Your memory about an event is factually accurate" and "you honestly think that you experienced what you remember experiencing" are very different things, and my impression is that Scott says that you can't be wrong about the second one.
Can you have a factually incorrect memory about something that happened 1 second ago? Because for all practical purposes that would be the same as being wrong about what you are experiencing.
Of course you can. Say, you have bad eyesight and mistook a stranger from afar for your acquaintance. You're factually wrong, but your internal experience of thinking "I think I've seen Bob" is true. It's pretty much tautologically true for most intents and purposes, and Scott brings this up regarding subtle conceptual points.
I think you can, in the sense that even short-term memory is a necessary summary of your mental state. The full mental state (which in a sense is the experience) is gone forever, not stored anywhere. And as I think consciousness is basically the same thing as memory of what happened 1s ago, it all boils down to the difference between experience (the whole brain activity) and conscious experience (a summarized version of the brain activity, edited to be easy to remember and speak about). I think they are sometimes very different, and I suspect it's the case for jhana: maybe it's a hack for creating a conscious story (in fact, short-term memory) of pleasure, while the whole brain activity is very different from other, more addictive kinds...
I wrote on it back in the Jhana thread, so I will put it back as I think it's relevant:
If you admit that consciousness is just a summarized, coherent (or trying to be) and sequential story about your actual mental processes (which are less coherent and more parallel, because they're made of multiple actors collaborating/competing), coming after the actual decisions or experiences, then yes, I believe you can be wrong about your experiences. In fact, you are always wrong, in exactly the same sense that memories are not the experience itself (and false memories exist). I think it's mostly the conscious thread that is stored as long-term memories, and also the one that is communicated to others (because it exists exactly for that: it's a compressed serialization, which is exactly what you want for storage and low-bandwidth communication). Your conscious experience, like short-term memory (it's not clear if they are different), has been edited to remove incoherences and compressed, maybe to the point that it misrepresents the actual mental state before it. It's indeed an illusion, but not in a purely philosophical sense, as the mental processes running on the side, even before the conscious thread is built, really exist, can be observed, and maybe are even partly recorded (although this is not clear).
A good example is drugs preventing medium/long-term memories. Could they be used as sedation, as anesthesia? Would you accept using them (together with strong restraints) for surgery? Basically, the whole discussion regarding the different stages of complete anesthesia is very enlightening (and very disturbing for people trying to put a profound metaphysical importance on consciousness).
“Well, it’s possible that fundamentally all happiness is an illusion, but I’m definitely experiencing the normal type of illusory happiness right now, it’s pretty vivid and intense”, and they say “No, you’re wrong about that”, I still feel like they’re making some kind of weird type error. Can’t justify it, though.
-------
How would they know? The difference I find between "hunger", "happiness" or "seeing polar bear fur as white" and jhana/thoughtless consciousness/seeing in 7D is that the first ones are possible and lots of people experience these regularly while the others are either impossible (I don't think we can see in more than 3D as a matter of, like, how our eyes are constructed? and thoughtless consciousness?! really?) or extraordinary claims where "you're fooling yourself" seems the far more likely explanation.
Sorry - this may be too basic for our philosophers here but I thought maybe a stupid layman like myself might bring things down a level or two...
It's trivially false that all statements made by honest people about their experiences are true, if this includes both past and present experiences, because people sometimes contradict their previous statements, and it's impossible that both the original statement and the contradiction are true.
If it's limited to present experiences, this problem doesn't arise. The honest person who says, "I'm happy" is telling the truth, and if they later say, "I now realise I wasn't happy", they are then mistaken (perhaps by misremembering). However, this is now a very weak claim, which in particular doesn't help with reported experiences of jhana, unless the speaker actually is in jhana at the time.
Here's a silly, trivial example that might clarify a larger point.
Suppose you are estimating the probability that something will happen and you estimate there is a three in four chance of it happening, so you say "I subjectively think there is an 80% chance this will happen" because you have a brain fart and momentarily think 80% = 3/4.
Would we say you are
"accurately reporting your subjective experience of experiencing yourself believing the thing will happen at probability 80%, while you actually believe it will happen at probability 75%"
?
Maybe? But is "believing thing will happen at probability 80%" really an experience?
A better way to describe what is happening is that you are being wrong about your experience. Or, more accurately, in translating your experience to words you find the wrong words to describe it. In this case, you say 80 instead of 75. I think that works for the meditation example too. The woman was indeed experiencing something, but she picked out the wrong words to describe it "I'm not thinking about anything" VS "I'm thinking about not thinking about anything".
Now, believing that a sentence S corresponds to an experience e, is itself a belief. So granted had the woman said
"I believe that the sentence 'I am thinking about nothing' corresponds to my subjective reality" she would have accurately reported her subjective reality. But that is not what she said.
Instead she just said 'I am thinking about nothing', without the surrounding words, which makes her statement a false first-order report of her mental state and not a true second-order statement about her beliefs about her mental state.
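The 80%-vs-75% case can be made concrete with a toy sketch (entirely my own illustration; the "credence" value and the buggy translation step are hypothetical):

```python
# Toy model: the internal credence and the verbal report are separate values,
# connected by a translation step. A fault in that step makes the honest
# first-order report false while the second-order report stays true.

internal_credence = 3 / 4  # what the speaker actually holds: 0.75

def verbalize(credence):
    """A buggy number-to-words step: the speaker momentarily thinks 3/4 = 80%."""
    if credence == 3 / 4:
        return "80%"  # brain-fart mistranslation
    return f"{credence:.0%}"

# First-order report: a claim about the world-model itself. False, since the
# stated number doesn't match the held credence.
first_order = f"I think there is an {verbalize(internal_credence)} chance"

# Second-order report: a claim about what sentence the speaker believes
# matches their credence. True, since they really do believe that.
second_order = f"I believe the sentence '{first_order}' matches my credence"

print(first_order)   # the false first-order report
print(second_order)  # the true second-order report
```

The point of the sketch is just that "wrong about your experience" here lives entirely in the translation step: the held value never changes, only the words attached to it.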
What happened to the idea that beliefs should be anticipation controllers?
When a person A says they have mental state X, but you object that actually they are wrong, what kind of predictions each of you is making about the future?
Somatic complaints may be something like being wrong about your own internal experiences, since the basic hypothesis is that they are a function of low insight (regarding the connection between your psychological processes and your body).
But I guess it's more like a misfire regarding the underlying "cause" of the physical symptom, and wrongly describing that property of it.
He did, and mentions it elsewhere! Great pickup! Notice how he uses a related argument in LWW, even though it is a children's book. (Actually, the Narnian Chronicles are adult books written to be accessible to children.)
Hmm, that doesn't ring a bell from LWW, but it does remind me a little of "things [magic apples] always work according to their nature" from The Magician's Nephew.
All of this seems like "if a tree falls in a forest, does it make a sound?". There is an internal perception of hunger, there are some biological correlates, and there is nothing else. You can call the person (an honest person saying they're not hungry) correct or incorrect, it doesn't change anything.
The trouble is that all introspection is retrodiction.
Take the person who says "I'm not angry at my father", but clearly is. They might be having feelings of anger, accompanied by visual imagery of their father. But they are so invested in the narrative of "I'm not angry at my father" that every time someone asks "what's wrong?" they look back at the memory and come up with a post-hoc rationalization of what they were feeling--any narrative will work, so long as it isn't "I'm angry at my father."
This happens every time we report our inner state (to ourselves or to others). We have the experience, and then look backwards at it and say "what was I feeling just now?" If you watch this process during meditation you can twist yourself into all sorts of pretzels.
"This seems no worse than somebody on drugs scrawling “JOY = JUSTICE * LOVE” or something on a blackboard and believing that they’ve discovered the fundamental truth of the universe"
Well of course this is wrong, it's MERCY which is JUSTICE * LOVE 😁
Belief, itself, is a tool, not merely a measurement.
We are social and story-based creatures. Consider how we can love money -- not merely the power that money would bring us, but money itself -- even though we don't really understand what money is.
Engineers and scientists, beaten by experience into respecting falsifiability, apply a rigorous test for things they believe. But even they only do this reliably in their practical scientific domain. I know religious engineers who turn from skeptical thinkers to ingenuous ones when the subject changes from their trading algorithms to Jesus. They know what truth feels like when it's rooted in undeniable empirical reality; sometimes I marvel that they don't apply that lens to their religious belief.
But of course, even though I don't believe in god, I do believe in plenty of my own myths, like personal integrity, interpersonal loyalty, familial love. I don't apply any kind of falsifiability test to these beliefs; I believe them out of a mix of deeply rooted values and intentional belief. I **want** to believe in them, and in fact I cultivate my belief in them, steering myself back towards them when I get too infatuated with competing fixations like personal success or petty rivalries.
I've often thought it curious, for instance, that so few people who claim to believe in Jesus make a study of the gospels in their original language. And of course, most of the people in the world who claim to believe in Jesus have never even bothered to learn what Jesus's name was! (No one in his lifetime called him "gee-zus".) If I believed in Jesus the way that I believe in, say, Bayes' rule, or even Mexico City's being a phenomenal city to visit, I would definitely be over on Duolingo studying Aramaic :)... But that's not the sort of belief my colleagues have; theirs is more like my belief in familial love, where I certainly might read the occasional book about parenting, but I also think I more or less have all of the basic knowledge I need already to live in rich communion with that belief.
That is the lens I think we should apply to claims about jhana. There are some Christians who claim to believe in Jesus who would agree they do not believe in Jesus the way they believe in dental cavities; they might recognize that belief in Jesus is a tool they use so that life is more in line with how they want to live it. They might acknowledge that they could easily find the same truth in most other religions. But these same people are not lying when they say they believe in Jesus -- because our capacity for belief is precisely **for** this sort of contradictory mess! We are fucking great at believing things; belief is like a perfect plumber's tool, one that doesn't just hold stiff but can twist and turn itself to find any holes in a pipe and patch over them so the certainty and confidence and connection can flow.
Is there a point before forming such a belief where we are aware of a deliberate choice? In a few weirdos, yes, but for most people, noticing those shimmers of intention is something they're just not used to doing, and the same part of their brain that is so good at forming and using beliefs is also great at simultaneously erasing its tracks. (For those of us who have gotten good at noticing how frequently we lie to ourselves, it can be very destructive to relationships to start noticing how often others are doing the same!)
Of course, on some level, I know my belief in familial love is a brilliant hack performed by my DNA and brain chemistry, and not something objectively real. But honestly, I sort of suppress thinking about that; embracing the illusion is necessary for it to work, and I am so committed to the illusion that it feels somehow disgusting right now for me to even admit that I know, on some level, that it is an illusion. We are weak creatures designed to team up extraordinarily well; we report beliefs that happen to be rooted in interpersonal story, status, and faith more than in empiricism (beliefs that remote parts of our brains know are sorta bullshit, just in case) -- because **that's what beliefs are**. We're all Trump trying out the belief that it wasn't him on the Access Hollywood tape, until his confidantes nix that idea; we're just usually much more suave in our belief forming than he is, much better at hiding from ourselves, and others, how the belief sausages are made.
Someone who tells us their experience of jhana is better than the best sex imaginable has some part of their brain that could clarify that this is an aspirational belief, mixed in with some observations of sanguine fact that they consider plausibly close. But why would they? Pursuing that sort of clarification certainly isn't what got them to meditate for 1000 hours. Only the occasional weirdo is, like me, even mildly interested in trying to build walls between their aspirational beliefs and their observed sanguine understandings.
This was super interesting to read. It leads me to wonder if there's a useful distinction to be made between the stories we tell ourselves about the values/priorities we hold (the values/priorities of familial love, Christianity, financial success, etc) and the stories we tell ourselves about a thing we experienced.
Whether one experienced a jhana state or not feels to me like an experience question rather than a values/priorities question. So for me it lands more in the realm of questions like "have you had dreams in which you flew?" or "when you go for a long run, does it elevate your mood afterwards and for how long?" or "when you're on ketamine, do you experience a sense of dissolving personal boundaries?"
A person could say "it's really important to me to be someone who has had interesting mind states during meditation, and my identity is pretty wrapped up in the idea of performance, even in the realm of meditation, so..." Then we can speculate: did they choose to lie about the jhana experience? Did they believe they had one, nudged forward by their "need" to? Or did they actually have one, in the sense that you could bring ten really experienced meditation teachers into the room, including one who has worked with this person over time, and after interviewing the person about their experience they would confirm that it very likely was a jhana?
Jhana states are a thing that happens when people meditate -- it's not an everyday thing for everyone obviously, but it's not super unusual either. And so in that sense it seems pretty different from ideals or aspirational aims or values. Now I think it's still possible for all kinds of self-delusion in either case, but I don't see that self-delusion based on ideological commitments is essential to describing an experience one has had the way it seems to be in what you describe about familial love or desire for material wealth.
The distinction I'm making isn't a bright line in all cases for sure. It sounds to me though that you're arguing for claims about experience being just as ideologically driven as claims about ideology, and I don't see that.
Put another way, if someone says they're currently feeling hunger, that's a subjective judgment and it's probably impossible to be honestly mistaken. It might be that you're registering a different feeling as hunger (i.e. food won't fix it), but you're the best judge of your own current experience.
But you can absolutely be wrong when you say "yesterday I felt hunger" because your memory can be mistaken.
You can also be wrong when you say "I'm way hungrier today than I've ever been." You're comparing your current subjective impressions to your memories of past impressions.
And you can absolutely be wrong when you say "I was more hungry last Tuesday than I was six weeks ago." The amount of potentially mistaken memory you need to process to make a comparison like that virtually guarantees you're just making a story up about your past self that is agnostic to the actual truth.
I think I have a good counterexample. I like to read books in bed late at night, sometimes when I'm doing this I get tired and close my eyes for a moment, then drift off into a half dreaming state. In this state I still have the sensory experience of lying in bed, but I hallucinate something about reading a book. At this point if you asked me "Are you reading a book?" I'd say yes.
From this state I often fall completely asleep, but sometimes some mental process notices "Wait a minute, my eyes are closed. How can I be reading?" At this point I think about the book I'm reading and realize that I couldn't actually repeat back any of the sentences I've supposedly read. It's like someone stuck an electrode directly on the "feels like reading a book" neuron without feeding my brain any book content. In this moment I realize that I was not reading a book, and furthermore I was mistaking some kind of incomplete dream quale for a much richer experience I was definitely not having. If you had asked me "Can you tell me what the book you're reading is about?" I would have thought "Of course", then attempted to and failed utterly.
This is a little oblique to what you wrote, but reading it returned me to the example of whether say a depressed person can be "wrong about their own experience" and therefore we can all be wrong about our own experience.
Like your reading and then "reading" after falling asleep, there seems to me to be an important distinction between a person describing their felt experience versus a person characterizing their felt experience, though the line is a fine one.
"I am worthless and there's no point in my carrying on" is a common depression thought. The person having that thought isn't wrong when they say they're having that thought. The question is, are they wrong about the appraisals contained in the experience that engendered the thought?
If the depressed person said instead, "I am having the thought that I am worthless," then it seems much less arguable about whether they could be wrong about their own experience.
In the case of book reading or jhana experiencing, if a person said "I think I'm having the experience of 'reading' or 'a jhana state'" then that little bit of "I think" acknowledges the provisional nature of all stories about one's experience.
I find myself thinking along the lines of "okay, but so what?"
Looking back at jhanas, what those of us who are skeptics are really worried about is not whether some person has an unusual feeling of extreme happiness (I think most of us would agree there are some people who just seem to always be happy, even if we can't achieve it), but whether this feeling is transferable. If I do the things they say they did, will I also have this same feeling?
If their feeling makes sense, like being hungry when you haven't eaten in a while, that seems more transferable. If the person has not eaten in 10 hours and reports "not hungry", then I doubt that I will have the same experience. If they have a suggestion of how to achieve the same thing, for instance an appetite suppressant, then I can evaluate whether that's a good option for me. Appetite suppressants are a real thing, but even if we were uncertain of that we could try one out and see if we had the feeling - it's a low investment of time and effort to make a determination. If we still felt hungry at similar levels to before under similar circumstances, we could also determine that the suppressant didn't work (at least for us).
With the jhanas, one of the biggest issues is that the process is described as taking many hours per week over potentially years to achieve, with no guarantee of success. The fact that there are no guarantees and no standard timeframe means that any failure to achieve the same results would not count as a failure to replicate. Taken in aggregate, there is no way to falsify this belief - only positive cases are counted. This should lower our belief that we are able to achieve this state, and lower our belief that the state being reported is actually achievable through the methods prescribed. This is true regardless of the subjective belief of those saying they experienced it.
I got a weak version of it in three tries (try = ~hour-long meditation session)
My advice would be something like
1: it's easy to be practicing the wrong thing and waste time that way. I desperately wish that someone had told me back in college that the 100% alertness/awakeness part was way more important than the calm part and if you're being super calm and meditative and peaceful that's totally the wrong mindstate for hitting it. Maximum attention! There's other stuff like this, advice I wish I would have had to make it easier. Then again, I'm typical-minding super-hard right now and maybe the average person has a brain that makes the requisite calm much more difficult to attain than the requisite attention.
2: Value of information is pretty high on this one. Maybe you have natural talent! Something that 20% of people can get in under a week, 60% of people can get in two months, and 20% of people can get never (warning: numbers pulled directly from ass) can still be very worth trying!
3: If it's not working after a month, give up. I mean, it's not fake, but I really don't think "just anyone can do it", and much less do I think "anyone can do it in a timeframe that makes it worthwhile". Mindstates can be non-universal among humanity and still very real, like the ASMR response (another "no guarantee of success" mindstate). Just stick someone in an MRI and see if their brain is doing something weird, there's the falsifiability for you. This seems like a realistic standard, otherwise you'd have to say that ASMR is unfalsifiable. Also, even if you say "it's not falsifiable", the variant statement "The claimed state can be attained in a timeframe that makes it worthwhile" (more decision-relevant) is extremely falsifiable. You try it and if it's taking too long you go "fuck this" and quit, and if you got it without spending too long on it, you go "woo, it's true!"
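Taking the admittedly made-up numbers above at face value, the "try it, quit if it's taking too long" policy can be costed out with a back-of-envelope sketch. Everything here beyond the 20/60/20 split -- the uniform-arrival assumptions, the one-hour daily session, the quit-at-30-days rule -- is my own illustrative guess, not anything the comment specifies:

```python
# Value-of-information sketch for "try jhana practice, quit after a month".
# Made-up base rates from the comment: 20% succeed within a week, 60% within
# two months, 20% never. Further assumptions (mine, purely illustrative):
# one hour-long session per day, and success times uniform within each window.

p_fast, p_slow, p_never = 0.20, 0.60, 0.20

# Under a quit-at-30-days rule, the "fast" group always succeeds; assuming
# "slow" (<= 60 days) arrival times are uniform, half of them make the cutoff.
p_success = p_fast + p_slow * (30 / 60)

# Expected hours invested, at one hour per day:
hours_fast = p_fast * (7 / 2)               # uniform over a week -> mean 3.5 days
hours_slow_hit = p_slow * 0.5 * (30 / 2)    # succeed uniformly within the 30 days
hours_quit = (p_slow * 0.5 + p_never) * 30  # everyone else burns the full 30 hours
expected_hours = hours_fast + hours_slow_hit + hours_quit

print(f"P(success by day 30) = {p_success:.0%}")
print(f"Expected hours invested = {expected_hours:.1f}")
```

Under these toy assumptions, a month-long trial buys roughly a coin-flip chance of success for about twenty hours of practice, which is the sense in which the value of information is high.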
I'm confused about the worry around transferability of the jhana experience because it seems to suggest the whole reason a person would take up meditation is to have jhana experiences. I guess there are people out there (and on here) whose main motivation in meditating or being a Buddhist is to experience these transitory states. Certainly for some people being able to access these states repeatedly is a spur to keep practicing. But the benefits of meditation and Buddhism are by no means limited to or contingent on experiencing jhana states.
Not necessarily, but if we try something in the way described and have a different outcome, it should at least lower our expectation that it could be real.
A trivial example of honest reporting being incorrect is the medical phenomenon of referred pain: you have pain in one part of your body but perceive it as being in another. Examples include pain radiating down the left arm during a heart attack, or phantom pain from an organ that has been removed, such as after a cholecystectomy.
Meditation is good to bring up. The example of thoughts given above is an excellent one, and there are numerous other honest self-deceptions in the practice.
This comment may come across as low-effort sniping, but it's an honest question. Isn't this just an (obvious) argument about semantics? The question "Can people be honestly wrong about their own experiences?" seems to me less an argument about the state of the world, or even about philosophical truth, and more quibbling over the meaning of the words "wrong" and "experience". In particular, it is quibbling that can never be resolved satisfactorily because the English language is not that precise: if you want to resolve the confusion, you'll have to be more verbose, but there isn't a non-semantic issue at play.
Couldn't this be tested by crafting problems in 7D geometry / topology which would be intuitively obvious if you could directly experience 7D objects, but would take a lot of very hard maths to work out normally?
I've heard that Robert Langlands claimed to be able to visualise surfaces in 4D space, and I don't know if he could, but he did at least make major field-defining discoveries about them!
Charles Hinton (https://en.wikipedia.org/wiki/Charles_Howard_Hinton) invented a system for visualising the fourth dimension using a set of (three-dimensional) cubes painted in a rather complicated way, whose final version appeared in his 1904 book "The Fourth Dimension". I don't know how much success people have had with them.
I’m inclined to agree with the great British and Liverpudlian philosopher Sir Richard Starkey MBE who, when asked whether he believed in “Love at first sight”, said:
“Yes, I'm certain that it happens all the time”.
Not that he had experienced it but he believed other people’s experience of it.
Unless it’s clearly a lie or impossible, I’m inclined to believe other people’s descriptions of their qualia.
This was a central point of debate throughout the entirety of modern Western philosophy. Descartes, in Meditations on First Philosophy (the foundational text of modern philosophy, for those who don't know), more or less builds off this problem. His eventual question becomes how error in judgement is possible if humans are made in God's likeness. He concludes that humans, being several steps removed from God, are imperfect, and that errors in judgement occur when the faculty of reason is misused: when we assert something with a high degree of certainty without necessarily reflecting on how sure we actually are and how much we know.
Two things:
First, I challenge whether you're asking the right question here, Scott. As it stands I think we've overcomplicated it. Why can't we just assert that human perception is highly limited, and that moreover we tend to be somewhat rash in judgement--in other words, people tend to make judgements about things they don't understand, and thus you get all sorts of examples of people making claims about internal experience that aren't necessarily accurate or true.
Second, I think this is the reason that a large portion of the philosophical tradition has since moved away from this sort of subject-object metaphysics. I know we aren't big fans of postmodernism here, but one thing I find really interesting in the through line from Freud to postmodernism is the questioning of the unity of the subject. There is no unified, single, coherent subject; people are not immediately self-present, present to themselves. This is why psychological defense mechanisms like repression are possible.
Here is an edge case which is very difficult to reconcile with the thesis: the "Self-Torturer" paradox, described in Michael Huemer's excellent book "Paradox Lost":
A person who starts in a state of no pain is repeatedly given the option to increase his torture level by an undetectable increment, in exchange for $10,000. Each time, the difference in pain is undetectable, so it seems rational to accept. However, the end result is a life of agony that seems not worth the financial reward.
Huemer claims that the only way out of this is to recognize that there can be introspectively undetectable differences in subjective experience.
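A toy calculation makes the structure of the paradox explicit; the threshold, increment, and step-count values below are invented purely for illustration:

```python
# Illustrative sketch of the Self-Torturer paradox (all numbers made up).
# Each step adds an amount of pain below the detection threshold, yet the
# total after many steps is far above it.

DETECTION_THRESHOLD = 1.0   # smallest pain difference a person can notice
STEP_PAIN = 0.01            # pain added per step (undetectable: < threshold)
PAYMENT = 10_000            # dollars received per step

pain, money = 0.0, 0
for step in range(1000):
    # Locally, each trade looks free: the pain increment is undetectable.
    assert STEP_PAIN < DETECTION_THRESHOLD
    pain += STEP_PAIN
    money += PAYMENT

print(f"total pain: {pain:.0f} (detection threshold {DETECTION_THRESHOLD})")
print(f"total money: ${money:,}")
```

Every marginal trade passes the "undetectable difference" test, yet the accumulated pain ends up an order of magnitude above the detection threshold -- which is exactly the gap Huemer's introspectively-undetectable-differences move is meant to explain.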
This is the paradox of the beard. One hair does not make a beard, if N hairs don't then N+1 don't, therefore there is no such thing as a beard.
Our senses categorise everything, even when the thing itself is continuous. We suddenly notice that we are hungry, although the physical state of blood sugar etc. has been changing continuously. We suddenly notice that a friend is showing signs of age, although the seconds have been continuously ticking by. Indiscernible differences add up to discernible differences.
The Self-Torturer Paradox is related to the Beard Paradox, but not quite the same. An argument could exist that resolves the Beard Paradox but not the Self Torturer, because the latter has the added feature of introspection.
BTW the book Paradox Lost also contains a fascinating discussion of the Beard Paradox (under the name Sorites Paradox).
Can you be wrong about your own subjective experience? There are the obvious ways: lying or misremembering. Other than that...
For one thing, it depends on what you call subjective experience. Is it only the things you are consciously aware to be experiencing? Does it include all the stuff that you would experience if only you paid attention? This is about word definitions, not too interesting.
But I think there is also room for being wrong about your own subjective experience that is not about word definitions. There's also how you frame and end up translating that experience into a description. This can be affected by drugs, sleep, elephants in the brain, priming, the models you have...
When a drugged person reports an experience and describes it in a way that makes no logical sense, they just can't be right. Sure, as a listener you can always patch it: when someone says "I am tasting love and it tastes like the moon", or "I am not alive", you can take it to mean "I feel compelled to describe my experience by the nonsensical description [...]". Then sure, if she's not lying or misremembering, I guess she can't be wrong. But IMO that's just wrong; you are going beyond charitable and changing the meaning of what was meant to be said when you add the extra meta layer. If you have a déjà vu and report it as "I've lived this moment before", without conscious awareness that it's an illusion, you are plain wrong.
Maybe you can't be wrong if you don't add this extra meta layer between the raw experience and the description. That sounds like what meditation people kinda want to achieve, right? Then you can't be wrong in this way but only because you also can't make any claims at all, there's nothing to be wrong about.
It seems to me that all these examples assume immediate reporting of the experience. But often that is not what happens. Rather, people are reporting the experience at a later time, and what they are really reporting is their memory of the experience. When the memory is created, or recreated, there is ample opportunity to add a story to the experience. One might even argue it is nigh impossible not to add a story. The story is our, or often someone else's, interpretation of the experience. So I think the more important question is whether it is possible to alter an experience with an interpretation and then remember that as the actual experience, and then later to report it as such. And I think science has shown the answer is a resounding "yes."
On further reflection, it seems to me that this post could be interpreted as a wholesale rejection of the rationalist paradigm. If we are supposed to believe that people cannot be wrong about their subjective experiences, and if we assume that for the most part people are honestly reporting their experiences, then what are we to think of religious experiences, for example? How can we call them delusions, if people honestly think they happened? Also, googling around, here is an interesting article:
I think there was a study once where people were shown faked photos of their father taking them for a ride in a hot air balloon or something, and many of them could recall the experience, including sometimes things like how they felt scared or happy at the time - despite the researchers checking beforehand that these people had definitely never been on a balloon ride.
So it seems that people can definitely be wrong about their past feelings and experiences, as well as more mundane things like "the car in the accident was definitely red" - and that's before we get to "repressed" memories recovered through therapy.
My priors are very high on "yes, people can be wrong about their own experiences".
I would put it differently. I don’t think people are wrong about the fundamental experiences, but they’re open to any narrative that you might want to attach to them.
Perhaps they should show pictures of their father throwing them off a cliff and see what happens then.
I don't think it's necessarily impossible that our brains could visualize things in 7 dimensions. Consider a thought experiment where you have someone participate in a VR simulation, but the world you simulate is 7D, and you transmit sensory information from different directions in the 7D world into distinct input channels that connect directly to the brain. I think it's plausible that someone immersed in this simulation for years would be able to adapt to it quite well. It's also possible that they wouldn't, if the 3D nature of our world is hardcoded in some way into our genes controlling perception. But it's not obvious to me that this is the case.
The Fields medal winning mathematician Bill Thurston apparently claimed to be able to visualize objects in 4 dimensions and that this helped him come up with difficult theorems that turned out to be true. The full anecdote is here-- https://qr.ae/pvEXSf . This seems like the best evidence to me that this is possible in principle.
Meanwhile, under normal circumstances, we can *barely* be said to "see in 3D"; at the least, we perceive one dimension quite unlike the other two:
- light geometry is assumed to be a 1D beam (the floating oasis mirages over hot desert air result from that assumption being violated)
- those beams are projected as a 2D image onto the retina (see also: the screen you are reading this on, and might visualize "3D" models on)
- depth perception comes from comparing those 2D images (and not only from that!), and it's not as if we have eyes on our index fingers; they're pretty fixed and pretty close together!
- compare with the "4th", non-spatial dimension of time, which seems to have an informational content much closer in intensity to the 2D retinal images than to the stereopsis that adds the 3rd one.
(Of course all 4 of them interact with each other to form the full visual perception.)
Scott, a request. People are dropping all kinds of interesting links to philosophy papers in the comments. Would it be possible to get a post with all the links in one easily accessible place?
The mind is a black box, and even from the inside it's totally dark. It's very easy to couple words and concepts with your internal experience in ways that do not hold up, that do not match when probed by experts on the outside. I love this lesswrong article on thinking you're great at emotions when really you're totally disconnected ( https://www.lesswrong.com/s/g72vrjJSJSZnqBrKx/p/qmXqHKpgRfg83Nif9 ). I have friends like this, who think they have a good grasp on their emotions but are clearly actually suppressing them. I've watched one of them run around in an anxious frenzy when having many unexpected guests over, and when I asked whether he was anxious, he told me no, even though he was behaving like it and physically trembling. I have another friend who claims to be very emotionally minded, more connected with her emotions than with her thoughts, but when I ask her to describe her emotions during some past event, she immediately starts conceptualizing instead of describing her emotions. Maybe she's just terrible at describing them, who knows, but from the outside it looks like she's way more connected to her thoughts than to her emotions.
Well, first of all, we know that people can have false memories, so it's absolutely the case that people can be honestly wrong about internal experiences that happened in the past. If somebody said they met aliens 10 years ago and had a joyride in a UFO, there are at least 4 possibilities:
1. They actually met aliens.
2. They lied about meeting aliens.
3. 10 years ago, they hallucinated the experience of meeting aliens, and they faithfully reported their experiences of meeting aliens.
4. This was a false memory and they did not have the experience of meeting aliens.
To say that people are always accurate about their subjective experiences unless they're lying, you have to say that every apparent account of #4 is either #2 or #3, which is just implausible to me given how frequently people have false and easily primed memories (see e.g. court witnesses).
Now I assume that your ninja trick in the above essay will look something like "people correctly reported their experiences of remembering meeting aliens, even if they were mistaken about their remembered experiences of meeting aliens." But to me I think this definition is not the most intuitive one, or the best way to carve reality. *And it actually matters.* If there's an anesthetic that removes memories of pain, that'd be valuable, but nowhere near as valuable as an anesthetic that removes the ongoing experiences of pain!
___
POINT 2
More central to the debate, I do think perhaps there's not a "truth of the matter" to our discussions here, but more of how we choose to interpret things.
Your interpretation of the time question goes:
>If you properly differentiate all of these, you can say things like “people are accurately reporting their subjective experience of internal clock speed, while being wrong that their internal clock is actually slowed down relative to wall clock speed”.
Whereas my interpretation of it is
> your subjective experience of your subjective experience of time is slowed down, but your actual subjective experience of time is the same as before (or even sped up).
There might not be a truth of the matter here, just a difference in framing.
> (though this study suggests a completely different thing is going on; people have normal speed, but retroactively remember things as lasting longer)
Incidentally, this is my best guess for how (not always, but often) the subjective experience of the subjective experience of time can seem much faster than what *I* call "subjective experience of time" (i.e., clock speed).
Basically, here's the chain of reasoning:
1. You don't have "true" instantaneous experiences because humans are implemented on wetware with discrete time jumps.
2. So all instantaneous experiences are a bit of an illusion anyway, and rely at some level or another on memories of subjectively-present, objectively-past events.
3. More so than most, the experience of *time* necessarily relies on memory.
4. Drugs that make time seem to pass slowly (or high subjective clock speeds) often work by making you forget recently past events. So time seems to go by slowly because (compared to baseline), 30s ago in objective time feels subjectively "murkier" and further away. This explains why when stoned, music feeeeellllls like it's going reeeeeaaaallly slooooowly.
5. There might be a countervailing effect where very-near-past events (say 1s ago) feel more crisp and you remember more things (this intuitively seems like a plausible model of adrenaline). In that case, you'd have nearly the apparently opposite effect that still results in altering your subjective experience of your subjective experience of time, without actually changing clock speeds.
5a. I think it's much more plausible that there are drugs that massively slow down clock speeds than that there are drugs that massively speed up actual clock speeds. So I'm much less suspicious in the other direction. Though *small* speedups from amphetamines or w/e don't seem insane to me.
To some degree, this model is empirically testable. If you have friends who take drugs that make time appear to go slower, you can do a single-blinded study like this:
1. Ask them to watch something where time is tracked very objectively with relatively short jumps (e.g. the second hand of an analog clock).
2. Ask them to take drugs, and then report what their experience of time is like.
3. While drugged, ask them to now look at the clock again, and in "real time" report whether the clock hand appears to be moving more slowly or quickly than before.
4. If my model is correct, they're more likely to report that the clock hand is moving either the same or more quickly than before, rather than slowly.
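If someone actually ran step 3 over many trials, the resulting tallies could be checked against chance with a simple one-sided binomial test. This is a hypothetical analysis sketch: the response data below are invented, and the uniform 1/3 guessing baseline is my own simplifying assumption, not part of the protocol above:

```python
# Hypothetical analysis for the single-blinded clock experiment:
# each drugged observer reports, in real time, whether the second hand
# looks "slower", "same", or "faster" than baseline. The model predicts
# that "slower" reports should NOT dominate. Data invented for illustration.

from math import comb

reports = ["same", "faster", "same", "slower", "faster",
           "same", "faster", "same", "same", "faster"]

n_slower = reports.count("slower")
n = len(reports)

# One-sided binomial tail: if observers guessed uniformly among the three
# options, P(slower) = 1/3; how surprising is seeing this few "slower" reports?
p_value = sum(comb(n, k) * (1/3)**k * (2/3)**(n - k)
              for k in range(n_slower + 1))
print(f"{n_slower}/{n} 'slower' reports; P(<= {n_slower} | chance) = {p_value:.3f}")
```

A small tail probability here would mean "slower" reports are rarer than chance guessing predicts, consistent with the model's claim that real-time clock judgments don't slow down even when time retrospectively feels stretched.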
___
POINT 3
While at some level there's no "truth of the matter" to our debate, and this is mostly a semantics question or one of preferred framings, I do think (unless we're very careful) what we choose to emphasize can have substantial real world implications in the future.
Consider the question of subjective experience of time. If we live in a future with digital minds, what I call "subjective experience of time" or "clock speed" matters a lot. A life of 100 subjective years matters roughly as much as 100 calendar years, even if it's only instantiated in one calendar year. In contrast, I care much less about what you call "subjective experience of internal clock speed."
Getting this mixed up is pretty bad on any calculus that cares about internal experiences. Since (by my natural ontology) we want people to have actual subjectively rich and valuable lives, not just to falsely believe they do. On the other end, being tortured for subjective millennia is (in my opinion) much much worse than being tortured for one internal clock second but thinking it lasted millenia, or having a false memory inserted of long-lasting torture.
I don't expect this question to be *very* important before we get digital minds (because we probably don't have massive OOM differences of speeds in biological minds), but I think it matters a bit for animal welfare. There's some earlier work on internal clock speeds variance across species by Rethink Priorities (disclaimer: I work for Rethink Priorities. I did not work on that report).
I find differences in experiences of time most intellectually interesting, but of course, differences in other "honest" reports of subjective experience vs actual morally relevant subjective experiences matter too. We'd much rather people not suffer than just think they're not suffering, for example.
I'm confused why you don't talk more about memory and fallibility thereof. Unless you're narrating your current experience (and maybe even then), you're reporting a memory of an event and I think it's broadly agreed that people can be honestly wrong about memories.
I'm a little unclear on the boundaries of the term "qualia", and the discussions above seem to address questions that are at least slightly different, although they examine them in depth.
This term seems to bundle raw sensory data together with some amount of inductive post-processing, provided that the processing isn't conscious or deliberate. But that means that post-processing errors don't invalidate a "quale", provided they are unconscious? So stating that a report of "qualia" is inherently correct as far as it goes presumably means it doesn't go very far? What happens if some kind of careful self-review isolates and identifies a previously-unconscious leap as an error, and brings it into consciousness? Did the quale retroactively change, or is it still valid as originally reported?
If you sincerely perceive a silent beckoning figure at night out of the corner of your eye, but everyone (including you) concludes it was a random movement of curtains in the moonlight which you unconsciously converted into a familiar shape, is the correct statement at the qualia level that yes, you saw a ghost, even if that only entered your head for a moment? What if someone took DMT and formed the belief they were looking at a 7D object, but after examining the memory in retrospect they conclude on their own that it was actually an invalid analysis applied to an ordinary two-dimensional visual hallucination? At the qualia level, did they in fact see a 7D object? This example sounds like the example above of "people retroactively remembering things as lasting longer", and I'm not sure which part of that is a quale, either. But if "qualia" explicitly include untested assumptions about raw sensory input (provided that those assumptions aren't made consciously), how relevant would "qualia" be?
I apparently don't get hungry. I use the word "apparently" because this is the most likely explanation I can find when comparing personal experience with reports of the experience of others. Personal experience would presumably be categorized as qualia here, and reports would be reports of qualia, but there's an objective truth here I would like to identify, and the weakest part of the argument rests on this "qualia". I therefore wouldn't say that "I don't get hungry" is necessarily true; it's just a likely interpretation of the information I have available.
To me, Qualia is nothing but the physical embodied experience of whatever it is: fear, amusement, anxiety, exhilaration, orgasm, all The infinite varieties of pain and pleasure that our bodies are capable of sensing.
All these words are our best attempt to communicate Qualia to someone else. As soon as words get involved, inductive reasoning at some level is a given. I don’t see a way around that.
It is entirely possible different people would choose different words to describe the same physical sensation, not to even get into the listening side of that equation because let’s face it words are only as good as what others hear. It’s also possible that some people listen to their bodies better than others.
For example: Something makes a person uneasy. They have a resistance to feeling uneasy, probably because of their toilet training. Here’s where inductive reasoning comes in;
I will construct a mental framework to contain my feelings of uneasiness, and to compensate for them, because I would rather not listen to them. So I become overly assertive, grandiose even, and all kinds of other behaviors that are constructed to deny my feelings of uneasiness, or on the flipside, my feelings of uneasiness lead me to build a mental construction where I am inferior, and other people notice that and are out to get me or 1 million other constructions that one can use to intervene with the feeling itself.

I think that’s why a big part of the Buddhist philosophy is listening to one’s body and not jumping out to name something too quickly. (I think it’s worth noting that those philosophies arose at a time when human beings were first negotiating this change, and were a little closer to the original state. As the world has become increasingly more complex in terms of abstract thinking and constructions of word and metaphor, there are several more layers to cut through.)
I am sure I am not alone when I say I have had experiences that I interpreted one way in the moment, and later came to realize that something else was going on entirely, but that did not change the Qualia in my current use of the word. what changed was the significance of it.
I remember when I was in my 20s and I was in therapy for the first time that the therapist would always ask “how did that make you feel?“ And I couldn’t say anything because I wasn’t paying close attention to my qualia. I would say objectively now looking back, that I was extremely neurotic at that age. And that, given my circumstances, it was entirely reasonable to be so. 
In short, I think Qualia in human beings is pretty similar, across-the-board, and pretty universal. Physically, we are really not very different from one another, but the interpretation and the recognition of those things is an open season for distortion, misunderstanding, and confusion.
 all capitalizations of qualia courtesy of Siri. 
Identifying qualia with "the physical embodied experience" of something doesn't really resolve what I was wondering about.
Consider the bearded-man/woman-under-a-tree illusion. The raw sensory data starts as a firing pattern of retinal neurons, but then gets processed in layers into abstractions: relative light and dark, connected edges, component shapes, and recognizable images. Every level operates probabilistically on evidence to draw "conclusions", which become the input to the next stage of processing. If the viewer is not paying explicit attention to the image, this process continues unconsciously up to an assignment of abstract concepts (like "simple work of art", perhaps associated with a speculated purpose or artistic intent), and it may or may not be given enough priority by the unconscious mind to demand conscious attention. If attention isn't demanded, the event might still be saved in episodic memory, and if recalled later, the memory might be "there was a drawing of a bearded man" or "there was a lady under a tree", but almost certainly not "there was an ambiguous optical illusion" (because the viewer never realized it wasn't a simple drawing).
On the other hand, if the image attracted the viewer's conscious attention, then after the further processing of a conscious examination it might be added to episodic memory tagged as "a deliberately ambiguous drawing".
I might have the processing layers wrong, but there will be processing layers of some kind, and that leads to my question:
At what point on that pipeline do we have a "quale"?
It sounds like in the inattentive case we have a quale at the inaccurate partial-interpretation of the image, but in the attentive case the quale includes the additional reasoning done after reacting to the surprising nature of it. But that seems to contradict the definition of the term "qualia".
If the reasoning is excluded from the quale in the latter case, then is it a quale before it's a discrete memorable event?
I would expect the human brain to have more parallelism than this, and that there would be unconscious processing resolving it in multiple ways for multiple purposes, but that just makes "qualia" harder to grasp.
Also, I used the phrase "inductive post-processing" rather than "inductive reasoning" intentionally, because I wanted to refer explicitly to the more general principle of aggregating evidence to produce a probable conclusion, rather than a high-level cognitive process that might require conscious intent. The former can be done by small numbers of neurons, and would presumably be involved repeatedly in every layer of processing above the signalling of individual sensory nerves.
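To make the layered-pipeline picture concrete, here is a toy caricature of it in code. The stage names, functions, and probabilities are purely illustrative inventions, not a claim about actual neural processing:

```python
# Each stage aggregates evidence from the previous one and commits to a
# "conclusion" that becomes the next stage's input.
def detect_edges(retina):
    # raw firing pattern -> edge map (stand-in for early visual processing)
    return {"edges": sorted(retina)}

def propose_shapes(edge_map):
    # edges -> candidate interpretations with (made-up) probabilities
    return {"bearded man": 0.55, "lady under a tree": 0.45}

def assign_concept(shape_probs):
    # Pick the most probable reading; the near-tie is discarded here,
    # which is why the inattentive viewer never experiences "ambiguity".
    label = max(shape_probs, key=shape_probs.get)
    return label, shape_probs[label]

label, p = assign_concept(propose_shapes(detect_edges([3, 1, 2])))
print(label, p)
```

The question in the thread is, in effect, which of these intermediate return values (if any) counts as the quale.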
Essentially, I think of qualia as everything to the left side of the continuum between inchoate and loquacious.
The primitive form of the word is an utterance of sound. What then is the impulse that motivates an utterance of sound? That for me is the realm of Qualia. Clearly, a grunt, or a scream is the origin of language. After that, it is one abstraction built on another leading away from the original impulse until it is lost in the mists of time. Kind of like the way a solid object dissolves into quantum fluctuations, as its essential nature is more closely observed.
This would change the "fundamental experience" under consideration from receiving perceptions to performing voluntary actions. The definitions of qualia I've found so far don't extend the term that way: all the examples of "experience" have involved sensory information or the cognitive equivalent. Considering a discrete voluntary action would obviously exclude some of the perceptual processing/cognition-boundary issues, but it might open an analogous can of worms relating to levels of conscious control broken down into bundles of simpler directives automated by training or reflex.
As for having no visual priors: that wouldn't happen to an adult, and might not happen to humans at all; the visual system might bootstrap from evolved primitives.
Hearing something would begin with nerve impulses from the ear (the cochlea presorting information by frequency), and at a higher level they are organized into a presumed location of the source based on combining the two ears' information as well as timbre, which varies front to back due to the external ear, but can't be integrated without referencing memory of similar sounds, as well as the current model of the immediate environment (something is more likely to be sorted and tagged as an expected element of the environment-model than an unfamiliar one). All of this happens before becoming accessible to conscious thought, although it seems like a deliberate focus of attention can relabel choices made by later stages of the process. I would argue that concept-forming comes in quickly with sound as well, although I don't know as much about how primitives are assembled into higher-level abstractions in the processing stack.
I am coming back at this because I think it's an important question and I clearly missed it before. I have been pondering:
I feel like the question you're asking is: at what point is the "qualia" identified as a "qualia of the type [X]" and then becomes available for further classification? And my immediate thought about that is: does a Qualia not exist until it's named? I don't know that this brings us any closer to the issue you are trying to clarify. Let me know.
This bears directly on my original question. Superficially, whatever a quale is, it wouldn't have to be named, since experiences can presumably be legitimate, discrete experiences without needing to be named at all, even if they are part of fully conscious, deliberate thought. More to the point, I'm presupposing that a majority of high-level thought operating on abstract entities occurs outside of conscious awareness. By the time conscious consideration is applied, it's applied to a constructed mental model populated by various levels of abstraction (I haven't studied philosophy, but it's possible that is what is meant by "an ontology"). In fact, consciously perceiving something as a raw experience when it's actually a pre-packaged abstraction assembled from a few fragments of sensory data, associative memory, and the current model of the world-state seems to be unavoidable. One could call that packaging process "naming", and then that would always be happening while a person is awake/aware. But since information about the construction of the "pre-packaged abstraction" is available (it can be partially disassembled in realtime by applying conscious attention), and further information can be derived from a longer attentive examination, the questions of "when is it a quale?", and "what part is the quale?", become confusing.
Note that the issue mostly goes away if you're not asking "could a person be honestly wrong about a personal experience", which manages to highlight the question of the processing stack: which part would have to be in conflict with which other part in order for a person to be "wrong"? And which in order to be "lying".
> In fact, consciously perceiving something as a raw experience when it's actually a pre-packaged abstraction assembled from a few fragments of sensory data, associative memory, and the current model of the world-state seems to be unavoidable.
Imagine you are sitting quietly in a peaceful place lost in thought or reverie. Stay there a while. And now imagine a sudden loud noise occurs right behind you.
Compare this to what might be the experience if this had happened to you more than once, and you had a solid notion of what causes the noise.
The first time anything like this ever happens (perhaps to a baby?) I would guess an expansive update occurs at every level of sensory processing to incorporate the novel event type. This is likely to be very disorienting and unpleasant to the baby.
Later, any "sudden loud noise" will be subject to various processing/sorting steps that create the abstract event that impinges on consciousness, and the unexpectedness of the event will be restricted to the novel aspects (e.g. it happened in a context modeled as "peaceful"). Perhaps a substantial update, but much smaller than the baby's. If the modeled environment includes the memory of a parked car, and car alarms are common, then tagging the noise as "car alarm" might be at a low enough level that there is no conscious perception of undifferentiated "noise" even for a moment, just "Car alarm!", maybe with context modifiers.
If it happens often enough, a person might get to the point where it's not even noticed because it never exceeds the threshold of drawing conscious attention. That person might or might not be able to recall an instance as a discrete event. If not, was it a quale? What about the background hiss of air movements that might have been unnoticeable even if the person was listening for them, but which only caused slight vibrations of the eardrum and triggered some nerves in the cochlea? What about the continuous distant rush of a highway that you hear if you listen but not if you don't?
This reintroduces my original question about where the boundary is between "a process yielding a quale" and "the process of thinking about qualia". It also bears on the article topic: Can people be sincerely wrong (as opposed to lying) about their own experiences as defined as "making sincere but incorrect propositions about one's own qualia"?
A person can obviously "lie" by deliberately making statements inconsistent with qualia -- we don't need to know where the boundary is for that, we only need to assert that there is one. Likewise, a person can make "logical errors" by making mistakes in higher-level reasoning operating entirely above the qualia level (thus the person is "wrong", but their assessment of their qualia was correct).
Inconsistencies can also obviously be incorporated into the qualia themselves at the raw data level or the simple inductive processing levels directly above that. These would be nerve misfires or "sensory illusions", respectively. Sensory processing being probabilistic and inductive, it has to yield invalid results at least some of the time.
But in order to be "wrong about experience" as defined by "qualia", a person must make a genuine error exactly at the processing step that crosses the boundary between the low level processing that constructs the qualia and the higher cognitive processes that constitute "thinking about qualia". If no one agrees on where that boundary is with enough precision to identify a boundary-crossing step, then no, a person cannot "be wrong about personal experience" in terms of qualia because the definition of "quale" is insufficient to determine whether that happened.
That wouldn't so much be an assertion about the human mind as an observation about the language expressing the assertion. It would also mean that disagreement about the assertion will be very hard to disentangle from conflicting unstated assumptions about the words used.
I feel like the major issue with statements about meditative experiences is about the meanings of words. If you take Wittgenstein's idea that words acquire their meaning through use, it becomes clear that the more private and subjective an experience is, the harder it will be for us all to arrive at the same meaning. Think of that guy you wrote about who had no sense of smell but didn't realize it for years - was he wrong about saying that something 'stinks'? I'd say he was just wrong about what 'stinks' means. It's easy to be wrong about what words like 'concentration' or 'piti' mean. You think 'oh, piti isn't all that' for ages, and then one day you hit a jhana properly, feel like you're thrumming like an electrical pylon, and say 'oh, I was wrong, THAT'S what piti means'.
>If you take Wittgenstein's idea that words acquire their meaning through use, it becomes clear that the more private and subjective an experience is, the harder it will be for us all to arrive at the same meaning.
I very much agree with this. I have a strong belief that the meaning of a word is 100% a function of the listening component and nothing to do with the intention.
I think at least some of this is solved by the observation that the experience and the report/thought/interpretation are not the same; you can be wrong about the latter but not the former
When you report that you "feel happy", you are aggregating a lot of complicated sense data (e.g. heart rate, temperature, dopamine, whatever) into a story that makes sense to you. But the story is not the sense data
Similarly, if somebody says they experienced a 7D object while on DMT, likely this is just the story that fits them best; it was probably not "actually" a 7D object, but the sensation they experienced was whatever it was, independent of the story.
Can someone explain the John Edwards reference? I first assumed it to be one of those Berenstein Bears things where people tend to remember the wrong spelling better than the real spelling, but when I Googled "John Edwards" the John Edwards I was thinking of was spelled "John Edwards".
Ludwig Wittgenstein addresses the phenomenology of being wrong in his Philosophical Investigations as part of a wider discussion of the building blocks of language.
He provides the example of a person teaching another a series of numbers that follow a production function. Providing several examples of wrongness (all wrong, a mistake part way through, understanding but failing to catch an edge case, getting it correct accidentally) W. considers whether the person can be said to have learned that expansion correctly in each case.
I will not claim to fully understand what his conclusion is since I am still in the book but it seems to be leading to his larger point that all language is antisystematic and idiosyncratic. As he so eloquently puts it, to imagine a language is to imagine a way of life.
"This woman wasn’t lying - if she had been, she would have just continued the charade."
She might not have continued the charade because she was embarrassed about being caught out in the charade. Or because she was embarrassed by the realization that she had been fooling herself about her alleged week of no thoughts.
There is no real dispute that people will exaggerate pain for monetary gain. And people will exaggerate both negative and positive experiences as attention seeking behavior.
There are many many times I've read a medical report that says: the patient is a poor historian.
The question is always about the line between being skeptical and being cynical.
I think there’s a motte and bailey here. If someone says they’re “not angry about their father” and then rants at you for an hour, then yes, they are angry about their dad.
Maybe they truly believe they’re not, I’m very willing to believe that. But they are angry. Being triggered into an aggressive rant when someone mentions X is, for basically everyone, the definition of being angry about X. Calling it “stress-related behavior“ is equivocating. This is true also for the person making the statement; the phrase “I’m not angry about X” virtually always means “I am not in a state where X will trigger stress-related behavior.”
You could say, as many people here have, that there are different levels to your subjective experience, and that the top level is never wrong. Provided you define that top level narrowly enough, then there’s no way to argue it. But that’s the bailey, it’s taking the framing to such an extreme that no one could possibly disagree with you.
The people who are saying that you can be wrong about your own experiences aren’t talking about the top level, they’re talking about the deeper states underneath. They’re saying that people don’t always have a very good understanding of their emotional states, and there are countless examples of this behavior.
>I think there’s a motte and bailey here. If someone says they’re “not angry about their father” and then rants at you for an hour, then yes, they are angry about their dad.
Probably been said before, but I don't think there's much here beyond "it depends whether the person making the claim is claiming something falsifiable, or purely reporting an inner experience". There really is no middle ground, logically speaking.
I feel that if someone would say "being on DMT feels like how I would imagine seeing in 7 dimensions would be like" nobody would contest that claim. There's really little to contest. What a person experiences, unless they are intentionally lying, is tautologically true, and this statement should really be trivial by now.
But if someone says they literally see in 7 dimensions when on DMT, you'd call them a liar, because there's an objective, falsifiable claim that they're making, and you can probably test that (given a definition of seeing in 7 dimensions) and prove that they're wrong.
To tie that to the tangent of the Jhana talk - the problem with a lot of debatable claim-groups, like spoonies, pseudo-DIDs, Jhana enthusiasts, and so on, is that they are sort of doing neither.
Take the DMT example from above (when someone says they "see in 7 dimensions when on DMT"). Imagine the conversation you'd have if a friend of yours made that claim. It would probably start out with you trying to figure out what they mean, and the more it seems like they're saying that they ACTUALLY SEE THAT WAY, the more it'll turn into an argument. In the end, probably, when you sit down to well-define "seeing in 7 dimensions", that person would give a weaker definition that ends up being strictly internal, and you'll both call it a misunderstanding and carry on with your lives.
Problem is, this probably happens to them reliably with smarter/more argumentative people, and to the others they're kind of being misleading.
With some people from what I called debatable claim-groups, it feels like when out and about, they use language that is very suggestive of objectivity, cause it makes their experience sound cool, or real, or many other desirable traits, but when confronted on what evidence they have, they will retreat to "of course I'm not making any objective claim, and if you think what I'm describing is basically nothing then ok man, you're entitled to that opinion." And I think that's kinda bad, on their part.
I want to make it abundantly clear here - I'm not saying people like that are using that strategy deliberately and running some long con that earns them social capital on net, or anything calculating like that. Maybe some do. But more likely, they do have a strong inner impression that something "real" is going on, and are reluctant to admit "it's merely subjective" even to themselves, cause that might hurt their sense of identity, or any of the other innocent usual suspects for irrationality.
To illustrate this with a story - I sort of have synesthesia, smell to vision. Ever since I can remember, I get these split visions of colors, like spots on my eyes, when I smell things. For my entire life I wasn't sure how "real" it is, since the experience is very fleeting and it's not like I'm seeing a 3-dimensional illusion taking up space in the room I'm in, or something. Most of my life I didn't find it very exciting (only for a brief period when I started learning neuroscience); I never even shared it on a date, even though that's a stellar ice breaker with the sort of nerdy girls I tend to like (it's not that I'm so humble and above such tactics, I just didn't figure it would count as cool).
Anyway, during my degree I had to participate in some experiments (as a subject) to get credit on some psych/neuro classes, and had a bit of a hard time finding any for left handed people, cause, you know, we're sinister like that.
I eventually found one testing for synesthesia and then having the subject, if they do seem to have it, answer some questions on a computer test.
I talked to the guy who ran it about a year later, and he told me that I wouldn't believe the amount of people coming in sure that they had a measurable phenomenon, but that testing showed their reports (mostly colors seen when hearing sounds, or stimulating other senses) weren't coherent at all, certainly not to the point of statistical significance, on any relevant axis.
We both also agreed that they probably weren't lying, first because they really have nothing to gain coming to an anonymous experiment with false claims, and second because normal people don't generally lie about big things like that very often (at least to my experience).
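For context, the standard screen for genuine synesthesia in experiments like this is test-retest consistency: present the same stimuli in separate sessions and measure how far apart the reported colors are. Here is a toy version of that check, with entirely invented numbers (the stimuli, colors, and functions are illustrative only):

```python
# Test-retest consistency: present each smell twice (sessions apart) and
# measure how far apart the reported RGB colors are. Genuine synesthetes
# typically give very similar colors; confabulated reports drift.
def distance(c1, c2):
    # Euclidean distance between two RGB triples
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def mean_drift(session1, session2):
    dists = [distance(session1[s], session2[s]) for s in session1]
    return sum(dists) / len(dists)

first_session = {"coffee": (120, 60, 20), "mint": (40, 200, 120)}
retest_stable = {"coffee": (118, 62, 21), "mint": (42, 198, 119)}
retest_random = {"coffee": (200, 10, 90), "mint": (130, 80, 240)}

print(mean_drift(first_session, retest_stable))  # small drift: coherent
print(mean_drift(first_session, retest_random))  # large drift: incoherent
```

A real study would compare each subject's drift against a matched control group rather than eyeballing two numbers, but this is the shape of the "coherence" the experimenter was talking about.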
So what was going on here? Well, I would reason that a big chunk of those people started out with some random phenomenon that could maybe seem like synesthesia if you squint at it, then read up about it online, or were exposed to it in a context where it sounded cool, and decided they might have it. I believe, much like Scott's views on placebo, that when dealing with high-noise signals, with a lot of degrees of freedom, the brain has some leeway as to what conscious experience ends up manifesting, and this somewhat depends on priors. So I believe (not with high confidence, mind you) that if you keep leaning into the narrative of "I see colors depending on what note I hear being played", you can end up in a cognitive equilibrium where it really does feel like that, even strongly.
Same goes for having a "module of a different person inside of you, so distinct it's more interesting than a normal person thinking 'what would X do?'" (I'm alluding to Scott's description of what the DID people in question say when asked about it). I'm sure it's very possible to feel that way, especially in an environment where other people feel it too and think it's cool, but the question is, if you sit down to define how exactly this module is different from "what would X do?", do you end up with something falsifiable or not? If you do, then test it, and we're done. If you don't... well, again, it seems to me that making tiktok videos of different personalities of yourself conversing with one another, or just the sense I got from reading these people's descriptions of their experiences (I didn't even know that was a thing before the post, btw)... it's all pretty misleading. Like, it's pretty clear that without A LOT of context, the median listener would be convinced those people mean something very distinct and falsifiable (even if the listener can't put it in those exact words). I get the sense that if [the average way one of these people talks about the phenomenon] was the pitch for a paid course that helps you develop such a personality, some people would end up suing the advertisers, and the judge/jury would agree that the customers were being misled.
On more of a side note: I also think Scott's general approach is a bit uncharitable towards anti-spoonie claims (though I myself am not that far on that axis). The talks about this really reminded me of the deliberations on the term "lazy" in the old "the whole city is the center" SSC post. I think it's not unreasonable to say that when feeling kind of under the weather, or even having some pains, or a mildly upset stomach, or whatever, you can choose to lean into it, feel a lot of it, or try to shake it off (usually by going on with your day and doing other things). If the discomfort is not intense enough that you can't really function, I think most people would agree that there's a correlation between the decision to not shake it off, and how bad you end up feeling that day. If you're the kind of person that really lets themselves lean into it, and also has some natural tendency towards some particular discomfort, you can end up, using the same logic as I detailed above, in an equilibrium where this discomfort is pretty chronic and is a big deal. If we assume for simplicity's sake that there's a trait such as "self-suggestibility", and that trait is distributed normally across the population (again, I'm aware this is a gross simplification), it seems very reasonable that if you take the tail end of really self-suggestible people, and intersect it with people who for different reasons have slight medical discomforts, you get a lot of spoonies.
Do such people take up a significant chunk of those who call themselves spoonies? I have no idea, but I think it's worth considering. And if they do, is that in any way their fault? Should we as a society wag our fingers at it? Well, maybe a bit. First of all, it would be effective to maybe give them different treatments (like maybe try hallucinogens to relax their priors). Second of all, perhaps once you're stuck in a bad equilibrium it's no longer a matter of choice, and blaming you is pretty pointless. But if you are the kind of person that systematically makes choices that lead to lesser functionality, and ends up hurting yourself or people around you thanks to it (e.g. cause your partner has to take care of you, or cause your colleague has to frequently cover for you at work), that's a pretty good steelman for the word "spoiled". Again, I myself am not that enthused by this claim, but I think it's worth thinking about.
We do know that *memory* can be wrong. So regardless of whether one can be wrong about what one is experiencing now, one can certainly be wrong about what one was experiencing five minutes ago, or even thirty seconds ago. (Once we get into durations shorter than the amount of time it takes the brain to do things like give the correct answer on a Stroop test - in which you're given words like "green" written in non-green letters and have to say which color the letters are - then the distinction between "now" and "memory" stops working.)
I think someone can *remember* their mental state incorrectly.
Once, after a trip, I had a clear visual memory of taking my passport out of my bag and putting it in a drawer, but then when I looked it wasn't there. I panicked for a minute before I found it in the bag. I realized I must have just *thought* about taking it out, pictured it in my head, and for whatever reason this was encoded as a "real" memory.
I don't think I ever had the subjective experience "I am currently taking my passport out of my bag." I assume that at the time, my subjective experience was "I am imagining taking my passport out of my bag."
Perhaps similarly, I remember an incident of randomly "astral projecting" when I was a child. I remember feeling somehow separate from my body, and have a visual memory of seeing myself from above.
However, I'm skeptical that my subjective experience at the moment was of *literally* seeing myself, as opposed to *feeling* outside myself and *imagining* what that would look like. In contrast, around the same age my vision briefly got stuck seeing triple. In that case I knew something was wrong and told my parents. I think if I had "really" seen my body from the outside in the same sense that I "really" saw triples, I would have acted like it was a bigger deal.
> "My favorite example is time perception. You can meditate or take drugs in ways that make you think that your clock speed has gone up and your subjective experience of your subjective experience of time is slowed down."
FWIW: I get this **a lot** in my ketamine infusion therapy sessions.
ETA:
> "(though this study suggests a completely different thing is going on; people have normal speed, but retroactively remember things as lasting longer)"
I dunno. When I am having said sessions, the experience is very interesting, internally. There's part of my brain that is hallucinating like a wombat out of Hel, and there's also still a little "me" in there (is that the "Ego"?) that sees what the hallucinations are and is able to comment on them, and I am definitely having perfectly coherent (if odd) thoughts in English sentences. And that person is the one who goes, "Man, I swear it feels like time is *actually* somehow moving at a different, far slower pace, even though I know that is physically impossible".
I've even run experiments (my ket doc is awesomely open minded) where I had a timer running on my phone, and we kept the infusion device and IV drip set up so that I could see them (ADD OCD ex-EMT, among other things), and even when I look at the time and note that yes, time is in fact passing at the normal rate, I can close my eyes again and go back into that molasses-flow.
Regarding "see[ing] seven-dimensional objects on DMT" I would absolutely believe that he felt that way. I've thought a lot of various interesting things about the way the universe should be able to be molded plastically by my mind when I'm on ket, so... even though that little "me" in there that's forming full coherent English sentences exists, he's not always thinking too clearly, either. ;)
I have definitely on many occasions wished that C'punk 2020 had turned out to be real so I could show my friends these amazing things I was seeing, though. I mean, among all the *other* reasons I have for wishing for that. Alas, no full skeleton replacement for me. :'(
ETA, again: And then I get down to the bottom and you're talking about a "homunculus fallacy" and now I don't know **what's** goin' on in my noggin.
Don't children have to be taught to recognize when they are hungry or sleepy? It isn't always obvious what one is feeling. There are all sorts of sensations that one has to learn to recognize as part of growing up. It isn't just the basics, either. It can be things like jealousy, anger or trauma. There's a whole branch of psychiatry which tries to help people understand what they are actually feeling. You can't necessarily argue with people's sensations. They are feeling them. You can argue with how they are interpreting them.
I've several times had the experience of being entirely mistaken about which external label matches my internal experience. I remember an aha moment of "so this is what anger feels like!" I'm still not totally sure if I experience sexual attraction or not. So I could easily imagine someone storming off when I mention their dad AND internally experiencing something I would call anger AND honestly saying “I’m not still angry about my father” because they map "anger" onto a different internal experience than I do. Definitively not saying this is the only explanation for the example -- just that translating internal experiences through words adds an extra layer of complication.
I'm quite sure not many people will come all the way back here, but I would like to report that, as a meditator in the process of deepening their practice, I have also attained some of these deeply pleasant unfalsifiable states, which, according to some commenters, I have not experienced. Thereby I would also like to report that these deeply pleasant experiences, which I apparently have not had but of which I have a solid recollection, are likely attainable for almost anyone who is simply willing to practice and able to do so well (perhaps under proper instruction).
As for why meditators wouldn't, then, spend their entire lives in jhanas - something which meditation teachers sometimes do mention being a risk - I recommend both John Culadasa Yates' The Mind Illuminated and Daniel Ingram's Mastering the Core Teachings of the Buddha. Both of these go into detail on why jhanas are not, and often should not be, an end in and of themselves.
Well put. I meditate not so often and by instinct, and I don’t think I’ve hit jhanas (unless feeling particularly joyful and smiling widely during insight meditation counts), but I have no difficulty believing they’re real, given the surprising and wonderful experiences I’ve already had in meditation. I’m surprised at how unbelievable people seem to find them.
On the “leaning into pain lessens the suffering of it” point, I did accidentally discover it outside of meditation and also found that applying it to mild anxiety has the same effect, really any discomfort. I think it’s something known amongst psychologists, because when I told my therapist about this weird way I reduced anxiousness, she knew exactly what I was talking about.
What is this "objectively girly"/"boyish(?)" spectrum about, exactly?
(Asking for a friend.)
This is not helpful in clarifying what you meant XD
Mostly that I keep getting correlated with INTP on Myers-Briggs, who are introverts (and I used to be very shy, which seems to be traditionally a "girly" characteristic?), and also (in)famously are very bad at analyzing/expressing their feelings...
Wittgenstein is mandatory reading here.
If the quality of your experience can't ground meaning for you, then there's even less hope that a "language community" can.
This seems like one of those philosophical questions where it's probably possible to conjure up a clever counter-example if you think on it hard enough, but in general the proposition tends to hold. And in these kinds of cases, I guess my question is: how does finding a counter-example meaningfully change our understanding of the human experience?
Well said
Raymond Smullyan's dialogue "An Epistemological Nightmare" is a very well-done exploration of this question. It features an "experimental epistemologist" who uses a brain-reading machine to contradict a patient who claims that a book seems red to him. According to the epistemologist, the machine can tell, objectively, that the book doesn't seem red to the patient. It spirals hilariously outward from there.
Wow that was a fascinating read. Thanks for pointing it out.
really fascinating read. Sent my head for a spin for sure.
Thank you, that was very fun and fascinating read.
Link: https://www.mit.edu/people/dpolicar/writing/prose/text/epistemologicalNightmare.html
Great, thx.
That’s a great story! Very mid-century analytic philosophy. It’s clearly engaging with the Ryle/Sellars discussion on “the myth of the given”, whether there is anything people can be infallible about. It has a good chunk of Smullyan’s own primary interests in logic. But it makes the mid-century move of assuming that epistemology and ethics are just totally separate things, and that there is no real “should” of belief.
I think the story is wrong. At least sometimes. For example: "The machine never claimed to be untrustworthy, it only claimed that the epistemologist would be better off not trusting it. And the machine was right." The machine didn't claim that the epistemologist would be better off not trusting it. The machine claimed that the epistemologist *thinks* he could save himself from insanity if he stopped trusting the machine. The machine doesn't think; it just shows psychological states.
I mean, it's kind of important, because the epistemologist was mistaken to believe that there was a paradox. It would go away if he assumed that not only the machine but he himself could be wrong. He thinks that it would be best not to trust his machine, but that doesn't mean the machine is untrustworthy; it doesn't even mean that it would be best not to trust it (though it probably would be). It means exactly this: "he thinks that it would be best to distrust the machine".
Logic can be a bitch sometimes. When applied to psychological states, at least. When I was in love for the first time, I constantly thought that my head was going to explode. "It doesn't compute". Yeah... I managed to reduce the problem to a circular reasoning loop of three statements, and decided that the only way out was to stop pursuing my love. It does seem sad now. I wish I could travel back in time and try my current mind on the same problem. Without memory of the first try, of course, or it would be a different problem. I believe I'd crack it if I got a chance.
I love Raymond Smullyan. He is one of my favorite people - I just love the way his mind worked. His essays are insightful and often funny, and if you like mathematical and logical puzzles, his writings can keep you busy for years. One of my favorite pieces of his is this musing on free will: https://www.mit.edu/people/dpolicar/writing/prose/text/godTaoist.html
When schizophrenics describe thoughts being inserted into their minds are they lying or reporting their internal state faithfully? We label those as delusions (lying) since it can't be true. When someone genuinely believes in an internal state which is not believable, are they temporarily suffering schizophrenia? There seems to be no firm ground to stand on, in this discussion. Would diagnostic neurological data (sometime in the future) help disentangle this confusion? Clearly not according to this! Thanks for the cool share.
This feels related to the claim I once heard that “polar bears aren’t actually white, their fur is clear, and it just happens to appear white”.
Guess what? If something appears white, it’s white. If your clear fur happens to reflect photons across a balanced spectrum of wavelengths, then your fur is white.
I think the point is that it doesn't reflect photons, it scatters them.
And the fur only scatters light in this way when lots of transparent hairs are near each other, scattering the light repeatedly between themselves. If you looked at a single hair, it would look transparent.
Then I believe the fur is white, a single hair is transparent and the incorrect implicit assumption is that "an object which is composed of multiple smaller, identical (to each other) objects is the same color as its component objects".
But Robert said the bear's fur is clear, which is not the same as transparent.
Transparent describes an object which light passes through unaltered. Clear, on the other hand, describes something lacking color. Something lacking color, e.g. water, can refract and reflect light, thus changing the appearance of the object.
So the fur isn’t white (exactly) but the polar bear is.
We perceive it as white.
If you held a yellow book (reflecting yellow light), but I illuminated the book with a green light, you'd perceive it as a green book.
Is this a yellow book, or a green book?
Yellow under normal full-spectrum light, so yellow. It appears green under green light, but it's still yellow.
The polar bear would also be green under green light, so not sure where that gets us.
Clear case of needing to sharpen the terms we use. Does a red ball (one we all agree is red under normal lighting conditions) become a black ball in the dark, or is it a red ball that appears black? It quickly becomes obvious that both ways of talking have a point, and now you can talk more clearly by knowing which one you mean (apparent colour right now, or color under typical lighting conditions).
Oh dear, what color is my kitchen window? It reflects light almost equally across the visible spectrum.
Is this the experiment: to see if, with Scott providing the model (i.e. the Epistemologist), we all become miniature tyrant Epistemologists? [In the tone of 'The First Rule of Epistemology Club is We Don't Talk About Epistemology Club.']
Another way of arguing with the "polar bears aren't white" point: if you shave a polar bear, their skin is black. If you put something transparent over something black, it should look black. Given that polar bears don't look black, their fur is not transparent, polar bears look like the colour of their fur.
That's kind of true, in the way that clouds are white. The water droplets themselves are clear, but the way they scatter light makes them appear white.
It's not the "standard" wavelength absorbing/emitting property, but the way humans determine (guesstimate?) color still shows it as being that color.
That said, that's more of a semantics issue, and there's plenty of that.
Even with just color there's things like how white/black/pink aren't colors or why the sky is/isn't blue/sun is/isn't yellow/white.
It does NOT seem to be related to the difference between one's conscious experience and what is likely to be the reality of the situation.
In these cases, they are the clashing of one's conscious thought and what is happening to an outside observer. It's obvious that one's opinion does not need to match with that of the objective truth, but here even with things of purely subjective nature, it still might not "match".
However do we want to consider these "false beliefs"/"lies" or whatever?
I feel it would be easiest if we choose to keep them the same as non-subjective items but just not use the connotation of lying. Like you can hold the thought/idea that the moon is made of cheese without "lying" even if it doesn't match the outside objective view.
"Experience" can be complex and mean a lot of things. But if we're a bit more precise and say "qualia," then Scott's position is absolutely correct. You can't be wrong about your own qualia, because qualia just are the immediate experiences. You can lie about them, you can subsequently forget and misreport them in all kinds of complicated ways, but you can't be wrong about them in the moment.
So if someone says they see a 7D object on DMT, do you agree this must be true, or do you think that dimensionality of objects is not a quale? What if someone says they saw an impossible color (ie as different from any color we know as green is from red) on DMT?
First, I'd like to factor out the issue of third-person reporting by assuming that I myself have taken the DMT. That way, concerns about language and the possible distortion of reporting can be put aside.
If I perceived a 7D object or a novel color, then, yes, it is true that I did. This seems perfectly coherent and possible, at least in principle. We know that 7D spaces are mathematically well-formed, so there's no contradiction or incoherence involved. It seems pretty clear, as a general matter, that our neurophysiology can enter into unusual states in which perception is greatly altered or extended. That's a common experience on psychedelics (unlike the more extreme examples here). So, yes.
Is there room for incorrect attribution? We can say "I had an experience that was like this and this and this" but to characterize it as "perceiving a 7D object" is a kind of assessment/appraisal that we may not be equipped to make.
Like the woman claiming enlightenment, the person on DMT is an expert of their own true experience to the extent they stick to their experience. At the point that they characterize it as "enlightenment" or "jhana" or "7D object perception" then it seems like we need other tools to assess the truth of that claim.
I see that as part of the language/description problem that I'm trying to factor out. It's really not interesting to quibble about whether the words someone uses to describe their qualia are the right words. It doesn't get to the core question of whether someone can be wrong about their experience. (Wrong because they couldn't find quite the right words isn't the relevant kind of wrong.)
Take Scott's color example. I might have no idea what to call the novel color. I might just say "some novel color for which I have no name." That's perfectly fine.
If I say something as precise as "a 7D object," that would only be because I could count the dimensions, say, by repeatedly turning a rod at right angles. If I do that carefully with my DMT brain, and keep coming up with 7, then it's a 7D object! (Mind you, I find this example quite implausible, but that's irrelevant to Scott's question.)
What happens if you had a memory recorder? You watch your memory again the next day when sober. Then you notice that there were only 6 dimensions, but they moved around a lot. Is that another sense in which you can be wrong about your own experiences?
When you start to factor things, it seems like we can be wrong in a lot of different ways.
OK, then you really saw a 6D object, which works perfectly well for what Scott is trying to get at. It isn't that 7 is a magic number that you have to get right. It's "did I really see that weird-ass thing?" Answer: yes.
Oh that's interesting about trying to factor the language/description part out. I think I'm having trouble separating the language/description from the knowing/experiencing because we humans seem to know/experience a lot by telling ourselves stories about our experience. The story-telling (to oneself or others) is very often baked into and reinforcing of the experience.
The example "I see a color I haven't seen before" and it not mattering what we call the color makes sense to me. The story part of this seems to be that it's a color "that I haven't seen before" and assessing the truth of that statement will depend on the same kinds of ambiguities of memory, perception, and attribution that the other examples do -- is this color "new"? is this a "jhana" I'm experiencing? Am I "angry"? I don't really know how we get free of language/description even if we dispense with the specific word mattering -- it seems to me we're still dealing with the uncertainties of attribution, which may be mistaken in all kinds of ways.
If we ask the person who took the DMT a week later what they think about the novel color they saw, might that attribution change over time? "I'm not sure anymore that it was really a new color or just an idea my head cooked up while I was high."
The Buddhist idea of "no self" is coming up for me here, the idea being that we do experience a somewhat solid sense of self in everyday life, but on further examination, that self turns out to be much more provisional than we often credit. It's not that the self has no reality whatsoever and that the self we experience is a total lie -- but also it's not as "really real" as we often experience it to be. The stories we tell ourselves about our experience -- as we are experiencing it and later -- seem likewise provisional in all kinds of ways. It's good enough most of the time, but once we start to ask "is this thing I experienced really true?" I think the whole structure starts to break down.
We do have a variety of examples of this sort of thing, either people writing down their miraculous insights, or people making claims as to, eg, where things are (because they can see through walls or back in time, or whatever).
The actual validation after the fact is uniformly disappointing. The scribbled words are meaningless, the lost key remains lost...
"A strong smell of turpentine prevails throughout."
https://quoteinvestigator.com/2012/03/31/turpentine-prevails/
I think that's exactly it.
If I were raised a certain way and were, say, a closeted gay, when I see a man who excites me, I might claim the experience as something like "a personal encounter with satan".
If I were raised a different way, still gay (but now happily married and faithful), and see a man who excites me, I might claim the experience as something like "evidence for the non-unitary nature of the self".
Same experience, very different framings (and everything else) around the experience.
Even in that case, might there not be two very different subjective experiences? In the initial moment perhaps things are the same, but the subsequent factors might very well also have physiological impacts. The sick-to-your-stomach feeling of shame feels different than coy delight. So much so that I'm not sure most people could actually separate the instigating experiences from the second-order experiences, let alone the interpretations placed upon them after the fact.
I guess I'll throw in my too-little-sleep experiences of seeing subtitles under people in real life. Never could get the text to outpace their speaking speed, though.
This underlines the point that correspondence to the external world is orthogonal to the question at hand.
Were they accurate renditions of the things the people were saying? I've never heard of this before, but I wonder if it could be a new weird type of synaesthesia.
Yeah, they were accurate. I don't remember if they were being typed as people spoke, or if they appeared as a finished block of text like in movies, but I could only read the parts of the block they'd said already. Definitely some kind of sleep deprivation effect, it's happened a couple of times.
This sounds like ticker tape synesthesia to me, which https://www.frontiersin.org/articles/10.3389/fpsyg.2013.00776/full estimates at a 7% prevalence.
You PERCEIVED a 7D object?
Or you believe you perceived a 7D object?
Are you capable of actually answering questions about 7D objects? For example, if I asked you about optimal sphere packing in 7D, or whether a knot could be untied, or whether one 6D shape could be rotated to a second 6D shape, would you be able to answer?
My guess is not...
C'mon. Someone who's really bad at geometry, or just dumb, could see triangles and not be able to answer theorems about them. This has nothing to do with whether they really saw triangles.
I call BS. The meaning of "seeing" 7D is "seeing *facts* about 7D".
Otherwise it's meaningless junkie talk.
Do you say the same thing about triangles, seen by the person bad at geometry?
Remember, the 7D doesn't really have anything to do with the core question Scott is discussing. He just used that as a weird example.
I'm very sceptical of the claim that our neurophysiology can enter a state capable of visualising, let alone perceiving, 7D objects. It sounds like a very fantastical claim to accept without solid evidence. Is it not much more likely that you are perceiving some exotic combination of lines or whatever and are interpreting it as a 7D object - wrongly?
It's not worth over-indexing on the 7D example. I don't think the details of this example get at the real point. Even if we could do experiments that strongly suggested that someone really could visualize a 7D object on DMT, one could just turn around and say, "OK, well how about a 100D object?"
Do people in fact sometimes see 7D objects on DMT? I have no clue, and for the purpose of what Scott was discussing, it really does not matter.
The novel color example is much more plausible and illustrates the underlying point well.
> We know that 7D spaces are mathematically well-formed, so there's no contradiction or incoherence involved.
So what if someone told you they saw a square triangle? No fancy semantic or non-euclidean trickery, just a square triangle. Is *that* report wrong? (And if so, why would anything seen during the same trip, generated by the same process, be any more trustworthy?)
Yeah, that was my first reaction as well. Why would the "mathematically well-formed" property of a shape have any impact on whether or not your drug-fueled brain is perceiving that shape?
As I've said several times now, I really don't like the 7D example because it leads people down rabbit holes that aren't actually relevant to the underlying question.
The novel color example is much better and less distracting.
But to answer your question directly, the reason I said "mathematically well-formed" is simply to note that 7D objects really are a thing, as opposed to "square triangles," which is a nonsense phrase literally referring to nothing. I'm personally highly skeptical that DMT would actually let one see a 7D object but, once again, that's irrelevant to the underlying question.
If someone reported seeing a "square triangle," I would certainly conclude that the report was false, because we know that there can be no such object as a "square triangle" that one could possibly see.
But the question here has nothing to do with reports! That's why I began by saying "I'd like to factor out the issue of third-person reporting [so that] concerns about language and the possible distortion of reporting can be put aside."
If someone is bad at description, or giving bad reports for any other reason, that's simply not relevant to the question at hand.
Assume you took the DMT yourself and had weird experience X. The question is: did you really have experience X? Forget what you might say about it afterwards; that doesn't matter. Just: did you really have experience X?
What about someone describing optical illusions, like a devil's fork or an endless stair? Those are mathematically impossible, but my experience in seeing them is very real and doesn't even involve any mind-altering substances.
What about it? There's no problem here. You see what you see, as you see it. Whether it corresponds to something in the world isn't part of the question. (It's obvious that an experience can correspond to nothing in the world. The mention of DMT was supposed to prompt that understanding.)
I've spent some time thinking about this and now I can visualise a square triangle.
Try this: Visualise a wireframe square in three dimensions. Keep the bottom edge fixed, and rotate the top edge ninety degrees clockwise around the Y axis, so you're looking down along that edge and it turns into a point.* Now the shape appears to be an upward-pointing triangle. Spin the whole shape 45 degrees counterclockwise around the Y axis, and it appears to be a square (well, rectangle). Keep spinning, it becomes a downward pointing triangle, then a sideways bowtie shape, then back to upwards pointing triangle.
Now simultaneously perceive the shape from the original angle and 45 degree rotation, voila, a square triangle.
* Sunnyboy Iceblock shape for those who grew up in Australia
NO. TRICKERY.
OK a bit late but I have a better example.
A "Set Square" is a square triangle.
I would argue that perceiving a 7D object is an interpretation of some experience. It is a story you told yourself about some collection of visual impressions. You did not perceive a 7D object, because such a thing does not exist in 3D space.
In the case of a 7D object, that could well be. But that simply speaks to why the 7D object is not a good example for getting at the underlying question, which isn't about the stories one tells oneself in language, but about the experience itself.
The novel color is a better example. I see no reason to suppose that one could not see a novel color when on DMT.
Some people have based whole spiritual systems on such a color: https://www.sacred-texts.com/eso/chaos/octarine.txt
EDIT: Here is an interesting philosophical article about novel colors: https://www.jstor.org/stable/4320359 (accessible via scihub)
Related: In Oliver Sacks' brilliant, beautiful essay 'Altered States', he claims to have seen a "magical color"—pure indigo—and the man was such a good writer I believe him. (https://www.newyorker.com/magazine/2012/08/27/altered-states )
I miss Ollie. He was a gifted storyteller. I was first introduced to him through Radiolab.
> But on the weekends I often experimented with drugs. I recall vividly one episode in which a magical color appeared to me. I had been taught, as a child, that there were seven colors in the spectrum, including indigo. (Newton had chosen these, somewhat arbitrarily, by analogy with the seven notes of the musical scale.) But few people agree on what “indigo” is.

> I had long wanted to see “true” indigo, and thought that drugs might be the way to do this. So one sunny Saturday in 1964 I developed a pharmacologic launchpad consisting of a base of amphetamine (for general arousal), LSD (for hallucinogenic intensity), and a touch of cannabis (for a little added delirium). About twenty minutes after taking this, I faced a white wall and exclaimed, “I want to see indigo now—now!”

> And then, as if thrown by a giant paintbrush, there appeared a huge, trembling, pear-shaped blob of the purest indigo. Luminous, numinous, it filled me with rapture: it was the color of heaven, the color, I thought, that Giotto spent a lifetime trying to get but never achieved—never achieved, perhaps, because the color of heaven is not to be seen on earth. But it existed once, I thought—it was the color of the Paleozoic sea, the color the ocean used to be. I leaned toward it in a sort of ecstasy. And then it suddenly disappeared, leaving me with an overwhelming sense of loss and sadness that it had been snatched away. But I consoled myself: yes, indigo exists, and it can be conjured up in the brain.

> For months afterward, I searched for indigo. I turned over little stones and rocks near my house. I looked at specimens of azurite in the natural-history museum—but even that was infinitely far from the color I had seen. And then, in 1965, when I had moved to New York, I went to a concert at the Metropolitan Museum of Art. In the first half, a Monteverdi piece was performed, and I was transported. I had taken no drugs, but I felt a glorious river of music, hundreds of years long, flowing from Monteverdi’s mind into my own. In this ecstatic mood, I wandered out during the intermission and looked at the objects on display in the Egyptian galleries—lapis-lazuli amulets, jewelry, and so forth—and I was enchanted to see glints of indigo. I thought, Thank God, it really exists!

> During the second half of the concert, I got a bit bored and restless, but I consoled myself, knowing that I could go out and take a “sip” of indigo afterward. It would be there, waiting for me. But, when I went out to look at the gallery after the concert was finished, I could see only blue and purple and mauve and puce—no indigo. That was forty-seven years ago, and I have never seen indigo again.
It's hard to say what he meant exactly by "indigo":
1.) Could have just been a regular color, the magicalness of which was enhanced by his altered state (see other reports of "heightened" senses, "vibrant" colors...)
and/or :
2.) a color with more lightness than physically possible
3.) a color with more chroma than physically possible
Note that colors are very complex, and changes to lightness, chroma, hue are often nonlinear, so they tend to be, and historically have been confused :
www.handprint.com/HP/WCL/color6.html
and the words we tend to use for them often confuse matters; you have several examples in this very discussion of how "white, black, pink" are "not colors", even though white and black are definitely "colors as visual qualia", and pink does correspond to a hue.
4.) finally, it *might* be possible (but IMHO unlikely) that he saw a completely new (to him) hue
I'm not sure how it's even possible to imagine something like 4.) if we haven't experienced it, like if we were colorblind - and then even the lack of color qualia, like in "red/green" colorblindness, might be hard to imagine *correctly*, since it's probably not just the absence of "red" or "green" qualia, but the difficulty of discriminating between them?
http://www.handprint.com/HP/WCL/color1.html#dichromat
Always feel vaguely embarrassed to admit to it because I’m a weird guy already but I had a strange/religious experience where I saw what I perceived to be God. Not trying to convert anyone but sharing because it is of interest to poke at.
I’ll strip out the personal stuff and just relate how it felt phenomenologically.
I was having an early morning jog and suddenly my dead grandfather was walking next to me. I was twenty-six years old. Somehow my grandfather walking and my jogging didn’t cause us to become separated. I didn’t have any emotional reaction to him being there. As soon as I noticed that we were going at different speeds but not moving apart, it felt like space sort of unzippered or unfolded or what have you. Like when you are unrolling a carpet and you can see it spin around an axis, except it kept shifting 90 degrees. I don’t know exactly how many times it did that, but I do sometimes wonder if I experienced that because it seems like the sort of thing you should experience when you’re having that kind of moment, and my mind just sort of filled in the details.
There’s more to it but however interesting it is to me personally it’s pretty standard to what I’ve read after this happened (I also sometimes wonder if things I’ve seen in popular media played a part in my experience). Seems very standard but I sort of felt like I was in a “world” outside time and space but it was more like I was in the “place” that ideas or forms or qualia come from and that I was there as the idea of myself.
I’d been having a very hard time at that time in my life and felt immediately better, although I still felt quite bad and immediately found a therapist to start seeing because it seemed like the thing to do. Never happened again.
So I guess I experienced what I think n-dimensional space feels like. But it’s not like I could immediately solve any higher dimensional geometry problems because I felt like I experienced something once. I guess your brain's world model can just divorce itself from reality sometimes.
Don't feel embarrassed about it! The standard model of reality is about keeping everything neat and tidy in its own little box.
Sometimes things don't, can't, or shouldn't fit. God, a spirit, hallucination, or just you breaking yourself out of a bad cycle. Whatever it was, keep moving forward and appreciate the bit of magical headroom.
It did cause me to question a lot of what I believed, as I was an atheist/empiricist prior to the experience, and I guess now I’d maybe describe myself as a panpsychist, but even that isn’t quite correct. I have had dream states where things felt unreal, but that was the first time where the reality dial went above the way being awake feels. Sounds like privileging a hallucination, and I do question it quite a lot, but I have trouble honestly dismissing it.
[epistemic status: Afternoons spent pondering higher dimensional spaces instead of doing my discrete math homework]
True and false do not apply. The standard for whether you see something is:
Does it minimize prediction error? Can you predict what comes next?
Often that's just: Will it prevent you from bumping into things?
Human vision creates 1-d objects/lines and 2-d mental objects/surfaces from processing ~100 million rods+cones. But each individual receptor is 0-d. 1d perception is stacked 0d. 2d is stacked 1d. 3d vision is a bonkers way of talking about depth perception.
Depth perception is just heuristics applied to your 2d surfaces and their relations generating useful metadata (for not bumping into things).
If you can visualize an onion, mentally spin it around, pan a mental camera, flip it, you are still only animating a set of 2d frames. The picture "has" depth, but that depth perception is just metadata.
A truer form of 3d visualization of an onion, would be being able to see all the layers at once. How each fibre connects to the next in 3 perpendicular planes. If you cut onions a lot and looked deeply within, and practice generating the associated memories on demand extremely quickly and accurately, you may claim that you can visualize an onion in 3d.
Higher than 0d vision is all about filling-in-the-blanks and using heuristics to minimize prediction error. This applies to dimensions beyond 2d as well.
It gets very difficult to extend "vision" to more than 3d.
Because each dimension multiplies the maximum potential info of a mental object. At some point the objects become so complex, that when we grasp the fifth sub-component of it, we've already forgotten the first four. To claim that you "see" something higher-dimensional, you would need to solve that problem.
The trick is to study something so deeply that you become extremely familiar with it. Something that's not too complicated. Start using discrete coordinate systems.
Learn how to visualize the 5-prime field in 3 dimensions. Only 5^3 = 125 points to consider.
Mentally project it into space and walk around it, make yourself see all the diagonals.
It's like a see-thru box.
Now for adding a fourth dimension:
You cannot mentally project it into 3d-space anymore. Not directly.
But you can still project any 3d-subspace. 4 subspaces with 125 points each.
And you CAN project all those subspaces into 3d space next to each other.
(visualize them as 4 different "boxes")
Do this super fast, you can say that you're seeing it in 4d, even if a line (or some other shape) passes thru 4 dimensions at once. Use a bigger prime-field and you can simulate something closer to continuous lines and circles. If you mastered "seeing" 4 dimensions, you can extend to 5d in the same vein. Visualize a 5d space as 5 different 4d spaces. So 5 rows of 4 boxes.
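The counting in this scheme can be checked with a small sketch (a toy illustration of the idea, not anything from the thread): take all 5^4 points of the discrete 4d space and project them onto every choice of 3 coordinate axes, which yields the 4 "boxes" of 125 points each.

```python
from itertools import combinations, product

P, DIM = 5, 4  # 5-prime field, four dimensions

# All P**DIM = 625 points of the discrete 4d space.
points = list(product(range(P), repeat=DIM))

def project(pts, axes):
    """Keep only the chosen coordinate axes (an orthogonal projection)."""
    return {tuple(pt[a] for a in axes) for pt in pts}

# One "box" per way of choosing 3 of the 4 axes.
boxes = {axes: project(points, axes) for axes in combinations(range(DIM), 3)}

print(len(points))                       # 625 points in total
print(len(boxes))                        # 4 boxes
print({len(b) for b in boxes.values()})  # each box holds 5**3 = 125 points
```

Bumping `DIM` to 5 gives the 5-rows-of-4-boxes picture: `combinations(range(5), 4)` produces the 5 different 4d subspaces, each of which decomposes into its own 4 boxes the same way.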
It's a headache, but with enough coffee and sheer bloody mindedness... I'll let someone else volunteer. The standard for "seeing" is the same as for normal vision. Does it minimize prediction error? If you can quickly answer questions/navigate that space, then we may agree that you "see" it. If you cannot fluidly play TicTacToe (or chess) in it, you don't really see the space.
Now Emilsson said that he could see something complicated..... He helpfully provided diagrams. Seems like he can usefully apply the concept of dimensionality to an object that he's extremely familiar with. Eh.... sure, why not. Many objects can be usefully compressed/abstracted into orthogonal dimensions, allowing for reasonably compressive abstraction. Does not sound too crazy.
Much like the "JOY=LOVE*JUSTICE" person experiences the emotion of profundity in a sort of inappropriate, contentless, drug-induced way... the 7d guy is experiencing "galaxy-brain intellectual euphoria" from drugs, probably because they induce trippy visuals so he's gonna think about that stuff, but no, he didn't have a 7d quale.
Disagree, but I think we're going to quibble about what 'wrong' means.
I don't think you can say someone is 'wrong' about something unless they have formed a specific conscious belief about that thing.
The process of forming such a belief is very different from the process of experiencing a qualia, and takes much longer. There is plenty of room for error or intervention in that process which could cause your specific conscious belief about what you experienced to differ from what you experienced.
True, people experience the qualia they experience, by definition. But experiencing a qualia is not something that can be either 'correct' or 'wrong'; it's not a belief or a statement that can be evaluated in such a way. You need to actually form a belief before it can be evaluated for truth, and that's a whole other fallible process.
I think the thing Scott is trying to probe is just the part in the last paragraph, which you nicely summarize as "people experience the qualia they experience, by definition." I think that's exactly what many are balking at, and it's the essence of the point I'm emphasizing.
The fact that one can be wrong about *the beliefs one forms* about one's qualia, I take to be obvious, and not what the discussion is about.
I don't think anyone claims people don't experience their qualia. Wouldn't that just be disputing the definition of qualia? But for someone to discount your qualia, it needs to be reported to them. And if they don't believe your report, they can say "You're wrong about that." I think the question being debated is; can you honestly misreport your own experience, even to yourself. It definitely seems possible to me.
It's trivially obvious that people can misreport experiences, in that the mapping from experience to language is imperfect and fallible. This simply is not interesting.
But Scott makes it clear in his second paragraph, using the example of hunger, that the real issue here is not misreporting in language, but the experience itself: "if someone says they don’t feel hungry, maybe they’re telling the truth, and they don’t feel hungry. Or maybe they’re lying: saying they don’t feel hungry even though they know they really do (eg they’re fasting, and they want to impress their friends with how easy it is for them). But there isn’t some third option, where they honestly think they’re not experiencing hunger, but really they are."
"But there isn’t some third option, where they honestly think they’re not experiencing hunger, but really they are."
Sure there is. Their stomach hurts and they think it's nerves, but then they eat and realize they were just hungry. The feeling was there, but they misinterpreted it. This is pre-language, but a misinterpretation of the feeling. It doesn't mean the "feeling" was wrong somehow... I don't know what that would mean. But their conscious interpretation of the feeling was incorrect.
Scott discusses precisely this objection in the following paragraphs. TL;DR "hungry" is the name of the experience, not of the causal basis of the experience.
Indeed. Qualia are immediate experiences.... and yet many people have memories of their past where details have become altered, sometimes radically, from what factually happened. For instance many people 'remember' events from their early childhood - but it was other people who told them about the events.
In which case 'red' will be 'red' but it may prime many other associations for cognition, and some of those associations may be wrong or disproportionate.
This is all correct. Memory is complex and is far from infallible in relation to the external world.
BUT ALSO: What Scott was discussing has nothing to do with veridicality in relation to the external world. He was specifically discussing present internal states.
If you define qualia as the things people can’t be wrong about, then sure they can’t be wrong about them. But then they may well not exist! It isn’t trivial that there exists something you can’t be wrong about.
Can the discussion about the existence of qualia be avoided here? (Hopefully, yes; otherwise people should be upfront about their metaphysics...)
There are probably a bunch of predicate issues you need to get past about the nature and existence of qualia that are going to determine one’s answers to these types of questions. For instance, if I’m a behaviorist who simply defines hunger as a probability of engaging in eating behavior when presented with food, I’ll get an easy answer to that one hypothetical.
There’s no logical problem with the idea that one is in error about one’s own subjective experiences. Start with the harder case about being wrong about what one believes. The following two statements are logically compatible with each other:
1. X believes proposition p.
2. X believes that X does not believe proposition p.
This is the foundational observation of some of the literature on self-deception. One can believe something while believing that he does not believe it, because the objects of belief in statements 1 and 2 above are different kinds of things. The first relates to a proposition and the second relates to X’s mental state. And if the two statements are logically consistent with each other then there are possible worlds where both are true at the same time.
It’s all the easier for 3 and 4 to both be true:
3. X is experiencing sensation s.
4. X believes that X is not experiencing sensation s.
Unlike the first pair of statements, there isn’t even the appearance of any sort of logical contradiction there. So unless we’re going to go with synthetic a priori I think we have to admit the possibility of being wrong about one’s own qualitative experiences, whether it actually happens or not.
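In a rough doxastic-logic shorthand (my own notation, nothing standard from the thread: $B_X$ for "X believes that", $E_X$ for "X is experiencing"), the two pairs read:

```latex
% Pair 1 & 2: a proposition vs. a mental state as objects of belief
B_X(p) \land B_X\bigl(\lnot B_X(p)\bigr)

% Pair 3 & 4: a sensation vs. a belief about that sensation
E_X(s) \land B_X\bigl(\lnot E_X(s)\bigr)
```

The first conjunction only looks contradictory if you additionally assume beliefs are introspectively transparent, i.e. $B_X(p) \rightarrow B_X(B_X(p))$; in the second, the conjuncts are states of two different kinds, so no contradiction even appears to arise.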
I don't know about this. I do see both pairs [1,2] and [3,4] as being contradictory. I think this boils down to the idea of a unitary conscious entity being an illusion. From a neuroscience perspective it seems more likely to me that there are multiple agentive entities within a single human brain conspiring to create an illusion of a single entity that has beliefs.
I have been reading your blog for about 8 years now. Of all the times you've pulled the double 'the' thing, I've never caught it on the first read. I feel like Charlie Brown going to kick the football.
It always overlaps the end of a line and the beginning of the next. lol
nah, he didn't have a line break between the thes. Change the screen size you view it on, or zoom in to change the text size, and it won't always be on a line break. I miss them either way.
I missed it even while actively searching. Had to ask the browser.
That's a great one to put into an article, but my favorite illusion is probably one involving our eyes' blind spot:
http://people.whitman.edu/~herbrawt/classes/110/blindspotdemo.pdf
What about cases of Sartrean bad faith, or when people reappraise past experiences and claim that they didn't actually feel what they claimed they felt at the time? Aella made the claim that she's seen a lot of sex workers who "liked" doing the sex work when they were doing it, but then later claimed that this was some sort of false consciousness and they actually hated it, they just didn't know it. Are they "lying", or is this two separate selves making two independent appraisals?
I intuitively would go with the two separate selves hypothesis. To make up an absurd example, someone who's never seen the sun might say moonlight is incredibly bright and believe it, but upon seeing the sun they might say "nevermind, my notion of brightness wasn't well calibrated and now I don't think I did experience brightness when I was under the moon".
This example is perhaps a bad analogy, since presumably a sex worker can have known what it is like to like something before or during their time as a sex worker, so it's not simply a matter of having uninformed priors / expectations / calibration.
An alternative hypothesis is that it's a defense mechanism: you believe you are happy because you don't see a way to improve the circumstances, and so might as well not suffer your own unhappiness on top of that (is this even remotely what is going on with Stockholm Syndrome?)
Anyway, I've done this thing (ie, describe my reactions as positive and much later realize they were actually not positive), and these are some of my hypotheses for what leads to this.
It’s not clear Stockholm Syndrome is a thing.
It's easy if you think of perspectives/attitudes as (unconscious) strategies that fit the current situation.
Take a similar example, dumpster diving. Is it gross and shameful to touch garbage, or awesome to find free food and stuff? Depends on a lot of factors (need, ecological attitude, getting over the initial hurdle), that can change over time.
Or the male refractory period. Am I lying when sex goes from the most to the least important thing in 30 seconds?
I think either description of the situation could be correct, and it would be very hard for someone to know which it is.
It's possible that they're "lying" in the form of blurring together two concepts. Last night I played videogames until 1 AM and at the time I felt like I was having fun. This morning I woke up at 7 AM and hated the fact that I hadn't gone to bed until 1. If I were a little sloppy with language, "I hate the fact that I stayed up until 1 AM playing videogames" might turn into "I hated staying up until 1 AM playing videogames", even though I'm not reassessing how much fun I had.
I think that type of experience isn't uncommon. An example: You go to your office holiday party and spend a few hours talking and smiling and laughing. You are so caught up in performing having fun that you may think to yourself in the moment "this is fun", but immediately upon leaving you're thinking about how boring the conversation was and how you resent having to attend such events.
Or: how many people deny to themselves that they have some type of sexual attraction, and then later realize that they had been deceiving themselves?
The multi agent theory of mind explains this particular subset of difficulties with self consciousness.
For the final happiness example, what would be the equivalent of asking the woman to tell the listeners when she next had a thought? I'm not sure there is one...
Being "wrong about one's experiences" is an ambiguous phrase.
It's possible for someone to be wrong in the sense that they *misclassify* their perception, so they can say "I'm not hungry" even though they're experiencing hunger. That also seems to apply to the person who says that he perceives a 7 dimensional figure when under psychedelics; he's misclassifying his perception.
Note that one comment brought up qualia, but qualia can't be magically communicated. You need to decide what categories the qualia fit into, and communicate information about those categories; this decision process can be wrong.
This is basically what I was thinking. There's a whole additional messy layer of language here. When I hear one commenter say "I've experienced jhana, and it's the most intense pleasure conceivable" and another commenter say "I've experienced jhana and it's nothing special", I'm deeply skeptical that they're actually describing the same thing. Not necessarily because either is lying, or even mistaken, but perhaps because they're using the same word in different ways.
As Marvin Minsky says, the problem isn't just that our words to describe something are ambiguous, it's that thoughts themselves are ambiguous!
Sure, but we have no way to analyze the thoughts of anybody outside ourselves without communicating in language. In the absence of a brain-swapping machine, if the words to describe something are ambiguous, then we have no way to determine if someone else's thoughts are ambiguous. We also have no way to communicate whether or not our thoughts are ambiguous to others.
Did I just reverse-engineer postmodernism?
Or maybe different minds just have different pleasures?
Plenty of people claim that sex is the greatest pleasure ever and OK, whatever, to quote Woody Allen, as meaningless experiences go it's one of the best. But I've been far happier with more cerebral achievements, both constructing abstract things (a program, a book, ...) and understanding something that's taken twenty years to finally figure out.
This may be a liking vs wanting thing; the body wants sex, but the mind doesn't have to especially like it and may prefer to just get it out the way so it can get back to more mind-ly pleasures.
I don’t know, I am a pretty cerebral guy into a lot of pretty cerebral pleasures. Fucking playing video games that are just big databases where you run a car company, boring shit like that. Also some pretty awesome academic accomplishments at times, cool professional projects and achievements.
None of it is a stitch on good sex, which is only really devalued by how comparatively easy it is to have. But on an absolute scale it absolutely trounces anything else (though I haven’t experimented much with hard drugs).
I think that's almost inevitable on such matters.
If someone asks me what a labradoodle is, I can not only talk about it, but point to one the next time we see one. The labradoodle exists outside us both, and we can both see it, and agree that we are looking at the same thing.
But no-one can show their mind to another. The words that we use to talk about our inner experiences point to places that no-one but the speaker can see. Given how different people are on the outside, I see no reason to think they are any less different on the inside. Following someone else's description of how they learned the jhanas may be like trying to find one's way round Berlin with a map of Paris.
I have not experienced anything like the jhanas, despite my customary method of meditation happening to be quite similar to what someone reported as producing them very easily.
You're absolutely correct that they're not describing the same things.
I checked it out, managed to verify that yes, indeed, First Jhana is a thing that exists, and also found out that depending on precisely what you're initiating the pleasure feedback loop on, the nice feelings can behave quite differently. To give one example among many, there are more and less sexual variants of the state that you can get to show up.
The various different pleasure-feedback-loop states *do* have prominent aspects in common (nice physical body feelings, high energy levels, muscle tension, intense focus needed or else it falls apart, the happiness is the "eeeeee!!" sort and not a contentment sort). This is what justifies lumping them together and denoting them as "First Jhana", but one could just as easily say "this phrase is uselessly vague" and start inventing a whole bunch of new terminology to classify the variants of the pleasure feedback-loop states.
And this isn't even getting into intensity! Apparently there's a bunch of different depths you can go to during First Jhana, and I've only been able to figure out the easy version that's about as good as a pan of fresh oven brownies. The "I was filled with luminous joy for an hour and it was 10x better than sex" person is describing First Jhana at a very, very different intensity level than the person (me) saying "I got a few full-body chills and random giggle fits and a prominent sort of lifting helium feeling for about two minutes and walked downstairs with a sex-like afterglow", and the overall situation is much like describing both a racecar and a go-kart as "car".
And before you ask, yes, this does cause a whole lot of arguments between Buddhists who are like "no, only this particular type/intensity counts as Jhana, the rest are just fake approximations"
So, yeah, "Jhana" is a real thing that can happen, but it's a pretty vague word, and the people describing it differently are indeed having very different experiences, and it's always a good idea to clarify exactly what experience people are claiming to have pulled off. But the elements of (nice physical body feelings, high energy levels, muscle tension, intense focus, "eeeee!!!"-like happiness instead of contentment happiness) do tend to be commonalities in the experiences.
Yeah, exactly this. If we call back to the predictive processing/perceptual control model, most of us (without pretty extreme meditation) do not experience pure sensory experience. We are experiencing (an interpretation of (an interpretation of (an interpretation of (... (sensory experience)...)))), with the nesting not being recursion but non-recursive processing layers, and some of the top layers possibly being extremely sophisticated concepts usually shared with others.
In this model there are then at least two distinct ways to be perceiving something "wrongly" - first, where a higher layer in some sense overwrites the output of some lower layer in some opposite direction. For example, a fear output is clumped into some sort of more abstract "not afraid" stance, even though all other bodily fear and fight/flight mechanisms are firing - claiming you are "not experiencing fear" in this state would be both true (evaluated at the top layer reporting into your consciousness) and false (evaluated on your overall state, including the outputs of lower layers that you are not conscious of in the moment). A longer-term, ossified form of this error is often referred to as Layering in postrat meditator circles. I can believe this might result in the anecdote with the "enlightened" woman who merely refactored out the idea of thinking without refactoring out actual thinking.
And the second way, which has been mentioned by others several times already, is a failure of communication of characterisations: communicating as if your characterisation is some shared concept when it is (unbeknownst to you) not accurate to the shared understanding of it.
> It's possible for someone to be wrong in the sense that they *misclassify* their perception, so they can say "I'm not hungry" even though they're experiencing hunger.
Furthermore, people can be actively *unsure* of whether their perceptions are what they think they might be—the example of hunger reminded me of someone the other day who reported not being sure whether they were hungry or whether their GERD was acting up. They associated the same experience with both causes, and the only available labels they had for the experience also indicated information about the assumed cause. A less uncertain version of themselves could have chosen wrongly and thought or acted accordingly.
I feel like I am constantly wrong and or unsure about my own experiences, so I guess it depends on what you mean by 'wrong'. I also feel like other people in similar situations are very confident they understand their experiences, which could be true, but I am suspicious given my own experience.
When I think of being 'wrong' about an experience, I am not thinking that I am wrong in thinking I am upset when I am upset. I think I am often wrong in understanding why I am upset, and it is very easy to falsely attribute my feelings to some causal idea that I later doubt or strongly believe to have been wrong. This happens both with mood and with (maybe) simpler things like why my stomach hurts or why I have gas, etc etc. So my explanation for the woman example is that she felt good in a vague meditation related way, and she interpreted that as thoughtless enlightenment, because that was a readily available framework, then later learned that she interpreted her own internal state incorrectly, and something else was going on.
I got some sleep, and now I will try to clarify my thoughts here a bit. I think a core disconnect for me is that I do not think of myself as entirely the same thing as my body. My personal experience of consciousness feels limited. I can control my body, to an extent, but there is a lot I can't really do, like turning off my senses in response to a negative stimulus. Eventually my nose will 'get used to' a bad smell, but I seem to have zero conscious control over that, some shadowy corner of my brain that I can't touch is responsible for deciding when a bad smell can start being ignored. My, subconscious if you will.
My general model of this would be that my consciousness is a sort of learning/pattern matching thing that has significant but not total control and understanding of a more complex and larger organism that is the whole me. It feels like I get a lot of noisy signal from my subconscious systems, and as such, miscommunication is common. I would imagine that like most aspects of the human condition, some people get more noise and some get less, and on the other side, some are quicker/more confident in their diagnosis of the signal. I think being 'wrong' about experience is basically when the conscious self mistakes noise for signal from the less conscious bits.
When I read the original "I achieve jhana" bits, and then when I started to read this post, I thought about the Sam Harris example of "I am enlightened." I think it is the perfect example for all those jhana-ites out there. Thanks for citing it.
But I feel like you can fail to notice having thoughts. I don't know what it would mean to not notice that you're not really blissful.
With the Sam Harris example, I wonder whether she had some real spiritual experience that changed the quality of her thoughts somewhat, so that she didn't have the thing she recognized as thoughts, but was having something that the master could point out.
The lady in the Sam Harris example was doing most of the same thinking as everyone else, but somewhere between her having the thoughts and being able to talk about them, she was doing something different than normal. This could look like not forming short-term representations of thoughts, not forming memories of representations of thoughts, forming memories of thoughts but not trying to notice them, trying to notice memories of thoughts but not being able to, or recalling thoughts but discarding them due to conflict with another activated worldview. Or, more realistically, some of several of these at the same time.
The example lady seems clearly mistaken because we have a model of how this lady works, and "thoughts" are things in this model that do things like occupy short-term memory, or represent plans for later recall, and the model says the thoughts are still there doing their thing (or else she'd be doing a lot more shambling and drooling), there's just this noticing-and-reporting problem. But even absent such a model of the thing itself, if you suspect that the noticing-and-reporting system is behaving badly, that should be a tipoff.
Could you report that you were massively blissed out, but actually be mistaken in a similar sense to above? Well, you'd have to have mentally represented your state as being blissful even in the absence of bliss, or had accurate short-term representations but formed a long-term memory of bliss anyhow, or had accurate long-term memory but altered it when recalling it, or recalled the memory accurately but overwritten it due to conflict with another activated worldview.
I'm deeply curious what your experience treating addiction is like, as a psychologist. Because my gut reaction was that of course people can honestly be wrong about their own internal experiences, of course their own mind can lie to them; addicts in withdrawal experience this all the time.
From personal experience, when you're trying to quit a pack-a-day cigarette habit, your mind lies to you constantly. You cannot believe how many excellent reasons there were, from bills to upcoming finals to job applications, for me to start smoking again. And it never felt irrational, it never felt like a "the the" moment; it always felt internally like the logical and rational thing to do, until I'd quit and relapsed a few times and got that feeling that... like, my logical brain was lying to me. I always conceptualized it as my inner will versus my logical mind, and every time I would get strong cravings, my logical mind would generate a ton of good, rational, sensible arguments for smoking again, and eventually irrationally rejecting them was the only thing that proved effective. I'm not even sure how I would characterize my internal experience at that time.
But, rather than go off my personal experience, I think the right place to look for further insight is addicts. You seem to be struggling with issues of how people deceive themselves about their own internal experiences, and addicts are a wide class of people with deep internal conflicts and struggles of self-perception that do not pattern match to mental illusions or "the the" tricks. And you've got excellent access to them.
Yes this. Also true of a person in the grips of an anxiety attack or a depressive episode. And of course to a lesser degree true of all of us all the time because our level of self-awareness rises and falls constantly and we are almost always telling ourselves stories of one kind or another.
Your rationalizations for why you should smoke were metacognition. The impulse to smoke and the brief instant of stress relief were simple cognition that exists and hence “true” (albeit at a cost of greater stress later.)
According to the reframing from my group therapist, working on replacing what an addiction provides is much more effective than focusing on how addiction will hurt us: in the case of stress, working alone or with a group to develop stress tolerance skills which would be much healthier, more rewarding, and more sustainable than cigarettes-- so addressing the original cognitions to make tackling them with metacognition easier.
Wow, I think discussing this untangled the thought for me suddenly. Cognition is always inherently “true” because it simply is while metacognition can be true or false, and language to represent metacognition can be true, false, accurate, inaccurate, etc.
I see a couple of cracks in the idea that people can't be wrong about their subjective experiences.
First, while I grant the "I didn't realize I was hungry" example of someone not being aware of their inner reality, it feels like there should be a distinction between being unaware of something and being unwilling to accept something. Example: suppose a child is clearly afraid of a dog, but is ashamed of being fearful. However, if you ask them, they say they aren't afraid and you cannot get them to admit it. Some children in that situation might be lying in the sense that they know that they are afraid, but they don't want you to know. However, I'm pretty sure that there would be other children who will not admit to themselves that they are afraid. Maybe they're repeating to themselves in their head "I'm not scared, I'm not scared, I'm not scared". It seems to me that this child is choosing to believe something factually false about their inner experience, in a way that is different than simply being unaware.
Second, vaguer objection. People have incorrect memories all the time, including of their own actions, attitudes, and experiences. Is there a reason there should be a time limit for that memory shift? Can someone remember their experiences of a few seconds ago incorrectly? I would assume so, given the evidence that we can misremember our recent actions or motivations. (Why did I come in to this room? Where did I set my keys?) If we can have those incorrect beliefs about such recent experiences, what is so different about the present?
I agree "the present" is not usefully distinguishable from the recent past. The time it takes to speak a sentence about "the present," and in which time the present has become the recent past, is on the same scale of time that it takes for a "what was I doing?" lapse of attention to occur. By the time I can report it, it is a memory and not present experience.
There's another aspect to this - when someone says any variant on "you are not feeling the way you claim", it has in my experience generally been about power. What they are saying is "I am so much more important than you that what I say about your internal state is true". Except not quite - sometimes it's "my immense superiority over you allows me to perceive you more clearly than you do yourself" with the speaker honestly believing in their immensely superior perception.
This is the kind of thing I expect to see from the kind of therapists who participate in forcible deprogramming - their job as they understand it is to make the defective patient behave a little bit more like acceptably normal people. It also sometimes comes up in situations presently labelled "rapey" - "you really ARE attracted to me, I know it" and variants.
It's also true that quite often outsiders can better predict someone's behaviour than that person can themselves. And the behaviours involved are generally considered to go along with internal state. Many parents predict correctly that their cranky child will be less cranky when fed, and summarize their insight with "the child is hungry" and even "the child is behaving this way because they are hungry". Ditto, sometimes, when the child in question is overdue for sleep.
Indeed, part of raising children may include teaching them to recognize situations where their body needs something, and correctly realize what is needed; saying "I'm hungry" or "I'm tired" is optional, but often comes along with acquiring this self knowledge.
But I still don't think it's OK to describe someone else's internal state in contradiction to their self description, if only because their state is quite likely to change to "angry at you", whatever it was previously.
I agree that it's usually about power. The issue with "never contradict a person's lived claims" is that while it seems like the more compassionate option no one is able to actually sustain it. It too breaks down into a power game where some people cannot be contradicted and others can.
I think the 'trick' Scott describes here is a pretty good way of viewing most examples of 'experience being wrong', mostly because it demonstrates that feeling a specific way often doesn't lead to effective outcomes. If you take some drug and report "wow! Time was moving so slowly!" then you, or even more likely a listener who believes you, may think that taking this drug makes you more productive. I think it's fair to call a statement false if there are many interpretations and implications of the statement that are not true, and by that criterion I'd say 'time feels like it moves slowly' is mostly false.
Trying to judge the plausibility of jhanas using this framework does lead to many obvious questions. 'Is someone experiencing jhana visibly more active?' for example. There is no way you could get a satisfactory answer from just one external criterion, but combining many together... Maybe.
Just looking at people's testimonies, jhana does feel suspicious under this framework, for example "experiencing jhana is better than sex, but I'm not addicted to it, or feel a strong desire to constantly do it". This implies that jhana is substantially different from a form of happiness like sex, which means that there may be other non-obvious ways in which sex!happiness and jhana!happiness differ.
>This implies that jhana is substantially different from a form of happiness like sex, which means that there may be other non-obvious ways in which sex!happiness and jhana!happiness differ.
I think we could find counterexamples where that conclusion doesn't follow. Good sex is very pleasurable, yet I believe there's a subset of asexuals who can derive pleasure from it, but don't actively pursue it, or find it particularly addicting.
There's a common meme where people (possibly most commonly teens?) joke they don't feel like taking a shower at first, then don't feel like getting out. Warm water does feel good and relaxing.
Somehow repeating the experience of having a shower daily doesn't immediately self-correct the reluctance to get in.
I think separating the qualia of happiness (the various kinds of it, if you want) from the other parts of the experience that accompany the feeling is probably high yield.
It's not unreasonable to think it is some other part of the experience that causes the difference, not necessarily just sex!happiness being distinct from jhana!happiness (which may still be true).
One dimension to consider - claiming an absence of experience is different than claiming a positive experience. So if I say “I am experiencing an absence of thought” or “... absence of hunger”, it’s entirely possible that thought or hunger is going on in my mind but outside my field of awareness or cone of attention. However if I have a positive subjective experience of bliss, I don’t see how that qualia can be “mistaken”.
I think it’s also possible to lie to oneself after the fact. See: false memories. But this doesn’t really fit the specific scenario of sitting down and doing a repeatable thing that produces bliss; it’s more about long-term memory being mutable/unreliable.
Suppose someone is simultaneously receiving stimuli that should cause both bliss and anxiety. Perhaps too much caffeine at the same time as the other stimulus.
(This assumes contradictory feelings can exist, that anxiety is not a sort of anti-bliss)
Could this person's anxiety be outside of their cone of attention, in such a way that they can describe feeling blissful due to the other stimuli, but would report discomfort otherwise?
If anxiety is a bad example, also substitute with tinnitus (which is easy to forget, but annoying when noticed), back discomfort, or hunger.
We can still answer that this does not matter, because tautologically the overall end-result feeling is still what was faithfully reported. But the overall feeling may have components that are not reported faithfully, and that could be a useful way to talk about things like experiencing 7D objects.
Perhaps if we could conceive of snapshotting just the visual experience, without the other impairments and drunkenness, so that you could examine the visual qualia sober and at length, it would still look like intricate psychedelic geometry but not evoke confusing 7D descriptions.
Back to the bliss with filtered anxiety: if we could find a neural correlate for two contradictory feelings, and show that a person's cone of attention is selectively filtering one of them, isn't it sort of ambiguous at what level of perception the filter really sits? The malaise or discomfort that's not noticed in the moment can be noticed in retrospect, in my experience ("oh, I've been uncomfortable because of X for the last minute, and I'm only paying attention/noticing now!")
There may be a use in separating the "available" or "input" qualia vs. the noticed, or highlighted qualia. (And acknowledging again it doesn't exactly mean someone is wrong about their own experience. They're only reporting the noticed qualia, while I also want to call input qualia real and meaningful)
Possibly I'd be tempted to model Sam Harris' woman as having 'actually experienced the thoughts' (for the definition of experienced I'm using here), but not having noticed them.
Then, we could both say she's faithfully reporting what she noticed about her internal state, while still mistakenly overlooking qualia that she discarded as unimportant.
If someone has a good time while successfully ignoring back discomfort or tinnitus, that's not exactly a memory error, and it doesn't argue against the tautology, but can't we still usefully point at some other real internal 'feeling' that went unreported through inattention?
I see no puzzle here at all. We are bombarded with all kinds of sensory experiences and can only focus our attention on very few at a time. If you are not used to attending to your hunger sensor, you will not report a hunger experience.
In addition, our brain/mind is not well modeled by a single agent, and the one answering the question may not be the one experiencing the qualia in question. I see it all the time in people who tend to dissociate severely, and I assume you have seen plenty of those in your practice.
There was a theory that we are made of multiple sub-agents, each of which can receive attention and control as needed in each situation.
It was popular a very long time ago; I've lost track of the consensus. An example book about it: "Multimind"
https://www.goodreads.com/book/show/2859264-multimind
The theory certainly matches my experimental observations, including the ones described in this post.
There is a homunculus. And it itself has a homunculus inside of it. And that one too, and the regression is infinite, because it's actually two mirrors pointed at each other. This is a necessary feature of self-consciousness; what Kierkegaard would call spirit and Heidegger would call Dasein.
So, as long as you're a reflective spirit / Dasein, both sides of this argument are always, necessarily, true. You can always apply the tautological solution, since there's always a higher reflection, being tricked into immediately experiencing whatever it is by a lower reflection. Also, there's always some level at which a person is wrong about their experience, because there's always a lower reflection, doing the tricking.
I think this whole post is reaching for the concept of mediated versus immediate experience. Scott is insisting on the existence of immediate experience, at some level. The commenters are insisting that experience is always mediated by various reflective processes. I think both are right.
Thanks for this. This clarifies what seems to me to be my internal impression of the situation. It really does seem like a hall of mirrors, which is not actually infinite but just two mirrors that together create an illusion of infinity. I guess I should make a note to read about Kierkegaard's notion of spirit and Heidegger's notion of Dasein at some point.
Edit: after more reflection I think a better analogy is that it feels like just one mirror, a single reflective entity, which is sufficiently flexible to circle around into a cylinder and reflect upon itself.
Maybe the two mirrors are the two halves of the brain talking to each other?
That's a very interesting and IMO important insight into consciousness. I think that definitely having the qualities of being a quine, of self-reference, self-reflection, or however you'd like to label it, is a necessary process for what we call consciousness, and I think I understand it a little bit better now.
In my mental model of people, there are multiple different conceptualizations of "self". Think of it like the Freudian ego and id concept - the ego may legitimately believe that the person is not hungry, although the id would describe its current state as hungry.
This neatly solves the paradox - people have multiple states of being simultaneously, so it's entirely possible that they could be accurately reporting that they are not hungry one minute while they're most attuned to their ego self but then end up identifying that they actually were hungry when they are instead attuned to their id self.
It's a little woo, but I hope you can grasp the underlying concept - this paradox arises due to the simplifying assumptions made that people are coherent and singular, and a deeper evaluation of that assumption resolves the paradox.
Two things come to mind. Not saying they're counterexamples, just food for thought.
First, I've seen people (and had the experience of) being woken up, immediately insisting that they weren't asleep, then usually realizing that yes, they were asleep.
Second, often when I doubt someone's subjective report of their own state, it's not that I think they're wrong or lying, but more that I think they have some control over it when they act like it's something that happened to them. Like if someone is getting angry about something, I don't doubt that they're angry, but I do think that they could choose to calm themselves down and instead choose to stew in their anger. They aren't wrong about being angry, but they're wrong about whether (and to what degree) they're choosing to be angry.
A person first waking up making inaccurate reports is experiencing hypnopompia and hence their observations about consciousness are expected to be less accurate than a person that is fully conscious. Technically this is important to note for data about meditation, as a meditator in hypnopompia (or hypnagogia) will be making less accurate reports about meditation than a meditator that is fully conscious, but this is really only a point that comes after concluding "qualitative reports on meditation can yield meaningful data."
My ex would never be hungry but would experience the side effects of hunger: light-headedness, headache, irritability, etc. And I'd say, "Are you sure you aren't just hungry? Let me get you a snack." "You don't know me! You think you know everything!" Hand them a snack… two minutes later, "La la la… did I tell you how well my presentation went today…"
You won’t be surprised by the ex part.
I'm inclined to agree with the proposed tautological solution to the title question: if you define "internal experience" as exactly the subjective component of your internal state that you can't be wrong about, then I think that's perfectly reasonable. Of course this might differ from any objective physical state (even one that we don't fully understand), which you certainly can be wrong about.
This feels mostly like debating definitions though, IMO the more interesting part is, where does the subjective and objective diverge, and why? The post gives some interesting examples of such cases, but I'm left feeling a bit unresolved about the dynamics.
Likewise regarding lack of resolution; I’m almost tempted to reframe the whole thing in purely consequentialist terms: “if the goal is objective, then objectivity needs to be combined with subjective accounts. If the goal is subjective, then subjective accounts are most relevant.” This works in practical situations, but accomplishes little for advancing philosophical thought. (That almost feels like a tautology too, haha.)
This seems relevant, with other examples of ways people seem to be wrong about basic features of their own visual experiences: https://philpapers.org/rec/SCHHWD-2
And supposing that it's true that there are certain kinds of features of our own experiences we couldn't (or practically couldn't) be wrong about, we could still ask whether the claims under discussion in the jhana debate are of that kind. And at least some of them clearly aren't. Claims like "this experience is ten times as intense as orgasm" are the sort one could be wrong about even if we think our epistemic access to our experiences is pretty good, because it involves using memory and comparing experiences of different kinds and at different times, among other things. "This hunger is twice as intense as the thirst I had yesterday" can be mistaken even if "I feel hungry right now" can't.
Thank you, this seems to align with my view. It seems pretty obvious that a claim like "this experience is ten times as intense as orgasm" is just made up nonsense. Experiences are ordinal, not cardinal. All you can honestly say is that "this experience is more intense than that experience." There is no way to assign any cardinality to it.
It doesn't?
Mistaken != nonsense.
And psychometricians would disagree:
https://en.wikipedia.org/wiki/Stevens%27s_power_law
> Without assuming veridical interpretation of numbers, (Narens 1996) formulated another property that, if sustained, meant that respondents could make ratio scaled judgments, namely, if y is judged p times x, z is judged q times y, and if y' is judged q times x, z' is judged p times y', then z should equal z'. This property has been sustained in a variety of situations (Ellermeier & Faulhammer 2000, Zimmer 2005).
Also important:
https://en.wikipedia.org/wiki/Weber%E2%80%93Fechner_law
> Activation of neurons by sensory stimuli in many parts of the brain is by a proportional law: neurons change their spike rate by about 10–30%, when a stimulus (e.g. a natural scene for vision) has been applied. However, as Scheler (2017)[23] showed, the population distribution of the intrinsic excitability or gain of a neuron is a heavy tail distribution, more precisely a lognormal shape, which is equivalent to a logarithmic coding scheme. Neurons may therefore spike with 5–10 fold different mean rates. Obviously, this increases the dynamic range of a neuronal population, while stimulus-derived changes remain small and linear proportional.
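As an aside, the ratio-consistency property quoted above (that z should equal z') follows algebraically from Stevens' power law. A minimal numerical sketch; the exponent and constant here (a = 0.33, k = 1.0) are illustrative assumptions for the example, not measured values:

```python
import math

# Stevens' power law: perceived magnitude psi(I) = k * I**a.
# K and A are illustrative assumptions, not measured psychophysical data.
K, A = 1.0, 0.33

def psi(intensity):
    """Perceived magnitude under Stevens' power law."""
    return K * intensity ** A

def judged_times(ratio, stimulus):
    """Stimulus judged `ratio` times as intense as `stimulus`.
    Solving k*y**a = ratio * k*x**a gives y = ratio**(1/a) * x."""
    return (ratio ** (1 / A)) * stimulus

x = 10.0          # baseline stimulus intensity
p, q = 2.0, 3.0   # arbitrary judged ratios

y  = judged_times(p, x)    # y judged p times x
z  = judged_times(q, y)    # z judged q times y
y2 = judged_times(q, x)    # y' judged q times x
z2 = judged_times(p, y2)   # z' judged p times y'

# Both orders of magnification land on the same stimulus --
# the property Narens (1996) formalized for ratio-scaled judgments.
assert math.isclose(z, z2)
assert math.isclose(psi(y), p * psi(x))
```

Any positive exponent gives the same result, since z = z' = (pq)^(1/a) · x either way; that the property holds experimentally is what supports the claim that respondents can make ratio-scaled judgments.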
That's interesting, thanks. However, it seems to me that Stevens' power law compares perceived magnitudes of different intensities of the same stimulus. There is no way to compare the perceived magnitudes of different stimuli, let alone complex experiences. So I stand by my comment.
Right, of course...
It depends then how literally they were taking that orgasm comparison (indeed, probably not much?)
Like, if you were hallucinating a sound, you could try comparing it to your general experience of sounds?
Maybe not to the point of being able to do science, though, since that requires an "all else kept equal" setup, which seems very hard to pull off for a state where you hallucinate being "somewhere else" and/or where you are more or less cut off from outside stimuli?
In my experience most of the problems go away if the sentences are rephrased without an "I" (i.e. pronoun plus form of to-be) and instead describe what goes on as noticing.
In the first example someone doesn't notice the feeling of hunger. Done. They may lie about it or not but the noticing makes the process clear and doesn't attribute it to some inherent quality.
The time example is more difficult because multiple different things get lumped into time perception: A) Density of action or experiences: times without action can be boring and feel long, and get described as time going slow.
B) Felt measures of biological clocks: there can be a distinct sense of urgency, or of time having passed, without explicitly checking a clock and independent of the amount of things going on. The same type of clock that allows some people to wake up at a planned time.
Additionally, for A, there is a distinction between noticing actions in short-term memory versus long-term or episodic memory.
This mix leads to the feeling one may have sometimes that time seems to go slow and fast at the same time depending on how you look at it. A bit like an optical illusion with two readings as in the bearded face/woman under tree example.
Depending on these cases the example would be rephrased as
- noticing there is a lot going on right now (maybe interpreted as time going fast)
- noticing a memory of an episode with many activities (maybe interpreted as time going slow)
- noticing a feeling of urgency, e.g., of getting a train (maybe interpreted as time going fast)
And so on.
I suspect there's something along these lines which is akin to the placebo effect. Any time there is a trial of a new medication which is ultimately ineffective, there are always a small number of people who feel "much better" from something which doesn't work. Indeed, they may become staunch proponents of the treatment. Either this needs to be a very niche positive effect (only works on people with e.g. rare mutations), or people have tricked themselves into believing that the medication is doing something positive.
No, that's just regression to the mean: some number of people would randomly have felt better anyway and happen to have undergone treatment around the time they started recovering naturally.
Well, might as well take this to the controversial places it wants to go.
If someone says, "I experience that I am a *different gender than my physical gender*," can they possibly be experiencing the feelings of that other gender--have they somehow mastered complete telepathic empathy and understanding?
What they mean can only be, "I am experiencing that I am feeling what I believe that other gender feels."
(To be clear, I don't actually care if gender revolves around feeling a _correct_ feeling or not.) But aren't people clearly wrong about their own experiences when they claim to experience something when they cannot know what the correct experience is?
So while there are very few externally objectively correct experiences, this is an example of one of them. And once we've demonstrated that some experiences are objectively experienced wrongly, where should we draw the line?
Considering the intro "might as well take this to the controversial places" acknowledging how upsetting the topic is for many folks, & the conclusion "objectively correct" asserted without any evidence, I am not confident in good-faith discussion, but I will go ahead and offer some anecdotal data, as the topic is pretty close to my heart.
First, a number of my best friends are trans, and they all had to spend YEARS developing any confidence that they share any experiences with other trans people of the same gender as them, let alone cis people of the same gender. Transitioning is not about perfect emulation of a cisgendered person’s experience--and in fact, many trans people tend to feel they have more in common with other trans people of various genders than cis people of the same gender. No one is making the claim of identical qualia REGARDLESS of whether self reports of qualia are reliable.
Second, primary goals for transitioning are usually a combination of alleviating anxiety & alienation, increasing confidence & self-acceptance, and saving lives. None of these goals are “wrong” or need any “lines drawn” to me, as I approach the topic with basic humanist consequentialism. If you think otherwise, I highly recommend including some notes on what moral perspective you are approaching the topic to better facilitate discussion and transparency.
Third: physical sex is expressed in bone and fat density, hair, genitalia, vocalization, etc., along scales with no clear line between one and the other, only a pair of normal curves that meet in the middle. Physical gender, on the other hand, is expressed in neuron structures, with a striking amount of structural similarity between cis women attracted to men & trans women attracted to men, followed by a high similarity between trans men attracted to women and cis men attracted to women, and finally either medium-high similarities or a lack of data for various other genders, sexualities, and sexes. (As there are ten times more neurons in a human brain than there are humans, with 2^100,000,000,000 possible states even without counting connectivity states and tubulin data, there are of course no identical human brains, not among cis people of the same gender or even identical twins, only patterns and divergences.)
I'm reminded of a line from Dennett's "Quining Qualia" where he quotes Wittgenstein:
"Imagine someone saying: 'But I know how tall I am!' and laying his hand on top of his head to prove it." (Wittgenstein, 1958, p.96) "By diminishing one's claim until there is nothing left to be right or wrong about, one can achieve a certain empty invincibility..."
Basically, the only sense in which we're guaranteed to be right about what we're experiencing is a sense in which claims about what we're experiencing are pretty much devoid of content. If feeling hunger is a matter of being in a state with typical causes/effects, you can be wrong about whether you're feeling hunger. But if feeling hunger is just being inclined to call whatever state you're in "hunger", then sure you can't be wrong when you say you're feeling hunger, but that's not because there's some substantive fact about yourself that you're reliably tracking.
Yeah, this.
And to continue on the Wittgenstein thing, as soon as anything is reported (and probably before, since there's no private language in Wittgenstein), language gets into it and it's a whole mess. Even *if* we somehow grant that you can't be mistaken about your sensation of hunger, once you start to *report* it, any notion of it being private goes out the window.
Seems like you could argue in the same way that monolingual people can't really speak any language.
Or rather: Quine did indeed argue that you can never be *certain* about a translation into a different language (the famous Gavagai thought experiment).
Wittgenstein rather argues that you cannot have a private language, as you have nothing to check it against.
What counts as something valid for being checked against? If a group of speakers only speak a single language together, it doesn't seem like they have anything external to check their language against, and everything that they "say" would be self-referential and contentless.
Yes, this is Wittgenstein's point (apart from it being contentless, that is). The only point to language is communication, so you have nothing to stand on except if you seem to be managing to communicate with people. There is no external "meaning" to language, nor any rules, outside of this. Language is a "game".
https://en.wikipedia.org/wiki/Language_game_(philosophy)
'The language is meant to serve for communication between a builder A and an assistant B. A is building with building-stones: there are blocks, pillars, slabs and beams. B has to pass the stones, in the order in which A needs them. For this purpose they use a language consisting of the words "block", "pillar" "slab", "beam". A calls them out; — B brings the stone which he has learnt to bring at such-and-such a call. Conceive this as a complete primitive language.'
—Philosophical Investigations
It doesn't seem like the workers are communicating anything that is heterogeneous between them. They may as well be parts of a single machine using nerve impulses or a serial bus. Physically, this seems fine. We can treat them as a single entity. But then it's no longer clear why their activity is different from a "private language".
The ideas in Wittgenstein is that since a language only has any meaning as communication, a private language doesn't make sense. You can test your regular language by talking to people and see if it works, but how would you test your private language?
The question of whether a person experienced a jhana is unlike the hunger or happiness subjectivity question -- in my mind -- because jhanas are a thing described with reference to an external tradition of practice and expertise. Like the woman claiming enlightenment, the question is in reference to an outside body of wisdom accessed through teachers who presumably are further down the road than the person reporting the subjective jhana or recent enlightenment experience.
Picture a new-ish yoga student who has mainly learned yoga from YouTube videos and books. After some diligent practice, they feel they've nailed Half Moon pose. They're like "yep, it looks right, it feels right; I've got it." But then some weeks later they have the chance to take an in-person yoga class with an experienced teacher who says, "actually, your hips do this in half moon pose, your leg goes here instead, and the whole pose should feel more like X than Y." The yoga student wasn't lying about their experience before, but they lacked sufficient background and context to accurately assess their experience relative to the tradition in which they were practicing.
Many comments in the jhana discussion seemed to argue that people were intentionally lying about their jhana experience in order to seem special. Many of these people seemed to dismiss jhanas as a real experience because it seems to them supernatural like levitating or mind reading. Once you step inside the Buddhist tradition, it becomes clear jhanas are not a magical supernatural kind of thing. But also, one could see how people reading books and practicing at home without working with a teacher might also be guessing about things they don't know a whole lot about -- and might also be bragging to get attention. I don't imagine that's what most people are doing, but you could see how some people might. How do we describe that yoga student's experience relative to half moon pose -- mistaken, I guess we would say, right? Not mistaken about how it felt to be in what they thought was half moon pose, but mistaken that they had accomplished what the yoga field calls half moon pose.
There's a whole spectrum of ambiguity that also exists because different yoga (or meditation) teachers might disagree somewhat about whether the thing being described or performed (whether jhana or half moon pose) constitutes an accurate instance of that thing or not.
Another example might be whether a person had a manic episode or not -- a huge amount of psychiatric diagnosis falls into this realm of question. There's the person's self-described experience; there's an external body of expert information (the DSM, research, etc); and there's someone with more experience (one or more clinicians) assessing it. There exists room for error -- lying, mistakes, confusion, inaccuracy, expert disagreement -- at all three levels.
Delusion is a word that gets used a lot in both Buddhism and psychology to describe the situation in which a person claims to not be having an experience (like anger, say) even though their behavior strongly suggests they are having that experience. The whole parade of psychological defenses exists in this same weird territory where people at one level are having an experience (or part of them is) and at another level they are disowning or unaware that they are having it. Because we aren't these unitary selves, I think it's often possible to be deluded about our own subjective experience. But the assessment of delusion is made (often controversially) by someone from the outside who brings some deeper or wider expertise about how to identify delusion.
If you tell someone what they're going to experience if they do something right, that's going to dramatically increase the chance that they experience it - or think they experience it, or report experiencing it.
I don't really have anything to say on this subject myself, but I feel like it is worth dropping some relevant links :P
Eric Schwitzgebel has written a bunch on this subject, Philosophisticat has linked one of his papers, here's another: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/Naive1.pdf
And here's Luke Muehlhauser posting about it on LW a decade ago, largely drawing on Schwitzgebel: https://www.lesswrong.com/posts/J55XeCNeF7wNwgCj9/being-wrong-about-your-own-subjective-experience
Actually, wait, no, I have to point out this fascinating bit from Schwitzgebel and Gordon's paper on human echolocation: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/Echo.pdf
> One might think that the blind [...] would be immune to such errors, but that is not the case. For example, one of the two blind subjects [...] believed that his ability to avoid collisions with objects was supported by cutaneous sensations in his forehead and that sound was irrelevant and "distracted" him. [...] [I]t was only after a long series of experiments, with and without auditory information, and several resultant collisions, that he was finally convinced that his judgments were based on auditory perception. Similarly, Philip Worchel and Karl M. Dallenbach report a nearly blind subject convinced that he detected the presence of objects by pressures on his face. [...] In fact, so common was it [...] for blind people to think that they detected objects by feeling pressures on their faces, rather than by echolocation, that their ability was originally called "facial vision."
On a quick perusal, I don't see the method they used to rule out the face stuff. My initial thought is: sound hits the body everywhere, not just the ears, so maybe their faces actually are sensitive enough to pick up some sounds.
What if, after many vision tests (and a few hundred terrible car accidents), I know that when I subjectively see “red” I could be seeing red but I could be seeing what others call green. I see both as “red.” Knowing this, I say, “I believe I see red.” This is subjectively true — but also highly equivocal. For example, if I say “I believe I see red” while driving toward a traffic light, I’m probably bracing for a *possible* collision. Does this equivocal state fit neatly into your two categories of subjective experience?
The thing is that if this were true, you wouldn't have car accidents, and no vision test would detect this. In fact, assuming that this had been the case for you since birth, you would never know or have any way of suspecting. All your knowledge of color names comes from people describing things that you see. If people describe a certain wavelength of light to you as red when you are first learning language, then that's red to you, full stop. The traffic light on the bottom would still be green to you, and the one on top would be red, and you would stop or go as appropriate. You would never have known anything different. All of which gets into the issues several people have raised about the potential disconnect between subjective experience and the language used to describe that experience.
No, that's not right. As a toddler he would have been confused by people saying "this is red" and "this is green" when pointing to what looked like the same thing to him. Later, he would probably have been diagnosed with some form of red-green colour blindness, and then he would either never drive, or drive and learn a workaround like "the red-looking light at the top means stop, the red-looking light at the bottom means go."
Oh you're correct, I misread the original. Mea culpa.
I've taken to using the distinction of signal privilege vs representational privilege to talk about this. (https://tis.so/the-limits-of-signal-privilege) In those terms, I would say that yes, you can represent your own signals incorrectly even if you can't be wrong about the signals themselves. If someone else describes different affordances that will interact with your future signal than you do, they can certainly be more correct than you ("I'll ice my knee and feel better!" "You literally don't have a knee."). So ultimately whether you want to call it "wrong" or not depends on whether you're talking in a signal sense or a representational sense.
This is basically another angle to look at Wittgenstein's idea of a "private language"; if you rephrase your question as "can people ever fail to be fluent in their private language?" you can see that you've already gone too far by assuming the private language must exist.
Very interesting stuff. I'm curious, though, what it would mean to see a 7-dimensional object. Consider this situation: I tell you that I perceived a 7-dimensional object while in an altered state, but I only have a popular conception of dimensionality. Then you explain rectilinear dimensionality, i.e. that the count of dimensions is a measure of how many lines you can draw at right angles to all other lines in the set. I consider and say: I did not understand what I was saying. I had an impression that seemed to me 7-dimensional, but it was not actually a 7-dimensional object by this definition, a definition which I accept as more true and meaningful than my previous concept of dimensionality. Then I have gained a sort of enlightenment which shows my previous error. Or have I misunderstood?
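For what it's worth, the rectilinear definition is easy to make concrete in code. Here's a toy sketch (the function names are mine, not from any standard library): dimensionality as the largest number of nonzero vectors you can find that are all pairwise at right angles, i.e. all pairwise dot products are zero.

```python
# Toy illustration of "count of mutually right-angled lines" as dimensionality:
# in n-dimensional space, at most n nonzero vectors can be pairwise orthogonal.

def dot(u, v):
    # standard dot product; zero means the two vectors are at right angles
    return sum(a * b for a, b in zip(u, v))

def pairwise_orthogonal(vectors):
    # True if every pair of vectors in the list is orthogonal
    return all(
        dot(vectors[i], vectors[j]) == 0
        for i in range(len(vectors))
        for j in range(i + 1, len(vectors))
    )

# The 7 standard basis vectors of 7-dimensional space are pairwise orthogonal...
basis7 = [[1 if i == j else 0 for j in range(7)] for i in range(7)]
print(pairwise_orthogonal(basis7))  # True

# ...but no 8th nonzero vector can join them: being orthogonal to every basis
# vector e_i forces coordinate i to zero, hence the whole vector to zero.
```

So a claim to have "seen" a 7-dimensional object is, under this definition, a claim about finding seven such mutually perpendicular directions in the perceived space, which is exactly the kind of thing the impression itself doesn't certify.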
Yeah, there's a lot of stuff going on here, tough to sort it all into neat categories. As a general rule, honesty is important to keep distinct from truth, since there's a definite tendency to conflate the two. But when talking about personal experiences, honesty and truth are more entangled than usual, making it very difficult to keep the two apart.
I think it's worthwhile to bring up the idea that people interpret their own experiences. Eyeballs send the image, but you decide what you're looking at. This interpretation is something that you learn, sort of like walking. It is a skill that you build and expand upon, until the interpretation is so baked into your everyday existence that it feels like you aren't doing anything. Memories are formed, then they are interpreted, then the initial memory fades away, while the interpretation sticks around. In this sense, people can be simultaneously honest and wrong about their own experiences, correctly stating the interpretation, while completely failing to account for the initial, less filtered experience.
If you carry this model through to its natural conclusion, you'll notice that it allows for a sort of roundabout dishonesty. If I build a system of interpretation which biases my memories towards a dishonest interpretation of reality, then I use that system so frequently that it becomes second nature (like telling my body to walk), then I've effectively created a way to be honestly dishonest. My interpretations will always be genuine, even if I was acting dishonestly when I created the system which creates my interpretations.
I hope some of that is insightful. This is a pretty neat topic to think about!
Is this what happens with a Delusional Disorder? The interpretation filters are broken and the memories are then malformed. But the paths through the filters become trails then roads etc.
My other question is about the woman who said she was not thinking while the mystic revealed her to have been thinking. You say if she was lying she could just continue the deception but this seems to presume that she knew the truth herself but tried to deceive others. What about the case where she deceives herself? Where she experiences the qualia of thinking but 'talks herself out of it'? You seem to presume a simple and undivided conscious self, is that how you experience yourself?
So it's probably pointless to discuss this because the real answer is "I'll never be in another brain so who knows?"
But I think it's much much more complicated than you're making it here. For instance, you discuss people talking about time slowing down on salvia and explain that there are three "levels" on which they can be wrong, and while they probably didn't really have "more time" in some objective sense, it's absurd to say they were wrong about their subjective perceptions.
But then you add in a parenthetical that literally says just that - the person you're speaking to, the person who is not currently on salvia but who is talking about their memory of a subjective perception, was wrong about that memory.
This happens *all the time.* Constantly I'll remember hating a movie that the internet tells me was bad, and then get corrected that during the movie I actually was really into it, or vice versa. For years, psychiatrists thought they could uncover repressed memories of deep trauma, and the patients legitimately thought they'd had the subjective experience of that trauma. Job interviewers are more likely to hire the first or last person they interviewed, because they have a stronger recall of their subjective perception of them.
People online who want to hear voices or have multiple personalities badly enough genuinely think those things are happening - more likely they're choosing to remember a stray thought as audible, and to give it more power in their memories than it had in real life than they're legitimately lying. And people who claim to have experienced an orgasmic state of bliss from meditation probably didn't but probably genuinely remember having done so.
I don't understand what "wrong" means in this context. I've only used salvia once because it scared the shit out of me. I understand that gravity's vector did not objectively shift 90 degrees and that my body was not being cut into infinitesimally thin slices by a razor-sharp filament. To an external observer, I would have appeared to be lying down on my friends' couch for 15 minutes, pressed back hard into the cushions. But I sure as shit experienced those things at the time.
You sure as shit experienced something corresponding to your comment, but I think it very likely that you have processed the experience to give it a coherence which it lacked in the moment.
Robert Jones has it. Basically our perceiving and remembering selves have two different goals. Our perceiving self experiences sensation and our remembering selves try to put that sensation into some kind of context. That includes changing our memories of subjective experiences if they don't align with the story our remembering selves wants to tell.
OK, that makes more sense to me. It took some time in the immediate aftermath to come up with words that even gestured toward capturing the feelings.
Something that seems to be missing here is the concept of effective communication and interpretation. It may be possible that the thesis here is correct, that a person cannot be wrong about their internal experience. However, some context is required to interpret many of the statements given as examples as being about internal experience. For the person who claims to no longer be angry about their father, there is some true thing about their perception of their internal experience of their emotions about their father. There is also some false meaning that is indicated by their behavior when their father is brought up. Depending on the context, the statement that they choose to make may indicate one or the other of these meanings more or less. In many such cases the meaning much more likely to be interpreted may be false, which would make the person wrong in having chosen those words to convey something about which they are not wrong.
I don't understand Emilsson.
3 dots on 3 planes = 6 degrees of freedom: I guess so? That's assuming the planes are unrelated to each other, e.g., you don't know if they're parallel or how far they are from each other.
3 dots on 3 planes = 3 degrees of freedom plus a new dimension: Um, what? And what's this about oscillators, and what does it have to do with the pictures of dots following curves? The dots on the different planes can't be coupled oscillators, because we would need more degrees of freedom to know the relationships between the planes (or between the dots). My guess is that he's drawing a single dot traversing 3-space and projecting it onto 3 different planar slices, but you have to add a lot more information than the "3 planes, 3 dots" scenario gave you.
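To make my guess concrete, here's a minimal sketch (function names are mine, and the particular curve is arbitrary): one point moving through 3-space, with its shadow taken on each of the three coordinate planes. Each plane shows a 2-D dot following a curve, but the three shadows share only 3 degrees of freedom, because they all come from the same underlying (x, y, z).

```python
import math

def path(t):
    # an arbitrary Lissajous-like curve traversing 3-space
    return (math.cos(t), math.sin(2 * t), math.sin(3 * t))

def project(point, drop_axis):
    # shadow on a coordinate plane: drop one of the three coordinates
    return tuple(c for i, c in enumerate(point) if i != drop_axis)

p = path(0.5)
shadows = [project(p, axis) for axis in range(3)]
# Three 2-D dots, six numbers total -- but not six degrees of freedom:
# all three shadows are determined by the one underlying 3-D position,
# so they move in a perfectly coupled way as t varies.
```

That's the sense in which "3 dots on 3 planes" could carry only 3 degrees of freedom: but only if you're told in advance that they're coordinated projections of one point, which is exactly the extra information the bare scenario doesn't give you.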
I liked this a lot probably because it confirms some of my own thinking on this, where you have some sort of world modeler in your head that generates and experiences qualia, the sensory apparatus hooked up to that that world model, and the external reality feeding the sensors.
I can give myself goosebumps at will by convincing myself that I am cold. I can even make it happen on just particular parts of my body. I experience the qualia of cold and my guess is the wiring that normally goes from my skin to my brain is at least slightly two way. Of course I never actually change my environment but the model in my head can exert control over things that it is wired to.
So, I think I’d agree with what you’re saying that people can’t lie about their internal experience to some extent. I may not be understanding correctly what you’re trying to reconcile/justify at the end with the homunculus fallacy. I know even if I convince myself I’m happy that my mental model can’t stay out of sync with my own internal state and external reality forever. Or at least not optimally. But I don’t think that requires there to be some separate experiencer that exists on its own outside of the rest of me, then again I think I might be missing a piece there.
I always think of all three pieces of this model as being parts of an agent rather than one specific component being the “real” agent. The same way intelligence isn’t a single magical something but a series of specific inputs and information transformations. That stuff functions interdependently. Does that resolve the need for a tiny person watching a tv in our heads?
It seems clear that the brain is always doing a million different things at once.
To me, it would be strange if only a tiny handful of those things produced conscious experiences. The idea that the auditory system stops producing conscious experiences when we 'tune out' a noise, then starts producing them again when we 'tune in' to that noise, despite the system doing almost all the same things in either case, seems absurd.
It makes much more sense to me that the brain is producing all kinds of different conscious experiences at any moment with all of its different processes. And that what we understand as 'our' conscious experience mostly has to do with which of those get encoded as memories, and which of those get control of areas relating to reporting those experiences.
Split-brain patients are a good example here. It seems very clear that both hemispheres of the brain continue to 'think', at a level where it would be surprising if they aren't both producing conscious experience. Yet the conscious experiences that split brain patients verbally report are those with access to the verbal centers of the brain. It takes very precise manipulations to get reports from the parts that don't have access to the speech centers, but when you do they seem to report entirely different thought processes (that I assume produce qualia based on their complexity).
Another good example is sleep and 'unconscious' states. Evidence from twilight drugs is suggestive to me that we are never 'unconscious' so long as the brain is working, we merely have times when conscious experiences aren't encoded as memories and therefore fail to become part of our self-reported history or self-concept. Similarly, while some people report dreaming often and some don't, it seems clear that everyone does dream (the brain is doing the same types of things in each case) and the main difference is again whether those experiences are encoded to memory.
Thus, I think the simple ways people can be 'wrong' about their conscious experiences are cases where significant conscious experiences do not have access to reporting mechanisms or encoding mechanisms. People may be wrong in their memories of what they experienced, even with a .1 second delay in some cases. And they may be having experiences that are not being 'noticed' by the part of the brain that is talking to you, as with the 'not having any thoughts' example above.
As others have mentioned some puzzling experiences with perception, the famous experiments by Gibson, showing that the sense organs actively participate in data collection, should be referenced. I find it interesting that we can reflect on our experience at other cognitive levels, like the experience of interpreting an idea or meaning (about an optical illusion like the Fraser spiral illusion), or the experience of verifying (or in this case falsifying!) in judgment the fact or knowledge that the spiral is actually composed of circles. The experience of experience itself is unique compared to other levels in the scientific method (i.e., methods that result in knowledge, or *scientia*, the Latin root). In any case, the border areas where we might be wrong about our own experiences are dealt with in metaphysics and are worth studying if you are concerned with reality, existence and being.
I think the issue comes down to the fact that, in a structured phrase "I am experiencing [...]", everything after "I am experiencing" is an attempt to communicate some internal experience; observing that, "You're not experiencing X, you're experiencing Y" is an attempt to correct not your internal experience, but rather the language you use to attempt to communicate that experience.
Scott please read my narrative of my phenomenology of a psychotic episode found in the blog post "Yoko Taro is a Dragon from the Future" and give me a brief inventory of which things you think have ontology to them, which are confabulation, and which are whatever else. I have been trying to get any such analysis done forever and mostly people just become agitated and hostile when I try
How about one we have probably all experienced:
"I'm not angry!", said by someone very clearly angry.
I don't think this person is somehow not experiencing the qualia of anger. I just think they (like most people most of the time) are not introspecting on their current emotional state. They are genuinely mistaken about it.
At least one aspect of this is that words are slippery and experiences are described within a cultural context. I'm from a British culture rather than an American culture, and view American emotional displays as basically "performing" what people have been told to do via movies and TV for over a hundred years.
So, to take one example, grief. The combination of my personality and my culture mean that I don't "perform" grief and, more than that, I think I mostly don't feel grief the way others act it out. An American psychiatrist (perhaps not our host!) might insist that I have some sort of PTSD, that I am so upset by the loss of a loved one that I simply refuse to process the experience. Well, you can insist that all you like, but I think it's BS and an example of US cultural projection.
So that's one example of how things can get lost in communication, and how something like jhana can be described differently by different people.
Another example is how I interpret what was experienced. Like Tolstoy, in my teens I suffered from complex partial seizures which are a form of epilepsy that, in my case, never took over my body, but did have my mind experiencing something like a very pleasant waking dream.
If I were a religious person, I'd probably claim this as some sort of religious experience, interpret in that light, and very soon I'd be remembering it that way, not just a pleasant waking dream but angels, god talking to me, etc etc.
If I were a junkie, I'd probably claim this as some sort of drug experience and use it to justify my on-going habit; I'd remember it in the context of whatever drug I used and would claim the two as parallel experiences that both make me a superior human being to the rest of you squares.
But I am a boring, science-minded square, so I remember these as pleasant waking dreams, as devoid of meaning as the other random experiences of dreaming. And because I don't see any reason to project meaning into them, I don't project meaning into them, and they don't grow to something beyond what they initially were.
I recently watched this video by MachineLearningStreetTalk [1] which had clips that might be useful, so I'll summarize them below.
In the intro, John Searle gives a lecture on consciousness in AI where he describes the categories of subjective vs objective epistemology (knowledge), and subjective vs objective ontology (existence). He says that "lots of phenomena that are ontologically subjective admit of an account which is epistemically objective", and explains how this is crucial for developing a science of consciousness. He draws a distinction between phenomena that are observer-dependent and observer-relative, and gives an example of how money is observer-relative since its value depends on the user. Since all observer-relative phenomena are created by human consciousness, they contain an element of ontological subjectivity. Yet we can still have an epistemically objective science of a domain that is observer-relative, i.e. an objective science of economics. Perhaps that isn't the best example but I think the point still stands. (Full lecture [2])
In a later section [3], Karl Friston gives a computationalist perspective on how feelings or phenomenology may emerge from an in silico replica, as hypotheses generated by a separate model which takes as input data from all underlying models involved in planning, exteroception, interoception, etc. From this perspective it seems easy to reason about how a person's subjective interpretation of ontologically objective information can be incorrect or inconclusive. He also touches on chronic pain, which can be psychologically driven. Another interesting example is alexithymia, in which an individual is unable to identify and describe the emotions they experience, and which is associated with impaired interoception.
[1] https://www.youtube.com/watch?v=_KVAzAzO5HU
[2] https://www.youtube.com/watch?v=rHKwIYsPXLg
[3] https://youtu.be/_KVAzAzO5HU?t=3111
+1 I was waiting for someone to mention the words "interoception" and "alexithymia". In Lisa Feldman Barrett's account of emotion, emotion is an interpretation of one's physical sensations and context. It's not that you ARE angry, it's that your body has a series of physical reactions which you interpret as anger and which you then experience as anger. But under a different set of circumstances, that same reaction could feel like love. This type of thing happens all the time. As someone with moderate alexithymia, I am often wrong about what emotion I am experiencing (though I'm getting better). So yes, you can be wrong about the physiological reaction you are experiencing, but I don't think you can be wrong about the qualia.
> So yes, you can be wrong about the physiological reaction you are experiencing, but I don't think you can be wrong about the qualia
Really well said. Shameless plug, but I'm actually working on an app that aims to improve alexithymia. We just released an MVP and are investigating new tools -- feel free to join the Discord if you have anything to share or suggest! https://www.animiapp.com/
While the topic itself is incredibly complicated to answer, and the best answer I could provide would fall short, I feel the topic hasn’t been done justice until we address the sheer amount of cognitive bias towards normative information that has been expressed recently in discussions. In other words, people are more likely to believe a description of familiar experiences and disbelieve an unfamiliar experience EVEN IF the unfamiliar experience is calculably MORE LIKELY than the familiar ones--- I have been the subject of this dozens of times.
The best example was an appointment with a nurse I had many years ago. I was very unwell but had no idea what was happening to me and thus couldn’t describe my experiences well.
The nurse tried to ask “Did you pass out?”
I insisted, “Well, I described what symptoms I could. I’m not sure if I passed out.”
She got really angry at me, and said, word-for-word, “How could you not know if you passed out?!”
She was not the only medical professional to assume I must have the ability to distinguish consciousness from unconsciousness, despite that ability requiring metacognitive skills that have to be developed and maintained, while its absence requires no metacognition at all -- making it the more likely case if no priors are applied.
The cause was eventually determined to be narcolepsy: experiencing a mix of consciousness levels simultaneously is a defining symptom of the condition. There is a little vindictive joy in being able to hold up the numbers and say “Ha, I was right!” but the deeper issue here was having my description of my subjective experience rejected in the first place, in spite of probability for no identifiable reason.
Reading the enlightenment example enlightened me. I legit stopped thinking thoughts right when I read that. Got 10% of my brainpower back for reading with an internal voice and looking at stuff.
Gonna try to keep the streak alive all night.
I mentioned in the comments of the article on Jhana that I believe for every 1 person actually experiencing Jhana, there were many many more who believed they had reached it but actually didn’t. People have a good time meditating and see the positive benefits, so they convince themselves they’ve achieved Jhana. Without a baseline experience for the bliss of enlightenment, there’s no way for them to know the difference. I’ve had friends who, in response to me talking about the enlightenment of a mushroom trip, say something like “I already have that without shrooms.” When these people finally try mushrooms, they realize how completely wrong they were. They obviously weren’t lying, their mental models of the experience just sucked.
Another example I see of this is with elite endurance athletes. They will often say things like "when you think you've reached a wall, that's actually about 40% of what you're really capable of". There was a time when I was lifting, and a friend with far more experience put significantly more weight on the bench than I was used to doing. I told him it was way past my limits, but he insisted I was capable based off what he'd seen. He turned out to be right. Moral of the story is that, subjectively speaking, I had been pushing myself to the absolute limit, but my experience turned out to be wrong. I simply had no concept of what actually pushing myself really felt like.
What does this entail about depression and anxiety being so prevalent in a world of more material comfort than ever before? Perhaps people could benefit from engaging with the less fortunate to build a sense of gratitude.
I don't think it's that. I think it's that there's a whole bunch of different intensity levels for what's called "Jhana", that can smoothly blend into each other, and so the overall situation is kind of like someone from the Appalachians and someone from the Himalayas talking about their mountain-climbing experiences, and realizing that they mean something very different by "mountain-climbing"
It's not that the Himalayas are "true mountains" and the Appalachians are "false mountains", it's that "mountain" is just a super-vague term that points towards what they have in common instead of how they're (very) different.
Same thing with Jhana. The term desperately needs extra qualifiers added to it to slice things up more finely and make clear what's actually being claimed, it's denoting a pretty big chunk of mental territory.
Or you could just take the path of going "only things over 20,000 feet are real mountains! Almost nobody has ever climbed a real mountain!" and in the meantime Joe is still getting benefits and a nice time out of hiking in his local backyard hills.
Ahhh interesting analogy, that makes sense if the definition of Jhana encompasses a broader range of experience than I had thought. The way its often portrayed on here and elsewhere makes it seem like it’s on the extreme end of possible human experiences. I suppose the people being interviewed are something of “Jhana experts” who you could expect to have a more intense experience than casuals, just like professional climbers will have a more demanding experience hiking in the Himalayas than I do in the Appalachians, even if we’re both climbing mountains.
That said, even if they're both literally "mountain climbing", the 2 experiences are *not even close* to the same thing. I can climb the Appalachians for years and I would not have the slightest clue about the experience of climbing Everest, and nothing I've done before would prepare me. If the difference between the "pro" and "casual" Jhana is as big as the difference between the Himalayas and Appalachians, then I feel like the already fuzzy and imprecise term loses all meaning. If that's the case, then I experience Jhana fairly often when I meditate for 10-15 minutes before bed. I feel pleasantly connected to my body and the universe during and after the sessions. There is usually a specific moment where I feel my consciousness shift into a deeper meditative state. Maybe this is a mild form of Jhana, but I am still a long way from the life-changing spiritual enlightenment people report having at the top levels.
This is why I’m really interested in hearing from people with considerable experience with *both* “Himalayan” Jhana and high dose psychedelics, I want to know how these experiences relate to one another. From the descriptions they sound very similar, and it would be useful to be able to use enlightenment of a mushroom trip as a “baseline” for what you can expect out of Jhana. With psychedelics, there is a considerable range of intensity depending on dosage and other factors. Still, there is a threshold you cross which serves as the dividing line between tripping and not. It’s fuzzy and tough to describe, but I know from experience it’s there, and you know when you’ve crossed it. Would be curious to see if it’s the same with Jhana.
I wonder if this is in any way related to the fundamental attribution error? We categorize the experience of others differently than we categorize our own? So someone else's mind includes the subconscious and their bodily states, while our own mind experiences the subconscious as foreign, part of the environment that the mind navigates.
So... it all depends on where we draw the circle that delineates 'self' from 'non-self.' Scott is describing people as they would describe themselves, with 'self' being only their conscious, holistic thoughts. Other people are including things like 'bodily states' and 'subconscious thoughts' in the definition of the self. I would guess that Scott's model allows more granularity and is therefore a bit more useful, provided that someone had the time and energy to sit and ponder.
In any case, as you mention memories can be constructed after the fact. The past, then, is a foreign mind that we tend to describe as a non-foreign mind. If your memories of a past mind thinking in seven dimensions doesn't actually allow you to do useful computation in seven dimensions then your memories of the past mind are false.
I'm not sure if I've said anything here that Scott didn't say. If this post is redundant to Scott or the thread (I haven't read the whole thread,) then I offer my sincere apologies.
The thing about qualia is that they're kind of irreducible things-in-themselves, that cannot actually be transmitted to other people.
You can see that something is red, and be absolutely certain of it. But if you say "red" to someone else, that's just a word you've learned, that doesn't inherently have any relation to the quale you experienced. We assume that different people have somehow similar qualia when they look at a red thing, but this is fundamentally an assumption, and not checkable.
This seems like a difficulty in resolving this question, because if someone is describing their qualia to someone else, they're passing it through a messy translation filter that might very well communicate something completely different from what they actually experienced; whereas if they just think about it to themselves, then the whole process is sufficiently self-referential as to be of questionable interest. If someone says that they saw a seven-dimensional object while on DMT -- what does that even mean? What is the "real", "true" quale of seeing a seven-dimensional object? Either you know that quale, or you don't; and in either case, you can't judge whether someone else is having it.
If someone is describing a quale, their description may very well be wrong. E.g. someone thinks they are having the quale "enlightenment", and says so, when they're really just a naturally content and happy person. But it's hard to say that this is being "wrong about their own experience", because it's fundamentally an interpretive claim _about_ their experience, not a direct transmission of it (which is impossible). I'm not sure what "being wrong about your own experience" would even mean.
Are we to assume there is one single step in brain computation in which data hits our "consciousness"? Clearly there is processing on the data before it hits our consciousness, and there is processing after.
Example of processing before consciousness: Your optic nerve gets pixel input, which is then translated to colors and shapes, which is then translated into the concept of a polar bear by checking against your world model and other objects you've seen before. You don't have a conscious experience of seeing lines and shapes, though; your first conscious experience is already seeing a polar bear.
Example of processing after consciousness: Let's say you squint harder and realise your first conscious impression was wrong, maybe it's actually a new kind of white bear but not a polar bear.
There may be ways to define what are right/wrong/good/bad ways of this processing occurring. So on DMT, the processing might be converting a drawing of waves into the concept of 7 dimensional objects because it wrongly decides to check and match against your knowledge of high school geometry. Whereas if you're not on DMT you're more likely to check against simple drawings of waves and be like "oh yeah that's just a wave".
But the even bigger question for me is whether there exists a single step at which things hit consciousness. If I write down the brain algorithm as a program with subroutines, will I be seeing an infinite loop* with exactly one "hit consciousness" step? Or, for instance, can I simultaneously be conscious of different parts of my brain outputting different things after different amounts of processing? Or can I sometimes skip the hit-consciousness step altogether, or sometimes hit it way too often inside a single loop iteration?
*Technically it will terminate on death, and to some extent while sleeping, but over a short time interval it's practically an infinite loop.
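To make the question concrete, here's a toy sketch of the structure being questioned: exactly one "hit consciousness" step per loop iteration. Everything below is invented for illustration; it is not a claim about how real brains are organized.

```python
# Toy model only: one unconscious preprocessing stage, exactly one
# "hit consciousness" step, and one post-conscious revision stage
# per loop pass. The open question is whether real cognition has
# zero, one, or many such steps per pass.

def preprocess(raw):
    """Unconscious processing: pixels -> shapes -> 'polar bear'."""
    return f"percept({raw})"

def hit_consciousness(percept):
    """The hypothesized single step where data becomes conscious."""
    return f"aware of {percept}"

def revise(experience):
    """Post-conscious processing: 'actually a new kind of white bear'."""
    return f"revised({experience})"

def mind_loop(inputs):
    reports = []
    for raw in inputs:  # the "practically infinite" loop
        percept = preprocess(raw)
        experience = hit_consciousness(percept)  # exactly once per pass
        reports.append(revise(experience))
    return reports
```

The alternatives raised above would change only the middle of the loop body: calling `hit_consciousness` on several subsystems at once, calling it several times per pass, or skipping it entirely.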
A friend introduced me to an Emerson quote this evening: "A foolish consistency is the hobgoblin of little minds".
I love that you're questioning and analyzing your own assertions here. At least speaking personally, it makes me upweight your thoughts and opinions as I assume they're all constantly being subjected to the same rigor (although I do have to ignore the whole coming-to-the-AI-X-risk-conclusion thing).
I think the fallacy is not the homunculus, but the excluded middle.
"This is also how I interpret people who say “I’m not still angry about my father”, but then every time you mention their father they storm off and won’t talk to you for the rest of the day. Clearly they still have some trauma about their father that they have to deal with. But it doesn’t manifest itself as a conscious feeling of anger."
In this situation, the way out is surely to say: this person both feels angry and doesn't feel angry. There are lots of ways this might happen. For example, they might have two different parts of their mind (e.g. conscious and subconscious), one of which is angry, and the other of which is not. Or they might have an emotion that they don't remember, e.g. they act angrily, but literally do not remember the emotion, so it doesn't feel to them in retrospect (sometimes very brief retrospect) as though they are angry. Or they might be feeling emotions that share some features with anger, and some features with not-anger: perhaps they feel happy, and don't realise that anger is compatible with happiness; and experience an undirected tension, which is like anger but doesn't have a target.
So it seems like there are lots of mechanisms by which one could feel both angry and not-angry; but the folk psychology theory of both the individual and the psychologist is ruling out that possibility for both, so they're stuck in a dichotomy of "either I'm not angry or I'm lying."
Well said!
Well, I wouldn't no-way. We certainly seem to agree on a great many things, but the borders are interesting areas to investigate! Minsky's point about thoughts being ambiguous is that it's a feature, not a bug.
I used to experiment with lucid dreaming. I think of this as sort of being able to flip different qualia switches with my conscious mind. I’ve come to believe these switches do as little work as possible.
For example, I might flip a switch that says I’m listening to a symphony. My conscious mind really feels, “Wow, I am listening to a symphony with a perfect reproduction of all notes…isn’t it amazing that my mind can do this?” But, after long reflection, I don’t think my mind is actually reproducing the same qualia associated with listening to a symphony in an actual music hall. It’s just flipping the switch that makes me believe I am.
It’s a bit like asking people if they can picture a penny in their minds. Many honestly think that they can, yet when asked to draw, they can’t remember if Lincoln faces left or right or where the date goes.
There is an obviously vulnerable component in all this... the reporting part. That's the part doing the "compiling what the theoretical inner experiencer feels into an outside report", even if no real "experiencer" part actually exists behind it.
If you subvert reporting in any way - not just by lying, but by denying it an experience to evaluate and transmit, or denying it the memory of having a certain feeling because lower circuits didn't deem it important enough to encode and filtered it out, or because some part just didn't get enough attention to be included in the report despite being possible to recall later - you could have every feeling and still be wrong in reporting it.
People wouldn't be wrong in their feelings (as they certainly had them before they were filtered out), but they will be unable to report them.
And then other people could see something obvious reflected on your face as it happened (or read instruments), and correctly guess the underlying feeling, yet you would deny it happened, as you would have no memory of it to report.
"The map is not the territory" makes sense, directionally. But better yet, have a third level.
1. Natural language with all its ambiguity
2. Conceptual structure given an agreed upon set of axioms
3. What actually exists exceeds our capacity for modeling or communication
The word "map" often refers to a conflation of L1 and L2, and we have a tendency to overassociate our perspective on the world, what we sense and process, with L3.
When we swallow the bitter pill that we must write code (ie go to L2) to sidestep definitional debates, and give up hope of "being right" or "knowing the truth" except in the trivial academic/synthetic sense that given a rule for rewriting strings of symbols, one may apply the rule correctly or not, we open ourselves up to deeper levels of experience.
I prefer to use English words to those that require a whole cultural and textual tradition to understand, so for me, "Joy" feels more natural than "Jhana". That said, if life feels hard to enjoy, why not choose to find value in suffering? When we observe and study the data that comes thru the channels of pain, fear, anxiety, etc., we treat it as a resource, an asset on our mental balance sheet. And indeed, to suffer for deep purpose sounds better to me than to feel pleasure devoid of meaning.
In most contexts in which I've heard someone say "you're wrong about your experience," they're pretty clearly saying "your experienced qualia is poorly matched with some more objective reality". As applied to the aforementioned example of hunger: the idea isn't "you are wrong about not experiencing the qualia of hunger", it's "your digestive system is objectively asking for food and your perceptual engine is making a prioritization error by not noticing this and providing the qualia of hunger".
As applied to the "false enlightenment" case, the person had manipulated their perception prioritization engine into ignoring categories of experience for attention purposes (in a way apparently different from the intended goal? Most meditative experiences sound to me, from the outside, like increasingly elaborate manipulations of our perceptual engines, but I have no idea).
All to say: we don't usually mean "this qualia is wrong" we mean "this qualia is a poor map to reality". This is an important thing that comes up all the time. I can't think of a case in which being wrong about the qualia itself matters/has any impact distinct from the qualia being a poor match to reality.
I think the point of the "false enlightenment" story was that if it were "true enlightenment", then just waiting a bit and focusing on "do I have any thoughts? how about now?" wouldn't have broken the experience; hence it's largely the same as your first example.
"(though this study suggests a completely different thing is going on; people have normal speed, but retroactively remember things as lasting longer)"
I can very confidently assert that people having a bad trip on psychedelics keep on repeating from one minute to the next "How long is this going to take?" while obviously having a very unpleasant experience of subjective time dilation. There seems to be an element of amnesia to this: they forget that they asked the same question maybe twenty seconds ago, and incorrectly interpret that a very long time has passed. However, this is a very obvious immediate subjective experience while the trip is going on.
As contrasted with ketamine, where a lot of people report amnesia and a subjective experience of time going faster ("wow, where did the last hour or so go?"). So merely the amnesia part doesn't explain it for me.
I've never seen 'never trust a fart' given such philosophical depth.
Re jhanas: I don't know anything about Buddhism. So I don't have any idea what difference a Buddhist would see between 1) meditator A, seeking and finding jhana, and 2) meditator B, seeking and finding Paul Ekman/Darwin 'The Expression of the Emotions in Man and Animals'-style happiness.
One factor that complicates this, and that I never really appreciated until I had a toddler, is that we learn to express these inner states from other people. My son says he's hungry, and he might be hungry, he might be lying, or he might actually not fully understand what hungry means. In that sense I think he actually could be *wrong* about his internal state, because he doesn't fully understand the meaning of the language.
Right! And then they grow into adults with widely varying abilities to be aware of and name their internal experience. Which says to me we are all “wrong” to varying degrees about what our experience is and how best to describe it in a way that communicates well to others. The word “wrong” doesn’t so much work for me here but maybe more like at what levels of subtlety are we aware of our experience and how good is our language for describing it (ie, what’s our basis for calling this experience “jhana” versus “a happy feeling while meditating”?)
I think that last paragraph actually catches a very important distinction. If we take a sort of No-Self as a premise (as contrasted with a permanent Self), the claim "I'm experiencing X" becomes "There is an experience of X", and the counter-claim becomes "No, there is not an experience of X."
It seems to me that the latter one is a harder claim to make than "No, you just think you're experiencing X." To claim there is a complete lack of experience X feels intuitively stronger than claiming someone is wrong about something.
I think this is due to the idea that we think of a person as an observer to their mind states, and that observer might be correct or incorrect on their observations. Whereas if we think experiences simply bubbling up, we remove that part of the equation. No mistaken observations occur, as there is no one observing. I guess this is at odds with most people's intuitive view on the matter, but as a meditator it seems very right and natural to me.
I don't think this means that people cannot be wrong about matters related to their cognition, but I do think it does make the claim "No, there is no experience of X" a lot harder to defend.
I think it's useful to distinguish between attention and conscious experience. It seems possible to have the latter without the former, and so by definition it will be possible to be mistaken about one's own experience (if one is not paying sufficient attention to it). For example, I could be in physical pain from sitting in my chair for too long while reading a book, and yet not notice this pain because I am so engrossed in the reading material. Suppose I stop to reflect now and realize that not only am I feeling a pain in my left foot at the moment, but that I have been feeling it for the past five minutes. If you had asked me during my inattentive phase whether I was in pain, I could have honestly reported in the negative. This seems to go beyond the "absent thoughts while meditating" example, since it's not that I made a past mistake in categorizing my thoughts (as being a 'pure conscious stream' or whatever), but that I really missed a conscious experience.
Of course, in normal situations our being prompted by a question would be a sufficient stimulus to activate our attentional mechanisms, but it is at least conceivable that we can fail at this task. It's possible to go so far as to say that our attentional mechanisms are perfect and incapable of missing something, but that seems a really strong (and demonstrably wrong) claim.
It might also be countered that attention = consciousness, and so I wasn’t really consciously aware of the pain until the moment I realized it. But this still commits us to introspective error, since that would mean that my realization that “Aha, I have been in pain for the past five minutes” is itself erroneous. This does leave open the possibility that the more moderate claim of "we can't be wrong about the experiences we are presently aware of" is infallible though (which is perhaps what you had in mind anyways?).
Just to clarify, I envision the remembering of the past pain not as the realization that one is presently in pain and that probably this has gone on for quite some time, but rather as the true remembrance of a past event with the newfound sensation of pain in the left leg being present in the memory. I myself have experienced this on many occasions.
In my very uninformed model of conscious sensation:
feeling of X = some deep neural network spread around my brain has spit out a low-dimensional output that I've been calling "X"
I'm pretty sure I'm just conflating ANN 101 with the vastly more complex brain network, but funnily enough this model seems to reduce the problem of this post to triviality.
If the "hunger" output is produced by the network beyond a certain threshold, I feel hungry, otherwise I don't. And sometimes the many inputs to the network happen to be *almost* right for "hunger" but in an unfamiliar/untrained combination that doesn't trigger the output enough.
So saying "I'm not hungry" is a statement about the output reading, not the inputs.
This also works alright for the statements "I can see in 7 dimensions" and "I'm enlightened". In this case the error is in what label we give to a new network output.
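For what it's worth, a minimal numerical sketch of that reading-vs-inputs distinction (all weights, thresholds, and input combinations below are invented purely for illustration):

```python
# Toy model: many bodily inputs feed a network whose low-dimensional
# "hunger" output either crosses a threshold or not. The verbal report
# "I'm not hungry" describes the output reading, not the inputs.

def hunger_output(inputs, weights, bias=0.0):
    """Low-dimensional 'hunger' reading from many bodily signals."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def feels_hungry(inputs, weights, threshold=1.0):
    """The conscious report tracks only whether the reading crosses threshold."""
    return hunger_output(inputs, weights) > threshold

# Made-up inputs: (stomach signal, blood sugar signal, smell of food)
weights = [0.6, 0.5, 0.4]
familiar = [1.0, 1.0, 0.5]    # trained combination: reading 1.3 -> "hungry"
unfamiliar = [1.5, 0.0, 0.2]  # almost-right but odd mix: 0.98 -> "not hungry"
```

In this picture both people have hunger-shaped inputs; only the second one's reading stays under threshold, so their "I'm not hungry" report is accurate about the output while saying nothing about the inputs.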
I'm writing this because it seems to work too well to be right, and I'm hoping to get a good scolding and a solid update to my beliefs.
Sure they can: "I've been experiencing blue sky at 12:34:56" can be easily described as right or wrong. And even if you are saying that you are happy it may be because cosmic rays activated speech-related neurons without your brain actually being in a happy state.
You can construct your theory of knowledge with a "you are always right about experiences" axiom, but nothing forces you to, and you get a more confusing and contradictory view that way.
If you ask a two year old what color they see something as, and they answer wrong, I think that might constitute a pretty clear example of someone being mistaken about their experiences in some sense. (I think people who claim to not be hungry may sometimes be doing something similar to that.)
Generally speaking, people attempting to describe their own experiences could mean something very different than what I would mean by the words they say? I would imagine Andres is having some sort of experience which they are choosing to describe as "seeing 7D phenomenal objects", but I suspect I would choose different words if I had the same experience.
I don’t know … Seems like you can have conflicting experiences.
E.g. If a psychotic patient tells you they hear voices telling them that you are actually secretly a lizard person, but they know that’s absurd, and they just want the voices to go away… Both experiences are “real” phenomena, but there are no voices to hear and one part of the brain knows it. The same brain experiences voices, and experiences no voices at the same time.
This also brings to mind the split-brain experiments, that also seem to show conflicting experiences: One part of the brain can be factually incorrect about the nature of reality, and the other part of the brain can be seemingly wrong about what the first part believes and why.
A question then, is what we mean when saying people can’t be wrong about their experiences… Are we taking each experience individually, are we describing a meta-experience of having conflicting experiences, or are we describing the mental process of squaring the experiences…?
Indeed. Sort of like transwomen claiming to be female ... 😉
For example, see the Medium article (archived) titled, "Is Julia Serano right that transwomen are female?":
https://archive.ph/hpXas
Though Serano apparently dances around the definition for "female" without ever saying exactly what "she" means by the term.
In any case, the point is more or less, as you suggested, that the "truth value" of someone claiming to be a member of particular categories -- "male", "female", "messiah", "famous historical generals" -- is contingent on what society deems to be the "necessary and sufficient conditions" for category membership. Individuals claiming membership without evidence of the required "membership dues" can rightly be deemed as mad as hatters.
Hmm… That’s seems like a bit of a non sequitur to what I meant. Definitions of terms, “society deems”, “category membership”… These seem like semantic issues. My point was about how to think about internally conflicting experiences in light of what Scott wrote. Maybe you meant this as a reply to someone else?
Maybe a "bit of a non sequitur", though I had been wondering whether the subtext to Scott's question wasn't the somewhat topical issue of transgenderism, and of the claims of transactivists.
But don't think it's entirely a non sequitur, since your own "factually incorrect about the nature of reality" seems to hinge on the terms and framework we use to describe that "reality" (someone once argued, with some justification, that such terms should always be put in quotes).
My point was, sort of, that however we describe "reality" -- to ourselves or to others -- it seems always to be the case that we use words to describe it -- "semantic issues" from square one. Though I suppose something like, "I see the spinning dancer turning clockwise" as opposed to "the dancer IS spinning clockwise" is maybe closer to the dichotomy of what you and Scott are getting at:
https://en.wikipedia.org/wiki/Spinning_dancer
But, in some cases, we might simply be using a non-standard definition, or be unclear on what the accepted criteria for category membership are in the case of the sexes; but, in either case, I think transwomen claiming to be female have to be seen as "factually incorrect about 'reality'".
The subtext isn’t so “sub-“, as it is spelled out quite clearly and hyperlinked in the first paragraph: “A tangent of the jhana discussion: I asserted that people can’t be wrong about their own experience.” You should browse through that discussion if you haven’t already.
Reading subtext into things is a dangerous exercise, though. Even if I articulated my point so poorly that misunderstandings are unavoidable, the response still comes across as a bit of a Rorschach test…
Either way, I’m not going to discuss gender here.
Don't see any reference to transgenderism in either of the two articles. Couple of comments other than mine in this one.
But still think it moot about "can't be wrong about their own experience", since it seems dependent on how one defines "experience" in the first place, and somewhat contingent on how one describes those experiences.
Though agree with you about "dangerous exercise" -- why I tend to phrase any reference to it in terms of questions & hypotheticals.
But your call of course on "not discussing gender here", though I might suggest Scott pick up the cudgels again on the topic. And that particularly since it seems naturally to follow from the current ones, and since he gave a more or less decent thrashing of it some 8 years ago -- even if some of what he said then seems to contradict what he's saying now -- and since it ties in with my point about categories:
https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/
ITA: Scott Codex remembers the lesswronger technique of Dissolving The Question
Maybe this is already covered by the examples in the main post (but no example matches it 1:1 IMO): a very common experience (almost a trope) while on psychedelics is something like: "Wow, what is this weird feeling I'm feeling? I can't identify it. Wait, maybe it's happiness? Oh, it feels so good to be happy! Wait, it isn't happiness. Ah, now I recognize it: I'm hungry! I better go grab a snack.".
To me, it makes perfect sense to say that this person was hungry all along, but misclassified the hunger due to the altered mental state making it hard to identify emotions. So it makes sense to say that this person was wrong about being happy.
This is not like the example of someone not feeling that they are hungry. The psychonaut knows that they are feeling a feeling; they are just unable to classify it.
Another effect I think is similar is that it reliably takes me a few seconds to differentiate hot from cold.
I'm inclined to think the problem is less the slowed-down mental state, more that the feeling itself (in my experience) is unclear.
I have failed to identify hunger for about 45 minutes on a trip: I felt weak, short of breath, and generally unwell, but even consciously looking for it, I couldn't find anything that exactly felt like hunger. (That's probably not exactly the same as what happens when it takes a few seconds to identify which feeling it is, but my hypothesis is that the feelings themselves are weird and unusual, I think more so than the drunk mind being slow to name them.)
I'd make an analogy with myopia. I'll have a hard time reading without glasses because the qualia comes pre-blurred. It can be hard to classify feelings like you said, but I feel like it's a sensory/qualia distortion more than the altered consciousness/drunkenness. (Not sure if the distinction entirely makes sense!)
My problem with this analysis is that it doesn't do much to define what's meant by "honesty". For myself, I find it helpful to think of honesty as dependent on self-knowledge, which I distinguish from what I call "sincerity": the rendition of what one thinks or feels quite apart from any issue of self-knowledge. For example, if someone has never understood that anger can frequently be a manifestation of underlying anxiety, they may sincerely talk or act as if they were driven entirely by anger and subjectively report no anxiety, even if they were quite apparently motivated by anxiety from the perspective of someone who understood them better; but this would not be honest, because they lacked self-knowledge.
I also recoil at making honesty and lying a dichotomy. I only call it "lying" if it's an intended misrepresentation (by commission or omission), or possibly a really blatant departure from what most people are expected to know about themselves in our culture. The opposite of honesty is dishonesty; the opposite of sincerity is insincerity. Thus, the way I look at things, someone can be honest yet at the same time insincere (because what they say isn't what they believe or feel in the moment). The novel The Sympathizer is a great rendition of how this can be.
Of course not everyone draws these distinctions. But I do think honesty is much more difficult than just saying what you think, which is why sincerity is a very important concept. And I think that assuming people are lying if they misapprehend their own thoughts or feelings isn't a very useful way of looking at what a challenge it is to live honestly and authentically in our world.
As someone who studied and was briefly involved in criminal law, I cannot believe this post took more than 1 word: "yes".
One would not believe how many well-meaning and sincere witnesses will remember things that just did not exist or did not happen. This can be as simple as "the man in red pants came from the right and hit him" / "the man in white pants came from the left and hit him" (where only one man came and hit the victim, and there was no one in white pants until a few minutes later, and the court can see this on the video).
And that is without even touching on false confessions.
To show people how we fool ourselves, I tend to ask people to close their eyes and concentrate on picturing the front of their house. After a few seconds I ask if they can see it with all the details. Sometimes people are a bit insulted: of course! Then I ask them to enumerate one of the details that is repeated, e.g. how many shingles there are horizontally. This tends to break down the illusion that they really saw those details.
You don't have to reject the "experiencer fallacy" outright, it's enough to accept that the experiencer isn't a very consistent entity, but somewhat variable depending on circumstances. I like the "boardroom" metaphor for consciousness, where various modules/processess can bring up their issues for attention and participate in reflection, but no "board member" is there all the time. So, for example, you might be completely sincere about not being bothered by your father issues *at that moment*, but as soon as something reminds you of him, the "board" is reshuffled and suddenly you care a whole lot.
I thought this post was going to be about memory. We know people can be wrong about their memories (e.g. contradictory eyewitness testimonies, false memories implanted by others, or people being sure about where they were on 9/11 but then finding their contemporary diary entries that contradict it).
The post is mostly about people's subjective experiences in the immediate present: I am/am not hungry, I do/do not see a 7-dimensional object. But the question that prompted the post is about people reporting their experiences (of jhanas) in the past, so the unreliability of memory could come into play.
(not saying "and therefore jhanas aren't real", just saying "this is a factor to consider")
Phenomenology started out at pretty much exactly the position you're putting forward here: if we think of qualia as the a priori of experience, i.e. the condition for the possibility of experience, then our experience of qualia is direct and unmediated - there is nothing that can get 'between' us and the essential building blocks of perception. If there is a fact of experience, then that experience is a fact.
In this view, we're each 'authorities on our own experiences', and in experiencing ourselves experiencing something we become immanently aware of the given underlying structures of experience that we have always been taking for granted.
Now, the problem with this idea, imo, is that the fabled 'qualia' which constitute directly experienced objective facts seem to be a myth, and perception is mediated by cognition just as cognition is mediated by perception. Rather than "everyone always knows and is right about what they're experiencing", isn't every 'experience' both mediated by categorization (the dull pain I feel in my right arm, which appears in this localized form only because I have mapped my own body and the notion of pain, the citrus yellow which I can perceive in a certain way precisely because it has appeared with a certain stability in a certain context, and where my 'immediate' experience isn't so much that of a color, but includes, is bound up with, the various contexts within which I would expect to encounter that color - in other words all those neurons are firing as well) and something which requires further reflection, interpretation, for us to make sense of it even to ourselves?
We are always already mediated in our experience by our conceptual frame of the world, just as our conceptual frame is always already mediated by experiences, so there's no 'ontological ground' we can refer to when we're talking about an experience. In talking about it, we are, in a sense, reconstructing a process that has 'reflective depth' to it, and what it is and how we should make sense of it isn't by any means a trivial question.
I (also) wondered if the post was going to be about the political idea of "lived experience", which is often thought to be incontrovertible, e.g. people's lived experience of racism or sexism. Can someone be wrong about that, or about a particular instance of that? Maybe, if you think Bob was mean to you because of your race, but actually Bob is a jerk and is mean to everyone.
I'm surprised more people can't see that the obvious answer is 'yes'. Particularly, about being happy. Two quick examples: politics on twitter, people think they like it, they think they're having fun, but actually they're getting angry and outraged and addicted to those feelings, and mistakenly think it makes them happy when they are not. Another example is cocaine. Ask any user, and you'll find that the first few bumps always feel great, but towards the end of the night ask the user, and they mistakenly say they're having fun, but you can see they really aren't, and the next day people can more clearly see that the fun wore off after a few lines and it just became about hitting the need.
When I was a kid, I idolized my cousin Shoshana, who was two years older than me and therefore impossibly sophisticated in my eyes. During a family camping trip one summer, she introduced me to a trendy band called New Kids On The Block. As we sat in her tent, listening to their latest album on her cassette player, Shoshana said, “Ooh, this next song is my favorite!”
After the song was over, she turned to me and asked me what I thought. I replied enthusiastically, “Yeah, it’s so good! I think it’s my favorite, too.”
“Well, guess what?” she replied. “That’s not my favorite song. It’s my least favorite song. I just wanted to see if you would copy me.”
I was embarrassed at the time. But in retrospect, it was an instructive experience. When I claimed that song was my favorite, I meant it—the song truly seemed better than the other songs. I didn’t feel like I was just saying so to impress Shoshana. Then, after Shoshana revealed her trick, I could feel my attitude shift in real time. The song suddenly seemed corny. Lame. Boring. It was as if someone had just switched on a harsher light, and the song’s flaws were thrown into sharp relief.
(quote from Julia Galef's book The Scout Mindset)
Children are often tired or hungry without realizing it, even when they're told or asked. This suggests there's a learning curve, and these kinds of things are often unequally mastered even into adulthood.
It could be a matter of skill.
To drag this down to grim reality, I am answering "Yes" to "Can people be honestly wrong about their own experiences?" because at the moment I (and others) are dealing with a family member making claims about our shared childhoods.
These claims are wrong (some of them involve me, and when I say "That never happened", they come back with some rationalisation as to how they are right and I am wrong). They are in therapy, and I have the feeling that they are telling the therapist all this, and being believed, and so nothing to challenge their presentation of "the facts of my experiences" is happening, and they remain convinced of their mistaken memories or interpretations.
So yeah - it's perfectly possible for someone to give a report of their experiences, which they honestly believe is true and what really happened, even when what they claim ranges from the 'misinterpreted' to the 'literally impossible to have happened as you describe it'.
Sorry for dragging down a fun discussion of mental spaces, but I have no idea what to do or where to turn right now, especially as any challenging I do is further incorporated into the victimhood narrative this person has going on: "argument over claims and denial of same" becomes "heated discussion and yelling" becomes "you physically assaulted me!" So when they tell this to a third party, the story goes "and Deiseach hit me when I tried to tell the truth about what happened back when we were kids". The third party is going to believe them; why not? They weren't there, and the family member doesn't come across as obviously delusional, just upset and fearful, as they would be if they had been physically assaulted.
There seems to be some confusion about what "being wrong about your experience" means. It's well known that memory is malleable and fickle, for one thing, and even in the moment people can get very different impressions of the same event. "Your memory about an event is factually accurate" and "you honestly think that you experienced what you remember experiencing" are very different things, and my impression is that Scott says that you can't be wrong about the second one.
Can you have a factually incorrect memory about something that happened 1 second ago? Because for all practical purposes that would be the same as being wrong about what you are experiencing.
Of course you can. Say, you have bad eyesight and mistook a stranger from afar for your acquaintance. You're factually wrong, but your internal experience of thinking "I think I've seen Bob" is true. It's pretty much tautologically true for most intents and purposes, and Scott brings this up regarding subtle conceptual points.
I think you can, in the sense that even short-term memory is a necessarily lossy summary of your mental state. The full mental state (which in a sense is the experience) is gone forever, not stored anywhere. And as I think consciousness is basically the same thing as memory of what happened 1s ago, it all boils down to the difference between experience (the whole brain activity) and conscious experience (a summarized version of the brain activity, edited to be easy to remember and talk about). I think they are sometimes very different, and I suspect that's the case for jhana: maybe it's a hack for creating a conscious story (in fact, a short-term memory) of pleasure, while the whole brain activity is very different from other, more addictive kinds...
I wrote on it back in the Jhana thread, so I will put it back as I think it's relevant:
If you admit that consciousness is just a summarized, sequential, coherent (or trying to be) story about your actual mental processes (which are less coherent and more parallel, being made of multiple actors collaborating/competing), coming after the actual decisions or experiences, then yes, I believe you can be wrong about your experiences. In fact, you are always wrong, in exactly the same sense that memories are not the experience itself (and false memories exist). I think it's mostly the conscious thread that is stored as long-term memory, and also the one that is communicated to others, because that's exactly what it exists for: it's a compressed serialization, which is exactly what you want for storage and low-bandwidth communication. Your conscious experience, like short-term memory (it's not clear whether they are different), has been edited to remove incoherences and to compress it, maybe to the point that it misrepresents the actual mental state that preceded it. It's indeed an illusion, though not in a pure philosophical sense, as the mental processes running on the side, even before the conscious thread is built, really exist, can be observed, and maybe are even partly recorded (although this is not clear).
A good example is drugs that prevent medium/long-term memory formation. Can they be used as sedation or anesthesia? Would you accept their use (together with strong restraints) for surgery? Basically, the whole discussion of the different stages of complete anesthesia is very enlightening (and very disturbing for people trying to put a profound metaphysical importance on consciousness).
“Well, it’s possible that fundamentally all happiness is an illusion, but I’m definitely experiencing the normal type of illusory happiness right now, it’s pretty vivid and intense”, and they say “No, you’re wrong about that”, I still feel like they’re making some kind of weird type error. Can’t justify it, though.
-------
How would they know? The difference I find between "hunger", "happiness" or "seeing polar bear fur as white" and jhana/thoughtless consciousness/seeing in 7D is that the first ones are possible and lots of people experience these regularly while the others are either impossible (I don't think we can see in more than 3D as a matter of, like, how our eyes are constructed? and thoughtless consciousness?! really?) or extraordinary claims where "you're fooling yourself" seems the far more likely explanation.
Sorry - this may be too basic for our philosophers here but I thought maybe a stupid layman like myself might bring things down a level or two...
It's trivially false that all statements made by honest people about their experiences are true, if this includes both past and present experiences, because people sometimes contradict their previous statements, and it's impossible that both the original statement and the contradiction are true.
If it's limited to present experiences, this problem doesn't arise. The honest person who says, "I'm happy" is telling the truth, and if they later say, "I now realise I wasn't happy", they are then mistaken (perhaps by misremembering). However, this is now a very weak claim, which in particular doesn't help with reported experiences of jhana, unless the speaker actually is in jhana at the time.
Here's a silly, trivial example that might clarify a larger point.
Suppose you are estimating the probability that something will happen and you estimate there is a three-in-four chance of it happening, but you say "I subjectively think there is an 80% chance this will happen" because you have a brain fart and momentarily think 80% = 3/4.
Would we say you are
"accurately reporting your subjective experience of experiencing yourself believing the thing will happen at probability 80%, while you actually believe it will happen at probability 75%"
?
Maybe? But is "believing thing will happen at probability 80%" really an experience?
A better way to describe what is happening is that you are wrong about your experience. Or, more accurately, in translating your experience into words you pick the wrong words to describe it: in this case, you say 80 instead of 75. I think that works for the meditation example too. The woman was indeed experiencing something, but she picked the wrong words to describe it: "I'm not thinking about anything" vs. "I'm thinking about not thinking about anything".
Now, believing that a sentence S corresponds to an experience e is itself a belief. So, granted, had the woman said
"I believe that the sentence 'I am thinking about nothing' corresponds to my subjective reality" she would have accurately reported her subjective reality. But that is not what she said.
Instead she just said 'I am thinking about nothing', without the surrounding words, which makes her statement a false first-order report of her mental state rather than a true second-order statement about her beliefs about her mental state.
What happened to the idea that beliefs should be anticipation controllers?
When a person A says they have mental state X, but you object that actually they are wrong, what kind of predictions each of you is making about the future?
Somatic complaints may be something like being wrong about your own internal experiences, since the basic hypothesis is that they are a function of low insight (regarding the connection between your psychological processes and your body).
But I guess it's more like a misfire regarding the underlying "cause" of the physical symptom, and wrongly describing that property of it.
CS Lewis pointed out that everything is a real something. It might be a real hallucination.
He probably got it from Plato's Euthydemus.
He did, and mentions it elsewhere! Great pickup! Notice how he uses a related argument in LWW, even though it is a children's book. (Actually, the Narnian Chronicles are adult books written to be accessible to children.)
Hmm, that doesn't ring a bell from LWW, but it does remind me a little of "things [magic apples] always work according to their nature" from The Magician's Nephew.
I’ve always wondered what was really going on with people who think they’ve been abducted by aliens.
A lot of them seem to be quite normal but earnestly believe the experience.
They are. It’s fascinating. A woman whose name I can’t recall did a deep dive on this and wrote a book (I think). I heard a lecture she gave about it.
What were her conclusions?
That the great majority of them were normal, working, engaged people who you wouldn’t think in 1 million years had been abducted by an alien
Yeah, I get that. Did she have some grand theory of what is really going on with these experiences?
Not that I recall . I just tried to find a link on google but no luck.
All of this seems like "if a tree falls in a forest, does it make a sound?". There is an internal perception of hunger, there are some biological correlates, and there is nothing else. You can call the person (an honest person saying they're not hungry) correct or incorrect, it doesn't change anything.
"But there isn’t some third option, where they honestly think they’re not experiencing hunger, but really they are."
Blindsight.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2747232/
"On your blindsight side, do you have the experience of seeing any obstacles?"
"No."
"Reach out for the target object."
(Subject avoids obstacles in the way of grabbing the target.)
Using sight without the qualia of sight. They can say "I'm not consciously aware of experiencing it" but not "I'm not experiencing it."
The trouble is that all introspection is retrodiction.
Take the person who says "I'm not angry at my father", but clearly is. They might be having feelings of anger, accompanied by visual imagery of their father. But they are so invested in the narrative of "I'm not angry at my father" that every time someone asks "what's wrong?" they look back at the memory and come up with a post-hoc rationalization of what they were feeling--any narrative will work, so long as it isn't "I'm angry at my father."
This happens every time we report our inner state (to ourselves or to others). We have the experience, and then look backwards at it and say "what was I feeling just now?" If you watch this process during meditation you can twist yourself into all sorts of pretzels.
"This seems no worse than somebody on drugs scrawling “JOY = JUSTICE * LOVE” or something on a blackboard and believing that it they’ve discovered the fundamental truth of the universe"
Well of course this is wrong, it's MERCY which is JUSTICE * LOVE 😁
Belief, itself, is a tool, not merely a measurement.
We are social and story-based creatures. Consider how we can love money -- not merely the power that money would bring us, but money itself -- even though we don't really understand what money is.
Engineers and scientists, beaten by experience into respecting falsifiability, apply a rigorous test for things they believe. But even they only do this reliably in their practical scientific domain. I know religious engineers who turn from skeptical thinkers to ingenuous ones when the subject changes from their trading algorithms to Jesus. They know what truth feels like when it's rooted in undeniable empirical reality; sometimes I marvel that they don't apply that lens to their religious belief.
But of course, even though I don't believe in god, I do believe in plenty of my own myths, like personal integrity, interpersonal loyalty, familial love. I don't apply any kind of falsifiability test to these beliefs; I believe them out of a mix of deeply rooted values and intentional belief. I **want** to believe in them, and in fact I cultivate my belief in them, steering myself back towards them when I get too infatuated with competing fixations like personal success or petty rivalries.
I've often thought it curious, for instance, that so few people who claim to believe in Jesus make a study of the gospels in their original language. And of course, most of the people in the world who claim to believe in Jesus have never even bothered to learn what Jesus's name was! (No one in his lifetime called him "gee-zus".) If I believed in Jesus the way that I believe in, say, Bayes' rule, or even Mexico City's being a phenomenal city to visit, I would definitely be over on Duolingo studying Aramaic :)... But that's not the sort of belief my colleagues have; theirs is more like my belief in familial love, where I certainly might read the occasional book about parenting, but I also think I more or less have all of the basic knowledge I need already to live in rich communion with that belief.
That is the lens I think we should apply to claims about jhana. There are some Christians who claim to believe in Jesus who would agree they do not believe in Jesus in the way they believe in dental cavities; they might recognize that belief in Jesus is a tool they use so that life is more in line with how they want to live it. They might acknowledge that they could easily find the same truth in most other religions. But these same people are not lying when they say they believe in Jesus -- because our capacity for belief is precisely **for** this sort of contradictory mess! We are fucking great at believing things; belief is like a perfect plumber's tool: not just stiff under tension, it can twist and turn itself to find any holes in a pipe and patch over them so the certainty and confidence and connection can flow.
Is there a point before forming such a belief where we are aware of a deliberate choice? In a few weirdos, yes, but for most people, noticing those shimmers of intention is something they're just not used to doing, and the same part of their brain that is so good at forming and using belief is also great at simultaneously erasing its tracks. (For those of us who have gotten good at noticing how frequently we lie to ourselves, it can be very destructive to relationships to start noticing how often others are doing it!)
Of course, on some level, I know my belief in familial love is a brilliant hack performed by my DNA and brain chemistry, and not something objectively real. But honestly, I sort of suppress thinking about that; embracing the illusion is necessary for it to work, and I am so committed to the illusion that it feels somehow disgusting right now for me to even admit that I know, on some level, that it is an illusion. We are weak creatures designed to team up extraordinarily well; we report beliefs that happen to be rooted in interpersonal story, status, and faith more than in empiricism (beliefs that remote parts of our brains know are sorta bullshit, just in case) -- because **that's what beliefs are**. We're all Trump trying out the belief that it wasn't him on the Access Hollywood tape, until his confidantes nix that idea; we're just usually much more suave in our belief-forming than he is, much better at hiding from ourselves, and others, how the belief sausages are made.
Someone who tells us their experience of jhana is better than the best sex imaginable has some part of their brain that could clarify that this is an aspirational belief, mixed in with some observations of sanguine fact that they consider plausibly close. But why would they? Pursuing that sort of clarification certainly isn't what got them to meditate for 1000 hours. Only the occasional weirdo is, like me, even mildly interested in trying to build walls between their aspirational beliefs and their observed sanguine understandings.
This was super interesting to read. It leads me to wonder if there's a useful distinction to be made between the stories we tell ourselves about the values/priorities we hold (the values/priorities of familial love, Christianity, financial success, etc) and the stories we tell ourselves about a thing we experienced.
Whether one experienced a jhana state or not feels to me like an experience question rather than a values/priorities question. So for me it lands more in the realm of questions like "have you had dreams in which you flew?" or "when you go for a long run, does it elevate your mood afterwards and for how long?" or "when you're on ketamine, do you experience a sense of dissolving personal boundaries?"
A person could say "it's really important to me to be someone who has had interesting mind states during meditation and my identity is pretty wrapped up in the idea of performance, even in the realm of meditation, so..." and then we can speculate whether they choose to lie about the jhana state experience, whether they think they had a jhana state experience nudged forward by their "need" to, or whether they actually did in the sense maybe that you could bring ten really experienced meditation teachers into the room, including one that's worked with this person over time, and after interviewing the person about their jhana experience confirm that very likely that was a jhana experience.
Jhana states are a thing that happens when people meditate -- it's not an everyday thing for everyone obviously, but it's not super unusual either. And so in that sense it seems pretty different from ideals or aspirational aims or values. Now I think it's still possible for all kinds of self-delusion in either case, but I don't see that self-delusion based on ideological commitments is essential to describing an experience one has had the way it seems to be in what you describe about familial love or desire for material wealth.
The distinction I'm making isn't a bright line in all cases for sure. It sounds to me though that you're arguing for claims about experience being just as ideologically driven as claims about ideology, and I don't see that.
Put another way, if someone says they're currently feeling hunger, that's a subjective judgment and it's probably impossible to be honestly mistaken. It might be that you're registering a different feeling as hunger (i.e. food won't fix it), but you're the best judge of your own current experience.
But you can absolutely be wrong when you say "yesterday I felt hunger" because your memory can be mistaken.
You can also be wrong when you say "I'm way hungrier today than I've ever been." You're comparing your current subjective impressions to your memories of past impressions.
And you can absolutely be wrong when you say "I was more hungry last Tuesday than I was six weeks ago." The amount of potentially mistaken memory you need to process to make a comparison like that virtually guarantees you're just making a story up about your past self that is agnostic to the actual truth.
One's perceptions can be fooled via the phenomenon of transidentification, see
https://peterwebster.substack.com/p/cornflake
and
https://peterwebster.substack.com/p/un-elephant-ca-trompe-enormement
I think I have a good counterexample. I like to read books in bed late at night, sometimes when I'm doing this I get tired and close my eyes for a moment, then drift off into a half dreaming state. In this state I still have the sensory experience of lying in bed, but I hallucinate something about reading a book. At this point if you asked me "Are you reading a book?" I'd say yes.
From this state I often fall completely asleep, but sometimes some mental process notices "Wait a minute, my eyes are closed. How can I be reading?" At this point I think about the book I'm reading and realize that I couldn't actually repeat back any of the sentences I've supposedly read. It's like someone stuck an electrode directly on the "feels like reading a book" neuron without feeding my brain any book content. In this moment I realize that I was not reading a book, and furthermore I was mistaking some kind of incomplete dream quale for a much richer experience I was definitely not having. If you had asked me "Can you tell me what the book you're reading is about?" I would have thought "Of course", then attempted to and failed utterly.
This is a little oblique to what you wrote, but reading it returned me to the example of whether say a depressed person can be "wrong about their own experience" and therefore we can all be wrong about our own experience.
Like your reading and then "reading" after falling asleep, there seems to me to be an important distinction between a person describing their felt experience versus a person characterizing their felt experience, though the line is a fine one.
"I am worthless and there's no point in my carrying on" is a common depression thought. The person having that thought isn't wrong when they say they're having that thought. The question is, are they wrong about the appraisals contained in the experience that engendered the thought?
If the depressed person said instead, "I am having the thought that I am worthless," then it seems much less arguable about whether they could be wrong about their own experience.
In the case of book reading or jhana experiencing, if a person said "I think I'm having the experience of 'reading' or 'a jhana state'" then that little bit of "I think" acknowledges the provisional nature of all stories about one's experience.
I find myself thinking along the lines of "okay, but so what?"
Looking back at the Jhanas, what those of us who are skeptical are really worried about is not whether some person has an unusual feeling of extreme happiness (I think most of us would agree there are some people who just seem to always be happy, even if we can't achieve it), but whether this feeling is transferable. If I do the things they say they did, will I also have this same feeling?
If their feeling makes sense, like being hungry when you haven't eaten in a while, that seems more transferable. If the person has not eaten in 10 hours and reports "not hungry" then I doubt that I will have the same experience. If they have a suggestion of how to achieve the same thing, for instance an appetite suppressant, then I can evaluate whether that's a good option for me. Appetite suppressants are a real thing, but even if we were uncertain of that we could try one out and see if we had the feeling - it's a low time investment and pretty low effort investment to make a determination. If we still felt hungry at similar levels to before under similar circumstances, we could also determine that the suppressant didn't work (at least for us).
With the Jhanas, one of the biggest issues is that the process is described as taking many hours per week over potentially years to achieve, with no guarantee of success. The fact that there are no guarantees and no standard timeframe means that any failure to achieve the same results would not count as a failure to replicate. Taken in aggregate, there is no way to falsify this belief - only positive cases are counted. This should lower our belief that we are able to achieve this state and lower our belief that the state being reported is actually achievable through the methods prescribed. This is true regardless of the subjective belief of those saying they experienced it.
I got a weak version of it in three tries (try = ~hour-long meditation session)
My advice would be something like
1: it's easy to be practicing the wrong thing and waste time that way. I desperately wish that someone had told me back in college that the 100% alertness/awakeness part was way more important than the calm part and if you're being super calm and meditative and peaceful that's totally the wrong mindstate for hitting it. Maximum attention! There's other stuff like this, advice I wish I would have had to make it easier. Then again, I'm typical-minding super-hard right now and maybe the average person has a brain that makes the requisite calm much more difficult to attain than the requisite attention.
2: Value of information is pretty high on this one. Maybe you have natural talent! Something that 20% of people can get in under a week, 60% of people can get in two months, and 20% of people can get never (warning: numbers pulled directly from ass) can still be very worth trying!
3: If it's not working after a month, give up. I mean, it's not fake, but I really don't think "just anyone can do it", and much less do I think "anyone can do it in a timeframe that makes it worthwhile". Mindstates can be non-universal among humanity and still very real, like the ASMR response (another "no guarantee of success" mindstate). Just stick someone in an MRI and see if their brain is doing something weird, there's the falsifiability for you. This seems like a realistic standard, otherwise you'd have to say that ASMR is unfalsifiable. Also, even if you say "it's not falsifiable", the variant statement "The claimed state can be attained in a timeframe that makes it worthwhile" (more decision-relevant) is extremely falsifiable. You try it and if it's taking too long you go "fuck this" and quit, and if you got it without spending too long on it, you go "woo, it's true!"
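The value-of-information argument in point 2, combined with the quit rule in point 3, can be made concrete with a toy expected-cost calculation. All numbers here are the commenter's admittedly made-up ones plus an assumed quit threshold; this is an illustrative sketch, not a claim about real success rates:

```python
# Toy value-of-information sketch. Assumptions (not data): 20% of
# people get a weak jhana within ~5 hour-long sessions; everyone
# else either takes much longer or never gets it. A quit rule of
# 30 sessions (roughly "give up after a month") bounds the downside.
p_fast = 0.20          # fraction who succeed quickly (made-up)
sessions_fast = 5      # sessions a fast responder spends (assumed)
session_cap = 30       # quit rule: stop after this many sessions

# Conservatively charge everyone who isn't a fast responder the
# full capped cost, whether they would eventually succeed or not.
expected_sessions = p_fast * sessions_fast + (1 - p_fast) * session_cap
print(expected_sessions)  # 25.0 -- expected cost, hard-capped at 30
```

The point is just that a hard quit rule turns an "unfalsifiable, possibly unbounded" practice into a bounded bet, which is what makes trying it reasonable even under deep uncertainty about whether it will work for you.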
I'm confused about the worry around transferability of the jhana experience because it seems to suggest the whole reason a person would take up meditation is to have jhana experiences. I guess there are people out there (and on here) whose main motivation in meditating or being a Buddhist is to experience these transitory states. Certainly for some people being able to access these states repeatedly is a spur to keep practicing. But the benefits of meditation and Buddhism are by no means limited to or contingent on experiencing jhana states.
> This should lower our belief that we are able to achieve this state and lower our belief that the state being reported is actually achievable
You mean if I try it and can’t do it then all those other people must be full of it?
Not necessarily, but if we try something in the way described and have a different outcome, it should at least lower our expectation that it could be real.
This feels a little bit like no true scotsman...
A trivial example of honest reporting being incorrect is the medical phenomenon of referred pain: you have pain in some part of your body but perceive it as being in another part. Examples: pain radiating down the left arm during a coronary attack, or phantom pain from an organ that has been removed, such as after a cholecystectomy.
Meditation is good to bring up. The example of thoughts given above is an excellent one, and there are numerous other honest self deceptions in the practice.
This comment may come across as low-effort sniping, but it's an honest question. Isn't this just an (obvious) argument about semantics? The question "Can people be honestly wrong about their own experiences?" seems to me less an argument about the state of the world, or even about philosophical truth, and more quibbling over the meaning of the words "wrong" and "experience". In particular, it is quibbling that can never be resolved satisfactorily because the English language is not that precise: if you want to resolve the confusion, you'll have to be more verbose, but there isn't a non-semantic issue at play.
Couldn't this be tested by crafting problems in 7D geometry / topology which would be intuitively obvious if you could directly experience 7D objects, but would take a lot of very hard maths to work out normally?
I've heard that Robert Langlands claimed to be able to visualise surfaces in 4D space, and I don't know if he could, but he did at least make major field-defining discoveries about them!
Charles Hinton (https://en.wikipedia.org/wiki/Charles_Howard_Hinton) invented a system for visualising the fourth dimension using a set of (three-dimensional) cubes painted in a rather complicated way, whose final version appeared in his 1904 book "The Fourth Dimension". I don't know how much success people have had with them.
I’m inclined to agree with the great British and Liverpudlian philosopher Sir Richard Starkey MBE who, when asked whether he believed in “Love at first sight”, said:
“Yes, I'm certain that it happens all the time”.
Not that he had experienced it but he believed other people’s experience of it.
Unless it’s clearly a lie or impossible, I’m inclined to believe other people’s descriptions of their qualia.
This was a central point of debate throughout the entirety of modern Western philosophy. Descartes, in Meditations on First Philosophy (the foundational text of modern philosophy, for those who don't know), more or less builds off this problem. His eventual question becomes how error in judgement is possible if humans are made in God's likeness. He concludes that humans, being several steps removed from God, are imperfect, and that errors in judgement occur when the faculty of reason is misused: when we assert something with a high degree of certainty without necessarily reflecting on how sure we actually are and how much we know.
Two things:
First, I challenge whether you're asking the right question here, Scott. As it stands I think we've overcomplicated it. Why can't we just assert that human perception is highly limited, and that moreover we tend to be somewhat rash in judgement--in other words, people tend to make judgements about things they don't understand, and thus you get all sorts of examples of people making claims about internal experience that aren't necessarily accurate or true.
Second, I think this is the reason that a large portion of the philosophical tradition has since moved away from this sort of subject-object metaphysics. I know we aren't big fans of postmodernism here, but one thing I find really interesting in the through line from Freud to postmodernism is the questioning of the unity of the subject. There is no unified, single, coherent subject; people are not immediately self-present, present to themselves. This is why psychological defense mechanisms like repression are possible.
> we just assert that human perception is highly limited
I think I understand what you’re saying here, but I would use the word attenuated rather than limited.
Here is an edge case which is very difficult to reconcile with the thesis: the "Self-Torturer" paradox, described in Michael Huemer's excellent book "Paradox Lost":
A person who starts in a state of no pain is repeatedly given the option to increase his torture level by an undetectable increment, in exchange for $10,000. Each time, the difference in pain is undetectable, so it seems rational to accept. However, the end result is a life of agony that seems not worth the financial reward.
Huemer claims that the only way out of this is to recognize that there can be introspectively undetectable differences in subjective experience.
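The structure of the paradox can be sketched in a few lines (the numbers here are invented for illustration; they aren't Huemer's):

```python
# Toy model of the Self-Torturer (all numbers invented for illustration):
# each accepted step adds a pain increment far below the smallest
# introspectively detectable difference, plus a $10,000 payment.
DETECTION_THRESHOLD = 100  # smallest noticeable change, in pain milli-units
STEP_PAIN = 1              # per-step increment: introspectively invisible
STEP_PAYMENT = 10_000      # dollars per accepted step
STEPS = 10_000

assert STEP_PAIN < DETECTION_THRESHOLD   # every single step seems costless

total_pain = STEP_PAIN * STEPS       # 10,000 milli-units: a life of agony
total_money = STEP_PAYMENT * STEPS   # $100,000,000
```

Each individual trade looks rational because the increment sits below the detection threshold, yet the accumulated pain dwarfs that same threshold, which is exactly the tension Huemer is pointing at.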
This is the paradox of the beard. One hair does not make a beard, if N hairs don't then N+1 don't, therefore there is no such thing as a beard.
Our senses categorise everything, even when the thing itself is continuous. We suddenly notice that we are hungry, although the physical state of blood sugar etc. has been changing continuously. We suddenly notice that a friend is showing signs of age, although the seconds have been continuously ticking by. Indiscernible differences add up to discernible differences.
The Self-Torturer Paradox is related to the Beard Paradox, but not quite the same. An argument could exist that resolves the Beard Paradox but not the Self Torturer, because the latter has the added feature of introspection.
BTW the book Paradox Lost also contains a fascinating discussion of the Beard Paradox (under the name Sorites Paradox).
Can you be wrong about your own subjective experience? There are the obvious ways: lying or misremembering. Other than that...
For one thing, it depends on what you call subjective experience. Is it only the things you are consciously aware to be experiencing? Does it include all the stuff that you would experience if only you paid attention? This is about word definitions, not too interesting.
But I think there is also room for being wrong about your own subjective experience that is not about word definitions. There's also how you frame and end up translating that experience into a description. This can be affected by drugs, sleep, elephants in the brain, priming, the models you have...
When a drugged person reports an experience and describes it in a way that makes no logical sense, they just can't be right. Sure, as a listener you can always patch it: when someone says "I am tasting love and it tastes like the moon", or "I am not alive", you can take it to mean "I feel compelled to describe my experience by the nonsensical description [...]". Then sure, if they're not lying or misremembering, I guess they can't be wrong. But imo, that's just being wrong, and you are going beyond charitable and changing the meaning of what was meant to be said when you add the extra meta layer. If you experience a deja vu and report it as "I've lived this moment before", without conscious awareness that it's an illusion, you are plain wrong.
Maybe you can't be wrong if you don't add this extra meta layer between the raw experience and the description. That sounds like what meditation people kinda want to achieve, right? Then you can't be wrong in this way but only because you also can't make any claims at all, there's nothing to be wrong about.
“Consciousness is precisely the only thing that isn’t reduced if it were an illusion.”
That’s from an eerie discussion Eliezer Yudkowsky and Jaron Lanier had back in 2008: https://youtu.be/Ff15lbI1V9M at around 26:20.
It seems to me that all these examples assume immediate reporting of the experience. But often that is not what happens. Rather, people are reporting the experience at a later time, and what they are really reporting is their memory of the experience. When the memory is created, or recreated, there is ample opportunity to add a story to the experience. One might even argue it is nigh impossible not to add a story. The story is our, or often someone else's, interpretation of the experience. So I think the more important question is whether it is possible to alter an experience with an interpretation and then remember that as the actual experience, and then later to report it as such. And I think science has shown the answer is a resounding "yes."
On further reflection, it seems to me that this post could be interpreted as a wholesale rejection of the rationalist paradigm. If we are supposed to believe that people cannot be wrong about their subjective experiences, and if we assume that for the most part people are honestly reporting their experiences, then what are we to think of religious experiences, for example? How can we call them delusions, if people honestly think they happened? Also, googling around, here is an interesting article:
https://towardsdatascience.com/a-bayesian-quest-to-find-god-b30934972473
And this: https://www.cambridge.org/core/journals/religious-studies/article/abs/religious-experience-and-the-probability-of-theism-comments-on-swinburne/4FB6BFE12560DC9D93D28110A3DE5B58
I am sure this has been discussed before and someone will reply with a Scott article addressing these types of arguments.
I think there was a study once where people were shown faked photos of their father taking them for a ride in a hot air balloon or something, and many of them could recall the experience, including sometimes things like how they felt scared or happy at the time - despite the researchers checking beforehand that these people had definitely never been on a balloon ride.
So it seems that people can definitely be wrong about their past feelings and experiences, as well as more mundane things like "the car in the accident was definitely red" - and that's before we get to "repressed" memories recovered through therapy.
My priors are very high on "yes, people can be wrong about their own experiences".
I would put it differently. I don’t think people are wrong about the fundamental experiences, but they’re open to any narrative that you might want to attach to them.
Perhaps they should show pictures of their father throwing them off a cliff and see what happens then.
I don't think it's necessarily impossible that our brains could visualize things in 7 dimensions. Consider a thought experiment where you have someone participate in a VR simulation, but the world you simulate is 7D, and you transmit sensory information from different directions in the 7D world into distinct input channels that connect directly to the brain. I think it's plausible that someone immersed in this simulation for years would be able to adapt to it quite well. It's also possible that they wouldn't, if the 3D nature of our world is hardcoded in some way into our genes controlling perception. But it's not obvious to me that this is the case.
The Fields Medal-winning mathematician Bill Thurston apparently claimed to be able to visualize objects in 4 dimensions, and that this helped him come up with difficult theorems that turned out to be true. The full anecdote is here-- https://qr.ae/pvEXSf . This seems like the best evidence to me that this is possible in principle.
Meanwhile, under normal circumstances, we can *barely* claim to be able to "see in 3D", or at least we perceive one dimension quite unlike the other two:
- light geometry is assumed to be a 1D beam (with floating-oasis mirages in hot desert air resulting when that assumption is violated)
- those beams are projected as a 2D image onto the retina (see also: the screen you are reading this on, and might visualize "3D" models on)
- depth perception comes from a comparison between those 2D images (and even then, not only!), and it's not like we have eyes on our index fingers; they're pretty fixed and pretty close together!
- compare with the "4th" dimension, the non-spatial one of time, which seems to have an informational content much closer in intensity to that of the 2D retina images than to the stereopsis that adds the 3rd one.
(Of course all 4 of them interact with each other to form the full visual perception.)
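As a rough illustration of how "thin" the third dimension is compared to the other two, here is the standard pinhole-model depth-from-disparity relation, depth ≈ focal length × baseline / disparity, with ballpark human values (17 mm eye focal length, 63 mm interpupillary distance). This is a geometric sketch, not a model of actual human visual processing:

```python
def depth_from_disparity(focal_mm: float, baseline_mm: float,
                         disparity_mm: float) -> float:
    """Pinhole-camera stereo: depth falls out of the disparity between
    the two 2D retinal images. Disparity scales as 1/depth, so far-away
    points produce tiny disparities and depth resolution collapses."""
    if disparity_mm <= 0:
        raise ValueError("no disparity: point is effectively at infinity")
    return focal_mm * baseline_mm / disparity_mm

FOCAL_MM = 17.0      # ballpark focal length of the human eye
BASELINE_MM = 63.0   # ballpark distance between the eyes

near = depth_from_disparity(FOCAL_MM, BASELINE_MM, 1.0)   # ~1.07 m away
far = depth_from_disparity(FOCAL_MM, BASELINE_MM, 0.01)   # ~107 m away
```

A hundredfold change in distance is carried by a 1 mm versus 0.01 mm difference on the retina, which is why stereoscopic depth is such a low-bandwidth channel compared to the 2D images themselves.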
Scott, a request. People are dropping all kinds of interesting links to philosophy papers in the comments. Would it be possible to get a post with all the links in one easily accessible place?
The mind is a black box, and even from the inside it's totally dark. It's very easy to couple words and concepts with your internal experience in ways that do not hold, that do not match up when probed by experts on the outside. I love this lesswrong article on thinking you're great at emotions when really you're totally disconnected ( https://www.lesswrong.com/s/g72vrjJSJSZnqBrKx/p/qmXqHKpgRfg83Nif9 ). I have friends like this, who think they have a good grasp on their emotions but are clearly actually suppressing them. I've watched one of them run around in an anxious frenzy when having many unexpected guests over, and when I asked whether he was anxious, he told me no, even though he was behaving like it and physically trembling. I have another friend who claims to be very emotionally minded, more connected with her emotions than with her thoughts, but when I ask her to describe her emotions during some past event, she immediately starts conceptualizing instead of actually describing her emotions. Maybe she's just terrible at describing them, who knows, but from the outside it looks like she's way more connected to her thoughts than to her emotions.
POINT 1
Well, first of all, we know that people can have false memories, so it's absolutely the case that people can be honestly wrong about their internal experiences that happened in the past. If somebody said they met aliens 10 years ago and had a joyride in a UFO, there are at least 4 possibilities:
1. They actually met aliens.
2. They lied about meeting aliens.
3. 10 years ago, they hallucinated the experience of meeting aliens, and they faithfully reported their experiences of meeting aliens.
4. This was a false memory and they did not have the experience of meeting aliens.
To say that people are always accurate about their subjective experiences unless they're lying, you have to say that every apparent account of #4 is really either #2 or #3, which is just implausible to me given how frequently people have false and easily primed memories (see e.g. court witnesses).
Now I assume that your ninja trick in the above essay will look something like "people correctly reported their experiences of remembering meeting aliens, even if they were mistaken about their remembered experiences of meeting aliens." But I think this definition is not the most intuitive one, or the best way to carve reality. *And it actually matters.* If there's an anesthetic that removes memories of pain, that'd be valuable, but nowhere near as valuable as an anesthetic that removes the ongoing experience of pain!
___
POINT 2
More central to the debate, I do think perhaps there's not a "truth of the matter" to our discussions here, but more a question of how we choose to interpret things.
Your interpretation of the time question goes:
>If you properly differentiate all of these, you can say things like “people are accurately reporting their subjective experience of internal clock speed, while being wrong that their internal clock is actually slowed down relative to wall clock speed”.
Whereas my interpretation of it is
> your subjective experience of your subjective experience of time is slowed down, but your actual subjective experience of time is the same as before (or even sped up).
There might not be a truth of the matter here, just a difference in framing.
> (though this study suggests a completely different thing is going on; people have normal speed, but retroactively remember things as lasting longer)
Incidentally, this is my best guess for how (not always, but often) the subjective experience of the experience of time can seem much faster than what *I* call "subjective experience of time" (i.e., clock speeds).
Basically, here's the chain of reasoning:
1. You don't have "true" instantaneous experiences because humans are implemented on wetware with discrete time jumps.
2. So all instantaneous experiences are a bit of an illusion anyway, and rely at some level or another on memories of subjectively-present, objectively-past events.
3. More so than most, the experience of *time* necessarily relies on memory.
4. Drugs that make time seem to pass slowly (or high subjective clock speeds) often work by making you forget recently past events. So time seems to go by slowly because (compared to baseline), 30s ago in objective time feels subjectively "murkier" and further away. This explains why when stoned, music feeeeellllls like it's going reeeeeaaaallly slooooowly.
5. There might be a countervailing effect where very-near-past events (say 1s ago) feel more crisp and you remember more things (this intuitively seems like a plausible model of adrenaline). In that case, you'd get nearly the opposite effect, which still results in altering your subjective experience of your subjective experience of time, without actually changing clock speeds.
5a) I think it's much more plausible that there are drugs that massively slow down clock speeds, than that there are drugs that massively speed up actual clock speeds. So I'm much less suspicious in the other direction. Though *small* speedups from amphetamines or w/e don't seem insane to me.
To some degree, this model is empirically testable. If you have friends who take drugs that make time appear to go slower, you can do a single-blinded study like this:
1. Ask them to watch something where time is tracked very objectively with relatively short jumps (e.g. the second hand of an analog clock).
2. Ask them to take drugs, and then report what their experience of time is like.
3. While drugged, ask them to now look at the clock again, and in "real time" report whether the clock hand appears to be moving more slowly or quickly than before.
4. If my model is correct, they're more likely to report that the clock hand is moving either the same or more quickly than before, rather than slowly.
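Steps 3-4 of the chain of reasoning above can be sketched as a toy model, with parameters I've invented purely for illustration: suppose a memory trace decays exponentially, and the brain reads off an event's age by inverting a fixed baseline decay curve. A drug that (say) triples the decay rate then makes an event from 30 s ago read as 90 s old, i.e. time feels stretched, without any change to the underlying clock speed:

```python
import math

BASELINE_DECAY = 0.05  # trace decay per second when sober (invented value)

def trace_strength(age_s: float, decay: float) -> float:
    """Exponentially fading memory trace."""
    return math.exp(-decay * age_s)

def felt_age(age_s: float, decay: float) -> float:
    """Estimate an event's age by inverting the *baseline* decay curve:
    a more-faded trace simply reads as an older event."""
    return -math.log(trace_strength(age_s, decay)) / BASELINE_DECAY

sober = felt_age(30.0, BASELINE_DECAY)       # reads as ~30 s: calibrated
stoned = felt_age(30.0, 3 * BASELINE_DECAY)  # reads as ~90 s: time stretched
```

Under this toy model nothing about the "clock speed" changes; only the murkiness of recent memory does, which matches the claim that the slowdown lives in the experience-of-the-experience layer.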
___
POINT 3
While at some level there's no "truth of the matter" to our debate, and this is mostly a semantics question or one of preferred framings, I do think (unless we're very careful) what we choose to emphasize can have substantial real world implications in the future.
Consider the question of subjective experience of time. If we live in a future with digital minds, what I call "subjective experience of time" or "clock speed" matters a lot. A life of 100 subjective years matters roughly as much as 100 calendar years, even if it's only instantiated in one calendar year. In contrast, I care much less about what you call "subjective experience of internal clock speed."
Getting this mixed up is pretty bad on any calculus that cares about internal experiences, since (by my natural ontology) we want people to have actually subjectively rich and valuable lives, not just to falsely believe they do. On the other end, being tortured for subjective millennia is (in my opinion) much, much worse than being tortured for one internal-clock second but thinking it lasted millennia, or having a false memory inserted of long-lasting torture.
I don't expect this question to be *very* important before we get digital minds (because we probably don't have massive OOM differences of speed in biological minds), but I think it matters a bit for animal welfare. There's some earlier work on internal clock-speed variance across species by Rethink Priorities (disclaimer: I work for Rethink Priorities. I did not work on that report).
https://forum.effectivealtruism.org/posts/qEsDhFL8mQARFw6Fj/the-subjective-experience-of-time-welfare-implications
https://forum.effectivealtruism.org/posts/4ie9fTgB4spQ3zARk/research-summary-the-subjective-experience-of-time
I find differences in experiences of time most intellectually interesting, but of course, differences in other "honest" reports of subjective experience vs actual morally relevant subjective experiences matter too. We'd much rather people not suffer than just think they're not suffering, for example.
>your subjective experience of your subjective experience<
How does that work?
Like a second order derivative of subjective experience?
Hmm, basically Scott used different (on priors, clearer) words than I did to describe the same phenomenon in the OP, so you can just read that.
I guess when I think of subjectivity, I think of the old maxim: you only get one chance to make a good first impression.
I'm confused why you don't talk more about memory and fallibility thereof. Unless you're narrating your current experience (and maybe even then), you're reporting a memory of an event and I think it's broadly agreed that people can be honestly wrong about memories.
I'm a little unclear on the boundaries of the term "qualia", and the discussions above seem to address questions that are at least slightly different, although they examine them in depth.
This term seems to bundle raw sensory data together with some amount of inductive post-processing, provided that the processing isn't conscious or deliberate. But that means that post-processing errors don't invalidate a "quale", provided they are unconscious? So stating that a report of "qualia" is inherently correct as far as it goes presumably means it doesn't go very far? What happens if some kind of careful self-review isolates and identifies a previously unconscious leap as an error, and brings it into consciousness? Did the quale retroactively change, or is it still valid as originally reported?
If you sincerely perceive a silent beckoning figure at night out of the corner of your eye, but everyone (including you) concludes it was a random movement of curtains in the moonlight which you unconsciously converted into a familiar shape, is the correct statement at the qualia level that yes, you saw a ghost, even if that only entered your head for a moment? What if someone took DMT and formed the belief they were looking at a 7D object, but after examining the memory in retrospect they conclude on their own that it was actually an invalid analysis applied to an ordinary two-dimensional visual hallucination? At the qualia level, did they in fact see a 7D object? This sounds like the example above of "people retroactively remembering things as lasting longer", and I'm not sure which part of that is a quale, either. But if "qualia" explicitly include untested assumptions about raw sensory input (provided that those assumptions aren't made consciously), how relevant would "qualia" be?
I apparently don't get hungry. I use the word "apparently" because this is the most likely explanation I can find from comparing personal experience with reports of the experience of others. Personal experience would presumably be categorized as qualia here, and reports would be reports of qualia, but there's an objective truth here I would like to identify, and the weakest part of the argument rests on this "qualia". I therefore wouldn't say that "I don't get hungry" is necessarily true; it's just a likely interpretation of the information I have available.
I will take a shot at this one.
To me, Qualia is nothing but the physical embodied experience of whatever it is: fear, amusement, anxiety, exhilaration, orgasm, all the infinite varieties of pain and pleasure that our bodies are capable of sensing.
All these words are our best attempt to communicate Qualia to someone else. As soon as words get involved, inductive reasoning at some level is a given. I don’t see a way around that.
It is entirely possible different people would choose different words to describe the same physical sensation, not to even get into the listening side of that equation because let’s face it words are only as good as what others hear. It’s also possible that some people listen to their bodies better than others.
For example: something makes a person uneasy. They have a resistance to feeling uneasy, probably because of their toilet training. Here's where inductive reasoning comes in:
I will construct a mental framework to contain my feelings of uneasiness, and to compensate for them, because I would rather not listen to them. So I become overly assertive, grandiose even, with all kinds of other behaviors constructed to deny my feelings of uneasiness. Or, on the flip side, my feelings of uneasiness lead me to build a mental construction where I am inferior, and other people notice that and are out to get me, or a million other constructions that one can use to intervene with the feeling itself.

I think that’s why a big part of Buddhist philosophy is listening to one’s body and not jumping to name something too quickly. (I think it’s worth noting that those philosophies arose at a time when human beings were first negotiating this change, and were a little closer to the original state. As the world has become increasingly complex in terms of abstract thinking and constructions of word and metaphor, there are several more layers to cut through.)
I am sure I am not alone when I say I have had experiences that I interpreted one way in the moment, and later came to realize that something else was going on entirely. But that did not change the Qualia, in my current use of the word; what changed was the significance of it.
I remember when I was in my 20s and in therapy for the first time, the therapist would always ask “how did that make you feel?” and I couldn’t say anything, because I wasn’t paying close attention to my qualia. I would say, objectively, looking back, that I was extremely neurotic at that age, and that, given my circumstances, it was entirely reasonable to be so.
In short, I think Qualia in human beings are pretty similar across the board, and pretty universal. Physically, we are really not very different from one another, but the interpretation and recognition of those things is open season for distortion, misunderstanding, and confusion.
 all capitalizations of qualia courtesy of Siri. 
Identifying qualia with "the physical embodied experience" of something doesn't really resolve what I was wondering about.
Consider the bearded-man/woman-under-a-tree illusion. The raw sensory data starts as a firing pattern of retinal neurons, but then gets processed in layers into abstractions: relative light and dark, connected edges, component shapes, and recognizable images. Every level operates probabilistically on evidence to draw "conclusions", which become the input to the next stage of processing. If the viewer is not paying explicit attention to the image, this process continues unconsciously up to an assignment of abstract concepts (like "simple work of art", perhaps associated with a speculated purpose or artistic intent), and it may or may not be given enough priority by the unconscious mind to demand conscious attention. If attention isn't demanded, the event might still be saved in episodic memory, and if recalled later, the memory might be "there was a drawing of a bearded man" or "there was a lady under a tree", but almost certainly not "there was an ambiguous optical illusion" (because the viewer never realized it wasn't a simple drawing).
On the other hand, if the image attracted the viewer's conscious attention, then after the further processing of a conscious examination it might be added to episodic memory tagged as "a deliberately ambiguous drawing".
I might have the processing layers wrong, but there will be processing layers of some kind, and that leads to my question:
At what point on that pipeline do we have a "quale"?
It sounds like in the inattentive case we have a quale at the inaccurate partial-interpretation of the image, but in the attentive case the quale includes the additional reasoning done after reacting to the surprising nature of it. But that seems to contradict the definition of the term "qualia".
If the reasoning is excluded from the quale in the latter case, then is it a quale before it's a discrete memorable event?
I would expect the human brain to have more parallelism than this, and that there would be unconscious processing resolving it in multiple ways for multiple purposes, but that just makes "qualia" harder to grasp.
Also, I used the phrase "inductive post-processing" rather than "inductive reasoning" intentionally, because I wanted to refer explicitly to the more general principle of aggregating evidence to produce a probable conclusion, rather than a high-level cognitive process that might require conscious intent. The former can be done by small numbers of neurons, and would presumably be involved repeatedly in every layer of processing above the signalling of individual sensory nerves.
Essentially, I think of qualia as everything to the left side of the continuum between inchoate and loquacious.
The primitive form of the word is an utterance of sound. What then is the impulse that motivates an utterance of sound? That for me is the realm of Qualia. Clearly, a grunt, or a scream is the origin of language. After that, it is one abstraction built on another leading away from the original impulse until it is lost in the mists of time. Kind of like the way a solid object dissolves into quantum fluctuations, as its essential nature is more closely observed.
I hope that makes some sense to you.
This would change the "fundamental experience" under consideration from receiving perceptions to performing voluntary actions. The definitions of qualia I've found so far don't extend the term that way: all the examples of "experience" have involved sensory information or the cognitive equivalent. While considering a discrete voluntary action would obviously exclude some of the perceptual processing/cognition-boundary issues, it might open an analogous can of worms relating to levels of conscious control broken down into bundles of simpler directives automated by training or reflex.
Vision is a complicated one. The concept-forming comes in quickly. Think of seeing something bigger than you that moves, but that you have no priors for.
I would start with sound.
You hear something (not words). What happens?
As for having no priors visually, that wouldn't happen to an adult, and might not happen to humans at all: the visual system might bootstrap from evolved primitives.
Hearing something would begin with nerve impulses from the ear (the cochlea presorting information by frequency); at a higher level these are organized into a presumed location of the source, based on combining the two ears' information as well as timbre, which varies front to back due to the external ear. This can't be integrated without referencing memory of similar sounds, as well as the current model of the immediate environment (something is more likely to be sorted and tagged as an expected element of the environment-model than as an unfamiliar one). All of this happens before becoming accessible to conscious thought, although it seems like a deliberate focus of attention can relabel choices made by later stages of the process. I would argue that concept-forming comes in quickly with sound as well, although I don't know as much about how primitives are assembled into higher-level abstractions in the processing stack.
I am coming back at this because I think it's an important question and I clearly missed it before. I have been pondering:
I feel like the question you're asking is: at what point is the "qualia" identified as "qualia of type [X]" and then made available for further classification? And my immediate thought about that is: does a Qualia not exist until it's named? I don't know that this brings us any closer to the issue you are trying to clarify. Let me know.
This bears directly on my original question. Superficially, whatever a quale is, it wouldn't have to be named, since experiences can presumably be legitimate, discrete experiences without needing to be named at all, even if they are part of fully conscious, deliberate thought. More to the point, I'm presupposing that a majority of high-level thought operating on abstract entities occurs outside of conscious awareness. By the time conscious consideration is applied, it's applied to a constructed mental model populated by various levels of abstraction (I haven't studied philosophy, but it's possible that is what is meant by "an ontology"). In fact, consciously perceiving something as a raw experience when it's actually a pre-packaged abstraction assembled from a few fragments of sensory data, associative memory, and the current model of the world-state seems to be unavoidable. One could call that packaging process "naming", and then that would always be happening while a person is awake/aware. But since information about the construction of the "pre-packaged abstraction" is available (it can be partially disassembled in realtime by applying conscious attention), and further information can be derived from a longer attentive examination, the questions of "when is it a quale?" and "what part is the quale?" become confusing.
Note that the issue mostly goes away if you're not asking "could a person be honestly wrong about a personal experience", which manages to highlight the question of the processing stack: which part would have to be in conflict with which other part in order for a person to be "wrong"? And which in order for them to be "lying"?
> In fact, consciously perceiving something as a raw experience when it's actually a pre-packaged abstraction assembled from a few fragments of sensory data, associative memory, and the current model of the world-state seems to be unavoidable.
Imagine you are sitting quietly in a peaceful place lost in thought or reverie. Stay there a while. And now imagine a sudden loud noise occurs right behind you.
Compare this to what might be the experience if this had happened to you more than once, and you had a solid notion of what causes the noise.
The first time anything like this ever happens (perhaps to a baby?) I would guess an expansive update occurs at every level of sensory processing to incorporate the novel event type. This is likely to be very disorienting and unpleasant to the baby.
Later, any "sudden loud noise" will be subject to various processing/sorting steps that create the abstract event that impinges on consciousness, and the unexpectedness of the event will be restricted to the novel aspects (e.g. it happened in a context modeled as "peaceful"). Perhaps a substantial update, but much smaller than the baby's. If the modeled environment includes the memory of a parked car, and car alarms are common, then tagging the noise as "car alarm" might be at a low enough level that there is no conscious perception of undifferentiated "noise" even for a moment, just "Car alarm!", maybe with context modifiers.
If it happens often enough, a person might get to the point where it's not even noticed, because it never exceeds the threshold of drawing conscious attention. That person might or might not be able to recall an instance as a discrete event. If not, was it a quale? What about the background hiss of air movements that might have been unnoticeable even if the person was listening for them, but which only caused slight vibrations of the eardrum and triggered some nerves in the cochlea? What about the continuous distant rush of a highway that you hear if you listen but not if you don't?
This reintroduces my original question about where the boundary is between "a process yielding a quale" and "the process of thinking about qualia". It also bears on the article topic: Can people be sincerely wrong (as opposed to lying) about their own experiences, defined as "making sincere but incorrect propositions about one's own qualia"?
A person can obviously "lie" by deliberately making statements inconsistent with qualia -- we don't need to know where the boundary is for that, we only need to assert that there is one. Likewise, a person can make "logical errors" by making mistakes in higher-level reasoning operating entirely above the qualia level (thus the person is "wrong", but their assessment of their qualia was correct).
Inconsistencies can also obviously be incorporated into the qualia themselves at the raw data level or the simple inductive processing levels directly above that. These would be nerve misfires or "sensory illusions", respectively. Sensory processing being probabilistic and inductive, it has to yield invalid results at least some of the time.
But in order to be "wrong about experience" as defined by "qualia", a person must make a genuine error exactly at the processing step that crosses the boundary between the low level processing that constructs the qualia and the higher cognitive processes that constitute "thinking about qualia". If no one agrees on where that boundary is with enough precision to identify a boundary-crossing step, then no, a person cannot "be wrong about personal experience" in terms of qualia because the definition of "quale" is insufficient to determine whether that happened.
That wouldn't so much be an assertion about the human mind as an observation about the language expressing the assertion. It would also mean that disagreement about the assertion will be very hard to disentangle from conflicting unstated assumptions about the words used.
I feel like the major issue with statements about meditative experiences is about the meanings of words. If you take Wittgenstein's idea that words acquire their meaning through use, it becomes clear that the more private and subjective an experience is, the harder it will be for us all to arrive at the same meaning. Think of that guy you wrote about who had no sense of smell but didn't realize it for years - was he wrong about saying that something 'stinks'? I'd say he was just wrong about what 'stinks' means. It's easy to be wrong about what words like 'concentration' or 'piti' mean. You think 'oh, piti isn't all that' for ages, and then one day you hit a jhana properly, feel like you're thrumming like an electrical pylon, and say 'oh, I was wrong, THAT'S what piti means'.
>If you take Wittgenstein's idea that words acquire their meaning through use, it becomes clear that the more private and subjective an experience is, the harder it will be for us all to arrive at the same meaning.
I very much agree with this. I have a strong belief that the meaning of a word is 100% a function of the listening component and nothing to do with the intention.
I think at least some of this is solved by the observation that the experience and the report/thought/interpretation are not the same; you can be wrong about the latter but not the former.
When you report that you "feel happy", you are aggregating a lot of complicated sense data (e.g. heart rate, temperature, dopamine, whatever) into a story that makes sense to you. But the story is not the sense data.
Similarly, if somebody says they experienced a 7D object while on DMT, likely this is just the story that fits them best; it was probably not "actually" a 7D object, but the sensation they experienced was whatever it was, independent of the story.
Boy I just spilled a lot of ink to make the same point a lot less concisely.
Can someone explain the John Edwards reference? I first assumed it to be one of those Berenstain Bears things where people tend to remember the wrong spelling better than the real spelling, but when I Googled "John Edwards" the John Edwards I was thinking of was spelled "John Edwards".
Ludwig Wittgenstein addresses the phenomenology of being wrong in his Philosophical Investigations as part of a wider discussion of the building blocks of language.
He provides the example of a person teaching another a series of numbers that follow a production function. Providing several examples of wrongness (all wrong, a mistake part way through, understanding but failing to catch an edge case, getting it correct accidentally) W. considers whether the person can be said to have learned that expansion correctly in each case.
I will not claim to fully understand what his conclusion is, since I am still working through the book, but it seems to be leading to his larger point that all language is antisystematic and idiosyncratic. As he so eloquently puts it, to imagine a language is to imagine a way of life.
"This woman wasn’t lying - if she had been, she would have just continued the charade."
She might not have continued the charade because she was embarrassed about being caught out in the charade. Or because she was embarrassed by the realization that she had been fooling herself about her alleged week of no thoughts.
There is no real dispute that people will exaggerate pain for monetary gain. And people will exaggerate both negative and positive experiences as attention seeking behavior.
There are many many times I've read a medical report that says: the patient is a poor historian.
The question is always about the line between being skeptical and being cynical.
I think there’s a motte and bailey here. If someone says they’re “not angry about their father” and then rants at you for an hour, then yes, they are angry about their dad.
Maybe they truly believe they’re not, I’m very willing to believe that. But they are angry. Being triggered into an aggressive rant when someone mentions X is, for basically everyone, the definition of being angry about X. Calling it “stress-related behavior“ is equivocating. This is true also for the person making the statement; the phrase “I’m not angry about X” virtually always means “I am not in a state where X will trigger stress-related behavior.”
You could say, as many people here have, that there are different levels to your subjective experience, and that the top level is never wrong. Provided you define that top level narrowly enough, then there’s no way to argue it. But that’s the bailey, it’s taking the framing to such an extreme that no one could possibly disagree with you.
The people who are saying that you can be wrong about your own experiences aren’t talking about the top level, they’re talking about the deeper states underneath. They’re saying that people don’t always have a very good understanding of their emotional states, and there are countless examples of this behavior.
>I think there’s a motte and bailey here. If someone says they’re “not angry about their father” and then rants at you for an hour, then yes, they are angry about their dad.
I had no idea you knew my eldest son.
Probably been said before, but I don't think there's much here beyond "it depends whether the person making the claim is claiming something falsifiable, or purely reporting an inner experience". There really is no middle ground, logically speaking.
I feel that if someone would say "being on DMT feels like how I would imagine seeing in 7 dimensions would feel" nobody would contest that claim. There's really little to contest. What a person experiences, unless they are intentionally lying, is tautologically true, and this statement should really be trivial by now.
But if someone says they literally see in 7 dimensions when on DMT, you'd call them a liar, because there's an objective, falsifiable claim that they're making, and you can probably test that (given a definition of seeing in 7 dimensions) and prove that they're wrong.
To tie that to the tangent of the Jhana talk - the problem with a lot of debatable claim-groups, like spoonies, pseudo-DIDs, Jhana enthusiasts, and so on, is that they are sort of doing neither.
Take the DMT example from above (when someone says they "see in 7 dimensions when on DMT"). Imagine the conversation you'd have if a friend of yours made that claim. It would probably start out with you trying to figure out what they mean, and the more it will seem like they're saying that they ACTUALLY SEE THAT WAY the more it'll turn into an argument. In the end, probably, when you'll sit down to well-define "seeing in 7 dimensions", that person would give a weaker definition that ends up being strictly internal, and you'll both call it a misunderstanding and carry on with your life.
Problem is, this probably happens to them reliably with smarter/more argumentative people, and to everyone else they're kind of being misleading.
With some people from what I called debatable claim-groups, it feels like when out and about, they use language that is very suggestive of objectivity, cause it makes their experience sound cool, or real, or many other desirable traits, but when confronted on what evidence they have they will retreat to "of course I'm not making any objective claim, and if you think what I'm describing is basically nothing then ok man, you're entitled to that opinion." And I think that's kinda bad, on their part.
I want to make it abundantly clear here - I'm not saying people like that are using that strategy deliberately and running some long con that earns them social capital on net, or anything calculating like that. Maybe some do. But more likely, they do have a strong inner impression that something "real" is going on, and are reluctant to admit "it's merely subjective" even to themselves, cause that might hurt their sense of identity, or any of the other innocent usual suspects for irrationality.
To illustrate this with a story - I sort of have synesthesia, smell to vision. Ever since I can remember, I get these split visions of colors, like spots on my eyes, when I smell things. For my entire life I wasn't sure how "real" it is, since the experience is very fleeting and it's not like I'm seeing a 3-dimensional illusion taking up space in the room I'm in, or something. Most of my life I didn't find it very exciting (only for a brief period when I started learning neuroscience), I never even shared that on a date, even though that's a stellar ice breaker with the sort of nerdy girls I tend to like (it's not that I'm so humble and above such tactics, I just didn't figure it would count as cool).
Anyway, during my degree I had to participate in some experiments (as a subject) to get credit on some psych/neuro classes, and had a bit of a hard time finding any for left handed people, cause, you know, we're sinister like that.
I eventually found one testing for synesthesia and then having the subject, if they do seem to have it, answer some questions on a computer test.
I talked to the guy who ran it for me about a year later, and he told me that I wouldn't believe the number of people coming in sure that they had a measurable phenomenon, but that testing them showed their reports (mostly colors seen when hearing sounds, or stimulating other senses) weren't coherent at all, certainly not to the point of statistical significance, on any relevant axis.
We both also agreed that they probably weren't lying, first because they really have nothing to gain coming to an anonymous experiment with false claims, and second because normal people don't generally lie about big things like that very often (at least to my experience).
So what was going on here? Well, I would reason that a big chunk of those people started up with some random phenomenon that could maybe seem like synesthesia if you squint at it, then read up about it online, or were exposed to it in a context where it sounded cool, and decided they might have it. I believe, much like Scott's views on placebo, that when dealing with high-noise signals, with a lot of degrees of freedom, the brain has some leeway as to what conscious experience ends up manifesting, and this somewhat depends on priors. So I believe (not with high confidence, mind you) that if you keep leaning into the narrative of "I see colors depending on what note I hear being played", you can end up in a cognitive equilibrium where it really does feel like that, even strongly.
Same goes for having a "module of a different person inside of you, so distinct it's more interesting than a normal person thinking 'what would X do?'" (I'm alluding to Scott's description of what the DID people in question say when asked about it). I'm sure it's very possible to feel that way, especially in an environment where other people feel it too and think it's cool, but the question is, if you sit down to define how exactly this module is different from "what would X do?", do you end up with something falsifiable or not? If you do, then test it, and we're done. If you don't... well, again, it seems to me that making tiktok videos of different personalities of yourself conversing with one another, or just the sense I got from reading these people's descriptions of their experiences (I didn't even know that was a thing before the post, btw)... it's all pretty misleading. Like, it's pretty clear that without A LOT of context, the median listener would be convinced those people mean something very distinct and falsifiable (even if the listener can't put that in those exact words). I get the sense that if [the average way one of these people talks about the phenomenon] was the pitch for a paid course that helps you develop such a personality, some people would end up suing the advertisers, and the judge/jury would agree that the customers were being misled.
On more of a side note: I also think Scott's general approach is a bit uncharitable towards anti-spoonie claims (though I myself am not that far on that axis). The talks about this really reminded me of the deliberations on the term "lazy" in the old "the whole city is the center" SSC post. I think it's not unreasonable to say that when feeling kind of under the weather, or even having some pains, or a mildly upset stomach, or whatever, you can choose to lean into it, feel a lot of it, or try to shake it off (usually by going on with your day and doing other things). If the discomfort is not intense enough that you can't really function, I think most people would agree that there's a correlation between the decision to not shake it off, and how bad you end up feeling that day. If you're the kind of person that really lets themselves lean into it, and also have some natural tendency towards some particular discomfort, you can end up, using the same logic as I detailed above, in an equilibrium where this discomfort is pretty chronic and is a big deal. If we assume for simplicity's sake that there's a trait such as "self-suggestibility", and that trait is distributed normally across the population (again, I'm aware this is a gross simplification), it seems very reasonable that if you take the tail end of really self-suggestible people, and intersect it with people who for different reasons have slight medical discomforts, you get a lot of spoonies.
Do such people take up a significant chunk of those who call themselves spoonies? I have no idea, but I think it's worth considering. And if they do, is that in any way their fault? Should we as a society wag our fingers at it? Well, maybe a bit. First of all, it would be effective to maybe give them different treatments (like maybe try hallucinogens to relax their priors). Second of all, perhaps once you're stuck in a bad equilibrium it's no longer a matter of choice, and blaming you is pretty pointless. But if you are the kind of person that systematically makes choices that lead to lesser functionality, and ends up hurting yourself or people around you thanks to it (e.g. cause your partner has to take care of you, or cause your colleague has to frequently cover for you at work), that's a pretty good steelman for the word "spoiled". Again, I myself am not that enthused by this claim, but I think it's worth thinking about.
Sorry for being so wordy.
We do know that *memory* can be wrong. So regardless of whether one can be wrong about what one is experiencing now, one can certainly be wrong about what one was experiencing five minutes ago, or even thirty seconds ago. (Once we get into durations shorter than the amount of time it takes the brain to do things like give the correct answer on a Stroop test - in which you're given words like "green" written in non-green letters and have to say which color the letters are - then the distinction between "now" and "memory" stops working.)
I think someone can *remember* their mental state incorrectly.
Once, after a trip, I had a clear visual memory of taking my passport out of my bag and putting it in a drawer, but then when I looked it wasn't there. I panicked for a minute before I found it in the bag. I realized i must have just *thought* about taking it out, pictured it in my head, and for whatever reason this was encoded as a "real" memory.
I don't think I ever had the subjective experience "I am currently taking my passport out of my bag." I assume that at the time, my subjective experience was "I am imagining taking my passport out of my bag."
Perhaps similarly, I remember an incident of randomly "astral projecting" when I was a child. I remember feeling somehow separate from my body, and have a visual memory of seeing myself from above.
However, I'm skeptical that my subjective experience at the moment was of *literally* seeing myself, as opposed to *feeling* outside myself and *imagining* what that would look like. In contrast, around the same age my vision briefly got stuck seeing triple. In that case I knew something was wrong and told my parents. I think if I had "really" seen my body from the outside in the same sense that I "really" saw triples, I would have acted like it was a bigger deal.
My stepdaughter Jessy would often say "Mum, I'm hungry," then instantly fall asleep.
> "My favorite example is time perception. You can meditate or take drugs in ways that make you think that your clock speed has gone up and your subjective experience of time is slowed down."
FWIW: I get this **a lot** in my ketamine infusion therapy sessions.
ETA:
> "(though this study suggests a completely different thing is going on; people have normal speed, but retroactively remember things as lasting longer)"
I dunno. When I am having said sessions, the experience is very interesting, internally. There's part of my brain that is hallucinating like a wombat out of Hel, and there's also still a little "me" in there (is that the "Ego"?) that is seeing what the hallucinations are and are able to comment on them and I am definitely having perfectly coherent (if odd) thoughts in English sentences. And that person is the one who goes, "Man, I swear it feels like time is *actually* somehow moving at a different, far slower pace, even though I know that is physically impossible".
I've even run experiments (my ket doc is awesomely open minded) where I had a timer running on my phone, and we keep the infusion device and IV drip set so that I can see them (ADD OCD ex-EMT, among other things) and even when I look at the time and note that yes, time is in fact passing at the normal rate, I can close my eyes again and go back into that molasses-flow.
Regarding "see[ing] seven-dimensional objects on DMT" I would absolutely believe that he felt that way. I've thought a lot of various interesting things about the way the universe should be able to be molded plastically by my mind when I'm on ket, so... even though that little "me" in there that's forming full coherent English sentences exists, he's not always thinking too clearly, either. ;)
I have definitely on many occasions wished that C'punk 2020 had turned out to be real so I could show my friends these amazing things I was seeing, though. I mean, among all the *other* reasons I have for wishing for that. Alas, no full skeleton replacement for me. :'(
ETA, again: And then I get down to the bottom and you're talking about a "homunculus fallacy" and now I don't know **what's** goin' on in my noggin.
Don't children have to be taught to recognize when they are hungry or sleepy? It isn't always obvious what one is feeling. There are all sorts of sensations that one has to learn to recognize as part of growing up. It isn't just the basics either. It can be things like jealousy, anger or trauma. There's a whole branch of psychiatry which tries to help people understand what they are actually feeling. You can't necessarily argue with people's sensations. They are feeling them. You can argue with how they are interpreting them.
I've several times had the experience of being entirely mistaken about which external label matches my internal experience. I remember an aha moment of "so this is what anger feels like!" I'm still not totally sure if I experience sexual attraction or not. So I could easily imagine someone storming off when I mention their dad AND internally experiencing something I would call anger AND honestly saying “I’m not still angry about my father” because they map "anger" onto a different internal experience than I do. Definitively not saying this is the only explanation for the example -- just that translating internal experiences through words adds an extra layer of complication.
I'm quite sure not many people will come all the way back here, but I would like to report that as a meditator in the progress of deepening their practice, I have also attained some of these deeply pleasant unfalsifiable states, which, according to some commenters, I have not experienced. Thereby I would also like to report that these deeply pleasant experiences, which I apparently have not had but of which I have a solid recollection, are likely attainable for almost anyone who is simply willing to practice and able to do so well (perhaps under proper instruction).
As for why meditators wouldn't, then, spend their entire lives in jhanas - something which meditation teachers sometimes do mention being a risk - I recommend both Culadasa (John Yates)'s The Mind Illuminated and Daniel Ingram's Mastering the Core Teachings of the Buddha. Both of these go into detail on why jhanas are not, and often should not be, an end in and of themselves.