B) We will also have the card game Predictably Irrational. Feel free to bring your favorite games or distractions.
C) We usually walk and talk for about an hour after the meeting starts. There are two easy-access mini-malls nearby with hot takeout food available. Search for Gelson's or Pavilions in the zipcode 92660.
D) Share a surprise! Tell the group about something that happened that was unexpected or changed how you look at the universe.
E) Make a prediction and give a probability and end condition.
F) Contribute ideas to the group's future direction: topics, types of meetings, activities, etc.
Conversation Starter Readings:
These readings are optional, but if you do them, think about what you find interesting, surprising, useful, questionable, vexing, or exciting.
1) Georgism... In Space! Just and proper political economy for an interplanetary civilization
Georgism... In Space! - by Sam Harsimony
How did Georgism define economic land, and why is it important for tax policy in space?
Can you explain how Georgism can be applied to space resources like energy, matter, and physical space?
How might governments in space subsidize the collection of solar energy?
What are some difficulties with taxing matter in space, and how might taxes need to be adjusted to avoid distortions?
How can physical space in space be taxed, and what factors might be considered in assessing the value of land in space?
How can excludable resources like broadband spectrum and orbits be properly managed in space?
Why is it important to strike a careful balance when implementing taxation policies for space colonization?
I've seen talk about ending wildlife suffering, a goal which seems impractical, but while we're at it, why not look at optimizing for pleasure for wild animals?
While ending wildlife suffering sounds like Fourier wanting to turn the seas into lemonade, that was a heartwarming video. I wonder what other approaches there could be to optimize pleasure for wild animals.
I'm extremely worried about AI risk lately, especially after the twin shocks of ChatGPT and Sydney. I want to do something to help. I live near DC and know people with political connections who could potentially help me get a policy meeting with a member of Congress. Two questions:
1. If I can get such a meeting, what specific policies should I propose? My own ideal is to ban all AI research, but I know that's an impossible ask.
2. Any insights on members of Congress who are particularly good targets to try and meet with specifically? I'm talking to my connections about this too, but I'm wondering if anyone here has knowledge specific to the AI risk field, in terms of which members may be receptive or have been receptive in the past, and have the power/motivation to try to do something about it.
1. Policy is a bit of an open question. This section of the online book Better Without AI (https://betterwithout.ai/pragmatic-AI-safety) may go into that, I haven't read that far yet. He has argued that AI mainly runs on hype, so if public perception turns negative, the whole thing would shut down. So policy may not be as effective as advertising.
Fucking hell, I love that G. E. Moore was dumb enough to argue that holding up his hands in front of him and saying 'Here is one hand, here is another' was enough to prove the existence of an external reality, Jesus, I am going to murder every philosophical position I disagree with with that one.
I later saw that there's some more to it, but the first link I saw said it was simply what I said. Actually, now that I saw the logical formulation, it really isn't better at all.
The trouble with appealing to common sense is that that is the death of philosophy. In particular, you really cannot appeal to common sense while trying to convince me I don't exist, because come on dude.
> In particular, you really cannot appeal to common sense while trying to convince me I don't exist
Per my other reply to you, I'm not trying to convince you that you don't exist, but merely that you don't have the existence you thought you did. This should not be surprising at all. Most cells in your body are replaced every 10 years. Are you the same person you were 10 years ago? What are the specific properties you ascribe to "existence" and "you"? This is not an obvious question, so why do you think you already have all of the answers?
As for Moore's proof, it's a simple argument in the end. Every argument of skepticism about the external world depends on concepts, evidence and even logic that we derived from the assumption that the external world exists, so if you then turn that around to question the existence of an external world, then you undermine all of the concepts, evidence and logic that form the core of your argument, so any such argument is self-defeating and necessarily less plausible than the external world just existing.
The existence I thought I did? I am perfectly willing to believe I am not actually physical, but that doesn't mean I don't exist.
Am I the same person from 10 years ago? No, but it is the same sentience. The witness has always been there.
About skepticism, it's at least partially derived from observation, but observing the world does not at all mean that the world is what it seems. I do think concepts, evidence and logic are all flawed and are just limiting viewpoints, useful to apprehend certain aspects of the truth but unable to contain all of it.
Likewise to eliminativism, first you had to read some philosophy to reach that conclusion, because you did not always believe you lacked consciousness. Your consciousness had to experience that philosophy, so eliminativism is using consciousness to overthrow consciousness.
> I am perfectly willing to believe I am not actually physical, but that doesn't mean I don't exist.
Nobody is claiming that you don't have some existence.
> Your consciousness had to experience that philosophy, so eliminativism is using consciousness to overthrow consciousness.
Nah, the information processes that constitute "me" threw out the fiction of consciousness after processing the information and being convinced by eliminativism and science as a whole.
It's not the same Moorean argument because what I'm throwing out is the qualia/phenomenal experience but not the perception, and the latter is all that's really needed.
I'll start: One of my personal favourites is Norm Macdonald's gag about Bill Cosby on Comedians in Cars Getting Coffee. Everything about that joke is perfect.
Truly awesome. A friend of mine used to play poker with Norm - he said he was so funny no one noticed he was winning. Perhaps they didn't care. Poker Patreon?
F-ing brilliant. Note how Jerry is all over the provenance -- he knows it's a gem. I suspect it's Ricky's joke, but by saying he doesn't remember where he heard it, Jerry can't ask for it... Comedians are fiercely competitive.
Considering how many jokes I've heard you'd think I'd be able to remember them better. Also depends what counts as jokes; the funniest stuff tends to be improv or machine translation games.
Not the best, but one that always stuck with me was from Mitch Hedberg. "I went to the store to buy candleholders, but they were out of candleholders. So I bought a cake."
I was reading through an old bookmark and saw that Scott was still looking to review Nixonland eventually. I read the book myself after that banger book review, but found that it's actually part of a quartet:
I've not read Before the Storm or Reaganland yet, but Nixonland flows near seamlessly into Invisible Bridge and I imagine the other two books segue cleanly too. I'd napkin that the whole thing is close to 150 hours of audiobook.
Even though Perlstein is quite partisan (he wrote an '05 book on how the DNC could take center stage again), I'm not sure anyone else had even attempted to write a consolidated history like this of American society/politics/discourse/mood for such periods of time.
I'd grown up around adults who made mention of things like Woodstock and Watergate or the '68 DNC riot, and even someone who still couldn't believe we'd elected an actor (Reagan) as president. It was all ancient history, with the current era being written on TV as and after the towers fell. Reading another generation's political experience connected *a lot* of dots in my zoomer mind. I'd defo push these books at any zoomer interested in politics frfr (or anyone seeking to understand the boomers).
I know there has been back and forth on this, and it is not the wonder drug it was thought to be for a while, but I've been taking a single aspirin before going out to shovel a large amount of snow for a couple years as a sort of voodoo ritual to protect myself.
I'm 70 now but have no diagnosed cardio issues, and have good cholesterol numbers, good blood sugar, healthy weight, good blood pressure and an echocardiogram shows no buildup of bad stuff in my heart. In the summer I still ride decent mileage on my bike, though my 'century' rides are all metric now. I also sit and crank my old Schwinn on a turbo trainer in the winter until I think I'm risking dying of boredom.
But... when my dad was my age he died from his third heart attack so I figure taking an aspirin before I go out and perform what used to be a pretty reliable widow maker chore, hell, it can't hurt.
Does this make sense to anyone who actually knows about this stuff?
What you want to be careful about is the fact that aspirin has a significant anti-clotting function, and one thing you *don't* want to have happen is a brain bleed that doesn't get sealed off right away, i.e. a hemorrhagic stroke, either a big one, or even the very small kind that you don't notice but which degrades your brain over time.
I don't have any useful advice about this, this is something you want to discuss carefully with a physician, I'm just observing the tradeoffs. People dismiss the anticlotting issues with aspirin because they think "oh a GI bleed is less scary than an MI" but they forget about the stroke possibility.
The Biden Administration is obviously going to lose in the Supreme Court regarding the student-loan cancellation. And on the merits they should lose, though this Court would rule against them regardless.
Both the president and his staff know that's what's going to happen, and privately don't actually mind it. The whole thing is kabuki theater.
I have been getting a lot of usage out of the saying "if you're not going to do something, you might as well not do something big" recently. Everybody in politics seems to be following that mantra these days.
Having done a bit more reading on the case I would adjust my prediction somewhat. The standing of the two sets of plaintiffs, one in particular, appears a good deal weaker than I was aware of. I'd still wager in favor of the Court ruling against the administration. But there does seem to be some fair chance of 5 justices agreeing that the plaintiffs' standing to sue over this topic just doesn't pass a smell test, leading to a procedural ruling rather than a merits one.
I only read about half of that article, but I think that may be the wrong take. People with bad mechanics rely on conscious thought, possibly in the same way that chess masters instinctively know the good move while amateurs have to think quite hard. But chess masters go through a long stage of thinking hard about every move before reaching that level, and "don't think too hard" would be terrible advice for someone looking to improve.
Similarly, focusing on your muscular form etc is often an important part of building good movement habits. The end goal should be to make it unconscious but that's not necessarily the way to get there.
As I understand it, the problem was that people were being told to pay attention to their knees during rehab. Perhaps the issue is that they weren't brought to the level of unconscious competence.
“We know that early-life stress impacts the brain, but until now, we didn’t know how,” Baram said. “Our team focused on identifying potentially stress-sensitive brain pathways. We discovered a new pathway within the reward circuit that expresses a molecule called corticotropin-releasing hormone that controls our responses to stress. We found that adverse experiences cause this brain pathway to be overactive.”
“These changes to the pathway disrupt reward behaviors, reducing pleasure and motivation for fun, food and sex cues in mice,” she said. “In humans, such behavioral changes, called ‘anhedonia,’ are associated with emotional disorders. Importantly, we discovered that when we silence this pathway using modern technology, we restore the brain’s normal reward behaviors.”
So there's this cluster of positions called "Illusionism" which is about doubting the existence of consciousness to various degrees, whatever that means exactly. I'm very interested in understanding how people think about this in more detail, so if anyone here is sympathetic to that set of ideas, I'd like to hear from you! Like, what is consciousness, what exactly does your position say about it, and why do you think that? (And if it's applicable, what do you see as the objections to that position, and why are they unconvincing?)
If it's relevant, I'm mostly through Dennett's "consciousness explained", and I think I understand his model quite well.
Eliminativism is specifically about the ineffability of mental states, the "what it is like". The illusion of consciousness might be like the "illusion" of solidity. We know solid objects are mostly empty space, but this doesn't eliminate the category of "solid", it just reframes what "solid" means. Somewhat analogously, we know that ineffable qualia are incompatible with a scientific picture of reality, and rejecting the naive first-hand "ineffability" as deceptive will permit us to reframe and properly understand what's going on with "consciousness".
I think this "ineffability" is the same sort of woo that so troubled philosophers a century ago when they were trying to explain how inanimate matter could lead to life, and so they invented vitalism. Vitalism is nowhere to be seen now because progressive improvements in our understanding of the mechanisms of life shrunk the explanatory gap to the point where it seemed implausible that there would be anything left to explain once that process neared completion. I think the same pattern will repeat with consciousness.
I think the objections to this are well-known: p-zombies, Mary's room, etc. P-zombies just aren't conceivable. A p-zombie world in which all of our current philosophers arguing about consciousness are just automatons lacking this "ineffability" is observably indistinguishable from our current world; asserting that this is not actually our world just smacks of assuming the conclusion that this ineffability exists to begin with. We lose nothing meaningful by just accepting we are those automatons. I don't find this particularly objectionable because I'm also a Compatibilist about free will.
For Mary's room, I think the whole argument rests on her having "all knowledge". Basically, Mary has so much knowledge that she is able to answer an infinite series of questions about any conceivable topic that reduces to physical characteristics. Anytime you bring in some kind of infinity you start getting unintuitive results, and humans are notoriously bad at intuiting correct answers in such contexts. I think this is just another example.
Even if this were overcome in some revised Mary's room, I think there are a lot of reasons why Mary could still be surprised upon seeing red the first time (like the ability response), which are compatible with physicalism. There just isn't anything convincing there.
"For Mary's room, I think the whole argument rests on her having "all knowledge". Basically, Mary has so much knowledge that she is able to answer an infinite series of questions about any conceivable topic that reduces to physical characteristics."
I think that's a misleading analogy. The axioms of Peano arithmetic are finite and not even lengthy, but they still allow you to answer an infinite series of questions.
> The axioms of Peano arithmetic are finite and not even lengthy, but they still allow you to answer an infinite series of questions.
Indeed, and yet almost every mathematician in the world was shocked when Gödel effectively proved that Peano arithmetic was necessarily incomplete, which proves my point that humans are generally not great at intuiting results when infinities are involved, even when they're experts.
Nor has anyone shown that Mary's room is self-consistent and that the infinity doesn't matter, so we're once again at the place where an intuition pump purporting to prove the existence of qualia does nothing of the kind.
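The finite-axioms-versus-infinite-questions point in the exchange above can be made concrete: a single induction, licensed by one Peano axiom scheme, settles a question about every natural number at once. A sketch in Lean 4 (the theorem name is mine; Lean's library already has this fact as `Nat.zero_add`):

```lean
-- One finite proof answers infinitely many questions:
-- "is 0 + n = n?" for each of the infinitely many n.
theorem zero_add_sketch (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                         -- base case: 0 + 0 = 0 by definition
  | succ k ih => rw [Nat.add_succ, ih]  -- step: 0 + (k+1) reduces to (0 + k) + 1
```

Gödel's point, of course, is that no such finite axiom set settles *all* arithmetic questions, which is exactly the surprise being described.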
"A p-zombie world in which all of our current philosophers arguing about consciousness are just automatons lacking this "ineffability" is observably indistinguishable from our current world; asserting that this is not actually our world just smacks of assuming the conclusion that this ineffability exists to begin with. "
A zombie world would only look the same objectively. It would feel different subjectively to everyone in it, i.e. it would feel like nothing. Ignoring the subjective evidence amounts to ignoring consciousness, and is itself question begging. The argument is then: there is no consciousness, so we are all zombies already, so there is no consciousness.
No the zombie wouldn’t be conscious. If your definition of being a zombie is “something that thinks they are conscious” then that’s me, but then a zombie is indistinguishable from a conscious being.
There’s no science here - I think you are more “philosopher” than scientist anyway - basically all you are saying, in a verbal excrescence, is that we can’t prove qualia, therefore they don’t exist.
> If your definition of being a zombie is “something that thinks they are conscious” then that’s me, but then a zombie is indistinguishable from a conscious being.
That's the whole definition of a p-zombie. They are physically indistinguishable from so-called "conscious beings", talk, walk and speak exactly the same things as conscious beings, but instead of being conscious and talking about consciousness, they're just completely wrong about their own mental states while asserting they have consciousness.
> Ignoring the subjective evidence amounts to ignoring consciousness, and is itself question begging.
Only if you count subjective perceptions as reliable evidence. They are demonstrably not.
> The argument is then: there is no consciousness, so we are all zombies already, so there is no consciousness.
Not really, it's more like: why posit more entities than are necessary to explain the data? It's more parsimonious if p-zombie world were just our world and we're just mistaken about consciousness, and perceptions are commonly mistaken, so I should epistemically prefer "no p-zombies".
Said another way, what's more plausibly correct: our most successful and effective method of explaining and predicting the natural world, science, or our subjective perceptions which science has proven are demonstrably flawed in innumerable ways?
After all, you can't actually prove that consciousness exists without simply referencing your perception of internal conscious experience. All of the logical arguments purporting to demonstrate its existence are fatally flawed. Unless you can prove it, there is no reason to posit its existence. That said, I agree a mechanistic theory should explain why we *believe* we have consciousness, and I think Graziano's paper which I referenced above is a great start on that.
So I've been reading this back and forth (thanks for having it!) and I feel like I understand the models of both sides and why you seem to be talking past each other, but mb I'm completely wrong! But here's an attempt to explain them. If anyone wants to give feedback on this, I'd be very interested.
I think the implicit disagreement that basically generates the arguments on both sides is what you take as the epistemic starting point. (I'm gonna represent both sides by Alice and Bob here so that if I misrepresent someone it's less bad, also one side was argued by multiple people.) So Alice would argue that the way you reason about the world -- not about consciousness specifically, but about everything -- is that you have patterns of qualia and then react to them. That's how you navigate the world; you see qualia of something, like the image of a car in your visual field, and you react to it somehow. So the epistemic chain is something like [qualia] -> [interpretations of qualia based on experience/knowledge/intelligence/whatever] -> [conclusions about the world]. This qualia could be a non-material thing that acts on the body, or another aspect of the material stuff in the body; the distinction actually doesn't matter here.
So consequently, the qualia takes epistemic primacy. That's your *starting point*. Which means any theory has to explain qualia first and foremost. And it's a-priori impossible for science to rule out the existence of qualia because everything science does is *itself* based on qualia. If you're working in a lab, you don't start from the measurement itself; your starting point when you look at the display of an instrument is the *qualia* of the result that appears in your visual field, and then you conclude stuff based on that.
In particular, the argument from the paper,
> (1) for everything we know, there must be information in the brain
> (2) the brain's models are always fallible
> (3) therefore, our "knowledge" that we have qualia is fallible
completely misses the point, as does the illusionist framing in general. The reason is that all of these arguments only show that you can be misled about *what an input means*; that's what an illusion is. None of them show that the *input itself* is non-existent, which is the only thing that matters! Alice isn't claiming that her qualia are evidence for any particular thing in the external world -- such a claim could indeed be wrong! -- she's only talking about the qualia itself, and the argument above doesn't show how that could be nonexistent, and neither does any other argument made here. Again, it doesn't even make sense to her because *everything* you find out about the external world is itself based on qualia; it all starts at the same point.
Meanwhile, Bob identifies himself not with qualia but with his entire body as an information processing system. The qualia thing isn't the epistemic starting point; it's an *output* of the information processing system that is Bob (or anyone else). So conversely, the argument "you need to explain why there seems to be experience" misses the point because, well, the "seeming of experience" thingy is also just an output of the information processing system that is you. So you do have to explain why *the system produces this output*; you have to explain why Alice talks about 'kon-shush-nuhs' but you *don't* have to explain the experience thingy itself, because that's just something the information processing system talks about; it doesn't have to be true.
(This is like Dennett's Heterophenomenology; we treat the output of the information processing system like a fictional story; we assume it's telling the truth but that just means we assume it thinks it has this thing; we don't give the thing it talks about special epistemic status. The part that's confusing to Alice here is that you even model *yourself* from this third-person perspective, sort of.)
So as long as Alice waves her hands and stubbornly repeats that *no you really have to explain this experience thing, it's really there*, all that misses the point for Bob because it all assumes that the qualia is the epistemic starting point, which it isn't; again it's just an output. The only evidence that *would* count is, basically, anything that's valid from a third person perspective. So if we found that modeling an experiential component of qualia actually does a wonderful job explaining human *behavior*, that might be valid evidence. Or if we argue about the a priori *complexity* of a universe with qualia in it, that could be relevant for the prior we can assign to both hypotheses. Or if we can take the qualia hypothesis and use it to predict something about the neuro-anatomy about the human brain, something about how the brain processes high-level information on a functional level, that would be impressive. But appeals to the epistemic primacy of qualia aren't.
Does this sound right? I feel like if it is, then neither side has really provided evidence that's compelling the other side -- understandably so!
I don't think the qualiaphilic side need to lean very far towards the primacy of qualia, so long as consciousness is not ignored. In a way, what is epistemically primary is some notion of experience or perception including, but not limited to, qualia.
"if we can take the qualia hypothesis and use it to predict something about the neuro-anatomy about the human brain, something about how the brain processes high-level information on a functional level, that would be impressive"
But it's still not invalid to say that qualia exist without having novel causal properties, so long as they are identical to something else... if qualia are not an entirely additional ontological posit, then they do not have to justify their existence with novel causal powers.
Yeah, I took qualia as a standin for "subjective experience of any kind"
> But it's still not invalid to say that qualia exist without having novel causal properties, so long as they are identical to something else... if qualia are not an entirely additional ontological posit, then they do not have to justify their existence with novel causal powers.
Not novel causal powers, but perhaps causal powers period -- even if they're also explainable in material terms?
I think you have the general contours of the situation. Some people take the primacy of qualia as a given, that it's a new kind of knowledge that simply cannot be questioned because it can be directly experienced.
This seems to inherently beg the question to me. Scientific study always starts with "working definitions" that serve as a good starting point but require refinement as we develop a more complete, coherent picture of what's going on. We started with the premise that qualia exist because we had no reason to question their existence.
So we did our science and ultimately ended up where we are today, with a scientific picture of the world that is *incompatible with qualia*. This state of affairs *requires revision to our assumptions and basic definitions*, as it would in any other scientific study, and we can devise alternative explanations, like eliminative materialism, that resolve this problem. But because Alice takes qualia as axiomatically true this is not a solution but an absurd kind of heresy, and rather than question the primacy of qualia, she would prefer to question various assumptions like reductionism or materialism.
There is no logical argument or evidence definitively demonstrating that qualia must exist or must have primacy, and the only argument from my side is epistemic parsimony and a recognition that nothing has been as effective as science at explaining the natural world of which we are a part.
Edit: to clarify, in some sense I understand Alice's argument that science is built on qualia and therefore you cannot question qualia with science because that then undermines the very science you're using as proof, so that's self-defeating. The response to this that I've posted a few times now is that qualia are not actually essential to science, you need only perception. A machine learning algorithm with a sensor can do science, so the experiential part is not strictly necessary.
> So if we found that modeling an experiential component of qualia actually does a wonderful job explaining human *behavior*, that might be valid evidence.
Yes, and we do have evidence that qualia are not integral but merely components of an information processing system. Phenomena like blindsight show that humans reliably report objects in their blind field even when they don't consciously experience them. This is clear evidence that conscious experience and qualia are simply not what we perceive them to be, that they are merely a component of a system.
> Edit: to clarify, in some sense I understand Alice's argument that science is built on qualia and therefore you cannot question qualia with science because that then undermines the very science you're using as proof, so that's self-defeating. The response to this that I've posted a few times now is that qualia are not actually essential to science, you need only perception. A machine learning algorithm with a sensor can do science, so the experiential part is not strictly necessary.
I think in the Alice model, it is possible to do science without qualia, but the evidence that you get from science -- and even the evidence that you are doing science at all -- is again qualia.
Anyway, I feel like this does point to a possible way that the problem could be resolved in principle. Like, Alice and Bob could agree that they can't agree on the epistemic starting point, so they could take the scientific validity of qualia as a crux. It'd be up to Alice to explain (a) how qualia works at all, (b) how a universe with qualia is philosophically simple, (c) how various phenomena like blindsight or the moving color thing from Dennett are compatible with a theory of qualia, and (d) how qualia makes functional predictions about the brain. If she could do all that, it ought to convince Bob that qualia exists after all.
>ultimately ended up where we are today, with a scientific picture of the world that is *incompatible with qualia*.
You haven't shown that.
There's an argument against dualist theories of qualia, based on physical closure, where qualia would have nothing to do, and are therefore an unnecessary posit.
There's an argument against identity theory based on irreducibility. You haven't even mentioned it, in line with your tendency to ignore identity theory.
So that's two arguments against two theories of qualia. They don't add up to an argument against qualia unless they are exhaustive.
"This is like Dennett's Heterophenomenology; we treat the output of the information processing system like a fictional story; we assume it's telling the truth but that just means we assume it thinks it has this thing; we don't give the thing it talks about special epistemic status"
But "qualia" as such doesn't appear in naive phenomenological reports because it is a philosophical term of art. The naive theory is that colours, etc, are properties of external objects that are perceived exactly as they are. Naive realism, as it's called, is unsustainable scientifically because science requires a distinction between primary and secondary qualities. In addition, there are specific phenomena, such as blindsight and synaesthesia, where qualia are missing or unusual. Qualia aren't uniformly rejected by scientists, for all that some philosophers insist they are unscientific.
Objective, scientific data aren't a naive starting point either. Scientific objectivity has to be trained, and the process consists of disregarding the subjective and unquantifiable -- which has to exist in the first place, in order to be disregarded!
> Objective, scientific data aren't a naive starting point either. Scientific objectivity has to be trained, and the process consists of disregarding the subjective and unquantifiable -- which has to exist in the first place, in order to be disregarded!
This begs the question. Focusing on the quantifiable and objective is not an implicit assertion that the subjective and unquantifiable exists, it is an epistemic stratagem to focus on that which *can* be quantified *at this time*, and progressively build understanding to a point where the previously unquantifiable can then be quantified.
The opposite is true. Given that everybody has an internal conscious experience, that has to be explained. Any science that doesn't explain it isn't a science; in fact it's just hand-waving, because we don't understand the brain.
> If subjective perceptions are not reliably evidence, aren't you knocking out all of science? There is no science if we can't observe reality.
I've responded to this point elsewhere in this thread as well, which I'll reproduce here:
> Science also does not require fully reliable perceptions or senses because it can quantify the unreliability via repeatability, and 1) restrict itself to the narrow domains in which perceptions are reliable, and 2) project measurements from unreliable or undetectable domains into the reliable domain. That's what instruments are for.
But how are you discerning which perceptions are reliable? And even if perceptions are unreliable, there is still the fact that we perceive. Reality could all be an illusion, but the illusion is being presented to someone.
> Said another way, what's more plausibly correct: our most successful and effective method of explaining and predicting the natural world, science, or our subjective perceptions which science has proven are demonstrably flawed in innumerable ways?
sorry to jump in, I'm just curious, does this sentence imply that all [theories of consciousness under which consciousness is real] necessarily contradict science? Like, there's no way to have a theory that posits the existence of consciousness but is consistent with the laws of physics (and hence science/effective explanations/etc.)?
This would sort of mean that a second, implicit reason why you like the Dennett approach is by process of elimination; all the alternatives are bad.
Strictly speaking, no. For instance, panpsychism would not require changing any natural laws to explain things we've seen; it might simply require us to accept that every posited entity carries with it some speck of consciousness, and that natural laws will aggregate consciousness in various unobservable (subjective) ways. Human brains are then an aggregation of consciousness that can finally reflect on and understand consciousness itself.
If you consider first-person consciousness to be irreducible to physical facts, that's probably an elegant way to recover science with a somewhat unverifiable component. Seems more plausible to me that we're just mistaken about our own mental states.
Re: process of elimination, in a way, yes. I go into that below in my thread with The Ancient Greek. It's just epistemically more justifiable in so many ways.
But panpsychism is clearly ridiculous. Consciousness is linked to brains. I think I could discourage panpsychists from their beliefs by asking them whether they would prefer to be shot in the leg or the brain.
The argument only requires that you can introspect subjective states that you can't fully describe. It doesn't require that subjective states are accurate representations of anything beyond that. In particular, non-physicality is not asserted purely on the basis of introspection.
The data include the subjective data, unless you are begging the question by ignoring that.
It would also be parsimonious if only consciousness existed, and matter were an illusion. Parsimony does not imply a unique ontology.
You can't prove matter exists without referencing your own experience.
> The argument only requires that you can introspect subjective states that you can't fully describe.
"Can't fully describe" is just a god of the gaps argument.
> The data include the subjective data, unless you are begging the question by ignoring that.
I'm not ignoring subjective data, I'm saying we have ample reasons to consider it unreliable, therefore we cannot derive any reliable conclusions from it until its reliability is quantified.
> It would also be parsimonious if only consciousness existed, and matter were an illusion.
I disagree. Build a formal model of consciousness and then we have a basis for comparing its parsimony to the standard model of particle physics. We have no such thing, therefore this is no different than saying "god did it". The number of logical properties we must then assign to god/consciousness dwarfs the standard model.
> You can't prove matter exists without referencing your own experience.
"Experience" implicitly smuggles in qualia. I would only agree with the phrasing, "You can't prove matter exists without referencing your own perceptions", because perceptions don't implicitly assert that conscious experience exists.
Consciousness is required to argue the non-existence of consciousness. P-zombies on their own wouldn't suddenly start arguing about the existence of consciousness and qualia without being programmed to do so by some conscious entity.
In fact, the whole enterprise of science depends on our consciousness interacting with our qualia. You might argue, as Nagarjuna did, that consciousness and qualia have no existence in and of themselves, and are instead the emergent phenomena of the interaction of underlying processes—and those processes, when examined will be seen to have arisen from deeper processes—ad infinitum. However, Nagarjuna didn't stop there. He was willing to admit that the "illusion" of mind and qualia (generated by the underlying processes) was as functionally real as the underlying processes.
And invoking parsimony doesn't move your argument along. The Law of Parsimony implies that there is a correct explanation for a phenomenon. Saying there is nothing to explain is not parsimony, it's just refusing to consider the problem.
Also, the Mary's Room experiment has been done. Not that it really helps to resolve the philosophical loose ends...
Thanks! That makes a lot of sense to me. Very in line with Dennett. Also listened to the paper you linked, which fits as well.
(fwiw I totally agree that the two objections you listed are extremely unconvincing. I even consider p-zombies an argument in the opposite direction; if your theory permits the existence of p-zombies, that's a problem.)
One thing I'm wondering, if you're willing to elaborate, is how you square this picture with morality. If qualia don't exist, then consciousness either doesn't exist or is just a name for a high-level process; either way there's no actual experiential component to the universe, no "what it is like". (Or do you disagree with this?) This seems to imply there's no suffering. Do you just have a moral theory that works without any conscious states? Can you have suffering without 'what it is like' type experience? Or does it imply ethical nihilism?
I don't think it implies there is no suffering, it simply reframes what suffering is, similar to my solidity example. Solidity is not the absence of empty space, it's just a different property, like the inability to pass another solid through it (roughly); analogously, eliminating ineffability doesn't entail the absence of pain or suffering, pain and suffering are simply understood to be something else, like evolved preferences that avoid damage that harms our fitness. That still sounds like enough to ground a utilitarian ethics to me.
Other ethical frameworks don't rely on preferences or values in the same way, so I don't think there's a problem there, e.g. deontology or virtue ethics.
Why is this being done? If Dahl's works are so flawed, and so many of the passages need to be edited to the point of losing Dahl's characteristic nastiness and not even being recognizably Dahl any more, why not just toss the whole thing out? What's the point of keeping something so flawed?
The obvious answer is that modern corporations and woke writers are so bereft of genuine creative talent that even a dreadfully unprogressive straight white Englishman born over 100 years ago was creating categorically better art than all these modern 'enlightened' fools could ever dream of making themselves (or at least, if they don't recognize Dahl's actual greatness, they certainly acknowledge the enduring popularity his works have that their own works do not).
The edits made to Dahl's books feel to me like PR stunts that are intentionally stupid in an attempt to invoke toxoplasma of rage. I find it really hard to believe that anybody sincerely thought a book where one of the heroes famously owns a factory run by African slaves could be made to seem progressive by replacing the word "fat" with "enormous".
You're right, I had forgotten about that. But even the 1972 version still has them being slaves from *some* foreign location, right? It's just left open-ended what continent they came from originally.
No, it had been changed quite a bit, I think. The story was that Wonka had "rescued" them from some terrible life of persecution, and there was some sort of symbiosis in their working for him in the chocolate factory. But just looking at stuff online about the African version of the story (which I've not read), it also sounds like there was a bit of justification in the story of him "rescuing" them, as opposed to capturing them in the way we know that many actual slaves were captured.
I wonder how many of the people protesting about the current sanitisation of the book know about the previous rewrite and whether in hindsight they would think that was a good or bad thing?
I read The Coral Island by RM Ballantyne to my kids when they were under 10 years old. It was written in 1857 and I have a very old copy that was given to my grandfather when he was in Sunday school.
It's a ripping adventure of boys stranded on a desert island and it also contains the N word in a description of marauding cannibals that come to the island.
When we came across that use of the word we were able to have a very useful discussion about it, including the idea of how language changes over time and why words that were thought to be innocuous in one place and time can be hurtful in another context.
Personally I think that simply changing an author's original text is going a bit too far, but perhaps this controversy will at least stimulate a bit of conversation between children and their parents about the importance of context in the use of any and all words.
But editing children's stories isn't a new thing and I remember a similar level of discussion (in the UK at least) when the gollywog character was edited out of the Noddy universe. I'm sure that some people who are on one side of the argument here might have been on the other side in that case.
It might also be interesting to think about why nobody seemed at all put out about terrible film versions of Dahl's stories, or wondering why it's just fine to express his ideas in a rather better musical but without the use of the offending words.
If it's not a case of vanilla censorship but the parallel production of institutionally approved alternative versions of books deemed problematic, then aren't we in "Hegelian Wound" territory a la Žižek?
E.g.: First you have a natural state of things, the original Dahl ideas and writing. Something comes along and disrupts this state by imposing its own values/agenda - the Updaters - and they inflict a wound on the original. But Hegel comes in and says: wait, this wound is not fatal; actually it is a wound that contains the vehicle for its own healing and transcendence. See, were it not for the attempt at vulgar re-writing, the original writing would not have a context in which to demonstrate its own inherent virtue and value to society. The wounding makes the original stronger, in ways that previously were not thought possible.
Who's the "institution" doing the approving? Is it the publisher? If so then every book Puffin Books ever published after requesting changes from the author (i.e. the normal editing process) was an "institutionally approved alternative version"
An alternative answer would be that the updaters believe that Dahl's books lie somewhere in the grey zone between "unsalvageably old-fashioned" and "better than all modern children's fiction", and that the update will help sell more books to woke parents.
Philip Pullman worked very hard in His Dark Materials to make Satan seem like the good guy, and while there is something compelling in his vision, I am suspicious of someone who wants to make the Prince of Lies into a rebel hero, and God into a petty dictator.
I mean, I basically believe the YHVH of the Old Testament is insane, but Jesus still loved Him, so I don't think it's quite as straightforward as Pullman presents.
I could have almost been the inspiration for Augustus Gloop when first reading Charlie And The Chocolate Factory when I was 11.
And yet, I loved the story, and read the also-wonderful James And The Giant Peach shortly afterwards. It never occurred to me to be offended by either book.
I was much more aware of Dahl's edginess when reading his books to my children years and years later.
> What's the point of keeping something so flawed?
If you're the beneficiaries of the Dahl estate, the benefits are obvious.
The whole thing is probably best seen in the context of sacrifice. In this case, the spotlight has swung around to Dahl's privately expressed views about Jews, and a sacrifice was necessary to appease the powers that be. You can't change Dahl's opinions about Jews, but you _can_ change his published books, so you do that, and the spotlight moves on somewhere else for now.
It doesn't matter what the changes are, it just matters that you genuflect appropriately when you're called out.
The copyrights aren't particularly near expiration. AFAICT they don't start till around 2060. Barring changes in copyright law, the originals will go into the public domain at the same time they would have otherwise, even if the bowdlerized version remains protected.
And it will have no effect on the copyright of new adaptations into visual media, which is presumably where the real money is.
The announcement that the originals would be published by a separate imprint came after the outcry, and in response to it.
More, apparently owners of the books in ebook form are also seeing their copies updated, rather than retaining the books they bought or being given a choice between keeping Dahl's work and getting the unlabeled collaboration.
The response does seem to have dissuaded Dahl's US and European publishers from following suit with the changes, at least for now.
It may be that Dahl's sales are down, but thus far no one making that claim has presented sales data (that I've seen). Dahl's alleged unpopularity seems to be belied by the fact that they remain in print and keep being adapted into films and major stage productions.
The idea that works whose draw has always been their subversive nastiness will gain sales by being made less nasty at least calls for some evidence.
The discussion surrounding large language models (LLMs) and their relationship to AGI has been utterly horrendous. I believe LLMs and their intellectual descendants will be as transformative to society as the transistor. This technology deserves careful analysis and argument, not dismissive sneers. This is my attempt at starting such a discussion.
To start off, I will respond to a very common dismissive criticism and show why it fails.
>It's just matrix multiplication; it's just predicting the next token
These reductive descriptions do not fully describe or characterize the space of behavior of these models, and so such descriptions cannot be used to dismiss the presence of high-level properties such as understanding or sentience.
It is a common fallacy to deduce the absence of high-level properties from a reductive view of a system's behavior. Being "inside" the system gives people far too much confidence that they know exactly what's going on. But low-level knowledge of a system without sufficient holistic knowledge leads to bad intuitions and bad conclusions. Searle's Chinese room and Leibniz's mill thought experiments are past examples of this. Citing the low-level computational structure of LLMs is just a modern iteration. That LLMs consist of various matrix multiplications can no more tell us they aren't conscious than our neurons can tell us we're not conscious.
The key idea people miss is that the massive computation involved in training these systems begets new behavioral patterns that weren't enumerated by the initial program statements. The behavior is not just a product of the computational structure specified in the source code, but an emergent dynamic that is unpredictable from an analysis of the initial rules. It is a common mistake to dismiss this emergent part of a system as carrying no informative or meaningful content. Just bracketing `the model parameters` as transparent and explanatorily insignificant is to miss a large part of the substance of the system.
For the sake of sparking further discussion, I offer a positive argument for the claim that LLMs "understand" to a significant degree in some contexts. Define understanding as the capacity to engage significantly with some structure in appropriate ways and in appropriate contexts. I want to argue that there are structures that LLMs engage with in a manner that demonstrates understanding.
As an example for the sake of argument, consider the ability of chatGPT to construct poems that satisfy a wide range of criteria. There is no shortage of examples of such poems, so I won't offer one here. The set of valid poems sits along a manifold in high-dimensional space. This space is highly irregular; there is no simple function that can decide whether some point (string of text) is on the poem-manifold. It follows that points on the manifold are mostly not simple combinations of other points on the manifold. Further, the number of points on the manifold far surpasses the examples of poems seen during training. Thus, when prompted to construct a poem satisfying arbitrary criteria, we can expect the target region of the manifold to be largely unrepresented by the training data.
We want to characterize the ability of chatGPT to construct poems. We can rule out simple combinations of poems previously seen. The fact that chatGPT constructs passable poetry given arbitrary constraints implies that it can find unseen regions of the poem-manifold in accordance with the required constraints. This is generalizing from samples of poetry to a general concept of poetry. But still, some generalizations are better than others, and neural networks have a habit of finding degenerate solutions to optimization problems. The quality and breadth of poetry given widely divergent criteria is an indication of whether the generalization captures our concept of poetry sufficiently well. From the many examples I have seen, I can only judge that its general concept of poetry models the human concept well (at least as far as poetry that rhymes goes).
So we can conclude that chatGPT contains some structure that well models the human concept of poetry. Further, it engages with this model in appropriate ways and appropriate contexts as demonstrated by its ability to construct passable poems when prompted with widely divergent constraints. This satisfies the given definition of understanding.
>It's just matrix multiplication; it's just predicting the next token
This is not a criticism. This is an explanation.
The criticism is that LLMs repeatedly produce nonsensical or logically incoherent utterances, and can be easily and reliably induced to do so. Those are commonly handwaved away with "it's just growing pains, we just need to train them more", or something to that effect. What the skeptics are saying is that, no, in fact, those failures are fundamental features of those models, best explained by the models being just - to use Scott's terminology, if Gary Marcus's is offensive - simulators.
When an LLM proclaims that "a house weighs the same as a pound of feathers", it's better not to think of it as a reasoning error, but as a demonstration that no reasoning happens within it in the first place. It's just retrieving common utterances associated with "pound of feathers", in this case, comparisons to "pound of [something heavy]", and substitutes the terms to match the query.
When an LLM says that "[person A] and [person B] couldn't have met, because [person A] was born in 1980 and [person B] died in 2017, so they were not alive at the same time", it's not failing to make a logical argument, it's mimicking a common argument. It can substitute the persons' actual birth/death dates, but it cannot tell what the argument itself, or the concepts within it, represent.
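To make the mechanism I'm positing concrete, here's a toy sketch of it in Python. The names and templates are invented for illustration, and this is of course a caricature, not a claim about how any real model is implemented:

```python
# Caricature of the skeptic's picture: no reasoning, just retrieval of a
# stock utterance keyed on surface features of the query, plus term
# substitution. (Hypothetical toy; not how any real LLM works internally.)
STOCK_UTTERANCES = {
    "pound of feathers": "a {x} weighs the same as a pound of feathers",
    "couldn't have met": "{a} and {b} couldn't have met, because {a} was "
                         "born in {ya} and {b} died in {yb}",
}

def respond(cue, **terms):
    """Look up the utterance associated with the cue, substitute the terms."""
    template = STOCK_UTTERANCES.get(cue)
    return template.format(**terms) if template else None

print(respond("pound of feathers", x="house"))
# Fluent output, but nothing was weighed and no dates were compared.
```

On this picture, the output is grammatical and superficially argument-shaped, yet the "reasoning error" was never an error in reasoning, because no step of the process consulted the meanings of the substituted terms.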
And look, the argument may be wrong, you're free to disagree, but you need to actually disagree. You're not doing that. Your entire point boils down to, people are only saying [that one line you cherry-picked from their arguments] because they fail to understand basic concepts. Honestly, read it, it does. Now, if you want the discussion to be non-horrendous, try assuming they understand them quite well and are still choosing to make the arguments they make.
Not an explanation, but rather a description. People treat it as an explanation when it is anything but, as the OP explains.
>When an LLM proclaims that "a house weighs the same as a pound of feathers", it's better not to think of it as a reasoning error, but as a demonstration that no reasoning happens within it in the first place.
Failure modes in an LLM do not demonstrate a lack of understanding/reasoning/etc any more than failure modes of human reasoning demonstrate a lack of understanding/reasoning/etc in humans. This is an example of the kind of bad argument I'm calling out. It's fallacious reasoning, plain and simple.
>What the skeptics are saying is that, no, in fact, those failures are fundamental features of those models, best explained by the models being just - to use Scott's terminology, if the Gary Marcus's one is offensive - simulators.
The supposed distinction between a reasoner and a simulator needs to be demonstrated. The "simulated rainstorm doesn't get me wet" style arguments don't necessarily apply in this case. If cognition is merely a kind of computation, then a computer exhibiting the right kind of computation will be engaging in cognition with no qualification.
>but you need to actually disagree. You're not doing that.
I'm pointing out that a common pattern of argument does not demonstrate the conclusion it asserts. That is a sufficient response to a fallacious argument. Now, there's much more to say on the subject, but my point in the OP was to start things off by opening the discussion in a manner that hopefully moves us past the usual sneers.
>Failure modes in an LLM do not demonstrate a lack of understanding/reasoning/etc any more than failure modes of human reasoning demonstrate a lack of understanding/reasoning/etc in humans.
Failure of human reasoning does in fact demonstrate lack of understanding in humans.
I mean, I realize what you're actually trying to say - that an individual failure of an individual human does not disprove the potential for some humans to succeed. But that's exactly the fundamental issue with your line of argumentation - you're assuming the discussion is philosophical (and that a bunch of AI specialists literally don't understand the concept of emergent behavior, etc.), while it's actually empirical. Nobody denies neural networks can exhibit [whatever marker of general intelligence you choose], because proof by example: human beings. The whole disagreement is about whether the actually existing LLMs do. And, further down the line, whether the current direction of research is a reasonable way to get us the ones that do. (I mean, to reuse your own metaphor, you could, theoretically, discover working electronic devices by connecting transistors randomly. It does not constitute a denial of this possibility to claim that, in practice, you won't.)
>It's just matrix multiplication; it's just predicting the next token
This is as uncompelling a response as "computers are just flipping ones to zeroes or vice versa, what's the big deal?"
> The key idea people miss is that the massive computation involved in training these systems begets new behavioral patterns that weren't enumerated by the initial program statements.
Yes, I'm not sure why this isn't obvious. There's an adage in programming, "code is data". This is as profound as the equivalence between energy and matter. LLMs and other learning models are inferring code (behaviour) from the data they're trained on. In fact, a recent paper showed that a transformer augmented with external memory is Turing complete.
So basically, learning models could learn to compute *anything computable* if exposed to the right training set. What's particularly mind boggling to me is that it's often people familiar with programming and even learning models that are overly dismissive.
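The "code is data" point can be illustrated with a toy next-token predictor. In the sketch below (purely illustrative; real LLMs learn continuous parameters, not count tables), nowhere in the source is the rule "after 'the' comes 'cat'" written down, yet the trained model exhibits it, because the behaviour was inferred from the corpus rather than enumerated by the programmer:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count token-successor frequencies; the 'program' is inferred from data."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent successor seen in training, or None."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat": it followed "the" most often
```

Scale that idea up by many orders of magnitude, replace the count table with billions of learned parameters, and the gap between "what the source code says" and "what the system does" becomes the whole interesting question.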
OC LW/ACX Saturday (3/4/23) Space Georgism and Music as Human Aposematism
https://docs.google.com/document/d/1ZZiXyQNlYz3sfwRRmXxOM8T-hRA0ijRt2-HPe-eskzs/edit?usp=sharing
Hi Folks!
I am glad to announce the 20th of a continuing Orange County ACX/LW meetup series. Meeting this Saturday and most Saturdays.
Contact me, Michael, at michaelmichalchik@gmail.com with questions or requests.
Meetup at my house this week, 1970 Port Laurent Place, Newport Beach, 92660
Saturday, 3/4/23, 2 pm
Activities (all activities are optional)
A) The two conversation starter topics this week will be (see questions on page 2):
1) Georgism... In Space! Just and proper political economy for an interplanetary civilization
Georgism... In Space! - by Sam Harsimony
2) Music in Human Evolution | Melting Asphalt
https://meltingasphalt.com/music-in-human-evolution/
B) We will also have the card game Predictably Irrational. Feel free to bring your favorite games or distractions.
C) We usually walk and talk for about an hour after the meeting starts. There are two easy-access mini-malls nearby with hot takeout food available. Search for Gelson's or Pavilions in the zipcode 92660.
D) Share a surprise! Tell the group about something that happened that was unexpected or changed how you look at the universe.
E) Make a prediction and give a probability and end condition.
F) Contribute ideas to the group's future direction: topics, types of meetings, activities, etc.
Conversation Starter Readings:
These readings are optional, but if you do them, think about what you find interesting, surprising, useful, questionable, vexing, or exciting.
1) Georgism... In Space! Just and proper political economy for an interplanetary civilization
Georgism... In Space! - by Sam Harsimony
How did Georgism define economic land, and why is it important for tax policy in space?
Can you explain how Georgism can be applied to space resources like energy, matter, and physical space?
How might governments in space subsidize the collection of solar energy?
What are some difficulties with taxing matter in space, and how might taxes need to be adjusted to avoid distortions?
How can physical space in space be taxed, and what factors might be considered in assessing the value of land in space?
How can excludable resources like broadband spectrum and orbits be properly managed in space?
Why is it important to strike a careful balance when implementing taxation policies for space colonization?
2) Music in Human Evolution | Melting Asphalt
https://meltingasphalt.com/music-in-human-evolution/
Other explanations
https://en.wikipedia.org/wiki/Evolutionary_musicology
Have you heard this explanation of the evolution of music before?
What other explanations have you heard?
What do you think of it compared to other hypotheses?
Are they mutually exclusive?
https://www.youtube.com/watch?v=JbxE9myZrsg&ab_channel=Dazza
3D printing a frog pavilion
I've seen talk about ending wildlife suffering, a goal which seems impractical, but while we're at it, why not look at optimizing for pleasure for wild animals?
While ending wildlife suffering sounds like Fourier wanting to turn the seas into lemonade, that was a heartwarming video. I wonder what other approaches could there be to optimize pleasure for wild animals.
I'm extremely worried about AI risk lately, especially after the twin shocks of ChatGPT and Sydney. I want to do something to help. I live near DC and know people with political connections who could potentially help me get a policy meeting with a member of Congress. Two questions:
1. If I can get such a meeting, what specific policies should I propose? My own ideal is to ban all AI research, but I know that's an impossible ask.
2. Any insights on members of Congress who are particularly good targets to try and meet with specifically? I'm talking to my connections about this too, but I'm wondering if anyone here has knowledge specific to the AI risk field, in terms of which members may be receptive or have been receptive in the past, and have the power/motivation to try to do something about it.
1. Policy is a bit of an open question. This section of the online book Better Without AI (https://betterwithout.ai/pragmatic-AI-safety) may go into that, I haven't read that far yet. He has argued that AI mainly runs on hype, so if public perception turns negative, the whole thing would shut down. So policy may not be as effective as advertising.
2. Representative Ted Lieu wants to regulate AI (https://www.nbcnews.com/politics/congress/ted-lieu-artificial-intelligence-bill-congress-chatgpt-rcna67752), so maybe he has a clearer picture of who to reach out to in order to throw a wrench in the gears.
Fucking hell, I love that G. E. Moore was dumb enough to argue that holding up his hands in front of him and saying 'Here is one hand, here is another' was enough to prove the existence of an external reality. Jesus, I am going to murder every philosophical position I disagree with with that one.
You clearly didn't understand him if you think that was the sum total of his argument.
I later saw that there's some more to it, but the first link I saw said it was simply what I said. Actually, now that I saw the logical formulation, it really isn't better at all.
The trouble with appealing to common sense is that that is the death of philosophy. In particular, you really cannot appeal to common sense while trying to convince me I don't exist, because come on dude.
> In particular, you really cannot appeal to common sense while trying to convince me I don't exist
Per my other reply to you, I'm not trying to convince you that you don't exist, but merely that you don't have the existence you thought you did. This should not be surprising at all. Most cells in your body are replaced every 10 years. Are you the same person you were 10 years ago? What are the specific properties you ascribe to "existence" and "you"? This is not an obvious question, so why do you think you already have all of the answers?
As for Moore's proof, it's a simple argument in the end. Every argument for skepticism about the external world depends on concepts, evidence and even logic that we derived from the assumption that the external world exists. If you then turn that around to question the existence of an external world, you undermine all of the concepts, evidence and logic that form the core of your argument, so any such argument is self-defeating and necessarily less plausible than the external world just existing.
The existence I thought I did? I am perfectly willing to believe I am not actually physical, but that doesn't mean I don't exist.
Am I the same person from 10 years ago? No, but it is the same sentience. The witness has always been there.
About skepticism, it's at least partially derived from observation, but observing the world does not at all mean that the world is what it seems. I do think concepts, evidence and logic are all flawed and are just limiting viewpoints, useful to apprehend certain aspects of the truth but unable to contain all of it.
Likewise to eliminativism, first you had to read some philosophy to reach that conclusion, because you did not always believe you lacked consciousness. Your consciousness had to experience that philosophy, so eliminativism is using consciousness to overthrow consciousness.
> I am perfectly willing to believe I am not actually physical, but that doesn't mean I don't exist.
Nobody is claiming that you don't have some existence.
> Your consciousness had to experience that philosophy, so eliminativism is using consciousness to overthrow consciousness.
Nah, the information processes that constitute "me" threw out the fiction of consciousness after processing the information and being convinced by eliminativism and science as a whole.
It's not the same Moorean argument because what I'm throwing out is the qualia/phenomenal experience but not the perception, and the latter is all that's really needed.
The information processes did not throw out anything, you still have qualia, it's not like your waking life has become equivalent to deep sleep.
I had an interesting reaction the other day to an ACX post: it creeped me out. Severely. This became a whole post of my own:
Truth of Ice, Truth of Fire
https://squarecircle.substack.com/p/truth-of-ice-truth-of-fire
What's the best joke you've ever heard?
I'll start: One of my personal favourites is Norm Macdonald's gag about Bill Cosby on Comedians in Cars Getting Coffee. Everything about that joke is perfect.
That’s really tough. The best? How about the best Shaggy Dog joke? I think I can put my finger on that one. Norm MacDonald on Conan:
https://m.youtube.com/watch?v=jJN9mBRX3uo
One of my favorite Norm Macdonald jokes:
No one ever said "life is fair"...
except for the russian
working at the fair
Truly awesome. A friend of mine used to play poker with Norm - he said he was so funny no one noticed he was winning. Perhaps they didn't care. Poker Patreon?
Works best told with a mild russian accent
but I can't do a russian accent in real life nor in text
Another favorite of mine, but works best if you're familiar with the joke it's riffing on:
https://www.newyorker.com/magazine/2013/11/18/guy-walks-into-a-bar
The ‘life is fair’ one reminds me of Melania Trump’s ‘Be Best’ campaign. No definite article in Slovenian either.
For what it's worth, I didn't think that one was even remotely funny.
I have no idea whether it's worth having a second opinion on whether it's worth clicking on a short link.
I never saw that one. That's classic, though!
https://youtu.be/ljaP2etvDc4
Ricky Gervais and Seinfeld discussing the philosophical layers of holocaust jokes is pretty funny, too...
https://www.youtube.com/watch?v=k_3Q9X03Yeg
But here's the only joke I can remember...
A dyslexic walks into a bra...
I can tell that one, because I'm dyslexic, and I have.
The second joke is one I like.
The Last Laugh is a documentary about holocaust humor, and very good.
I still think about that "guess you had to be there" joke.
F-ing brilliant. Note how Jerry is all over the provenance -- he knows it's a gem. I suspect it's Ricky's joke, but by saying he doesn't remember where he heard it, Jerry can't ask for it... Comedians are fiercely competitive.
Considering how many jokes I've heard you'd think I'd be able to remember them better. Also depends what counts as jokes; the funniest stuff tends to be improv or machine translation games.
Not the best, but one that always stuck with me was from Mitch Hedberg. "I went to the store to buy candleholders, but they were out of candleholders. So I bought a cake."
I was reading through an old bookmark and saw that Scott was still looking to review Nixonland eventually. I read the book myself after that banger book review, but found that it's actually part of a quartet:
1. Before the Storm: Barry Goldwater and the Unmaking of the American Consensus (https://www.amazon.com/Before-Storm-Goldwater-Unmaking-Consensus-ebook/dp/B0087GZE32) - "the 60s" (until Watts?)
2. Nixonland: The Rise of a President and the Fracturing of America (https://www.amazon.com/Nixonland-Rise-President-Fracturing-America-ebook/dp/B0013TTKL2) - ~Watts until Nixon's reelection
3. The Invisible Bridge: The Fall of Nixon and the Rise of Reagan (https://www.amazon.com/Invisible-Bridge-Fall-Nixon-Reagan-ebook/dp/B00HXGD5CE) - Nixon's reelection until Ford beat Reagan for the GOP nomination
4. Reaganland: America's Right Turn 1976-1980 (https://www.amazon.com/Reaganland-Americas-Right-Turn-1976-1980-ebook/dp/B083SS4251) - Carter beats Ford for '76 until Reagan beats him in '80 (?)
I've not read Before the Storm or Reaganland yet, but Nixonland flows near seamlessly into Invisible Bridge and I imagine the other two books segue cleanly too. I'd napkin that the whole thing is close to 150 hours of audiobook.
Even though Perlstein is quite partisan (he wrote an '05 book on how the DNC could take center stage again), I'm not sure anyone else had even attempted to write a consolidated history like this of American society/politics/discourse/mood for such periods of time.
I'd grown up around adults who made mention of things like Woodstock and Watergate or the '68 DNC riot, and even someone who still couldn't believe we'd elected an actor (Reagan) as president. It was all ancient history, with the current era being written on TV as and after the towers fell. Reading another generation's political experience connected *a lot* of dots in my zoomer mind. I'd defo push these books at any zoomer interested in politics frfr (or anyone seeking to understand the boomers).
Relevant to Scott and any Unsong readers:
"[Purim Torah]: Should we be afraid of Artificial Intelligence?"
https://judaism.stackexchange.com/questions/133443/ptij-should-we-be-afraid-of-artificial-intelligence
So the aspirin might prevent heart attack thing.
I know there has been back and forth on this, and it is not the wonder drug it was thought to be for a while, but I've been taking a single aspirin before going out to shovel a large amount of snow for a couple of years now, as a sort of voodoo ritual to protect myself.
I'm 70 now but have no diagnosed cardio issues, and have good cholesterol numbers, good blood sugar, healthy weight, good blood pressure, and an echocardiogram shows no buildup of bad stuff in my heart. In the summer I still ride decent mileage on my bike, though my 'century' rides are all metric now. I also sit and crank my old Schwinn on a turbo trainer in the winter until I think I'm risking dying of boredom.
But... when my dad was my age he died from his third heart attack so I figure taking an aspirin before I go out and perform what used to be a pretty reliable widow maker chore, hell, it can't hurt.
Does this make sense to anyone who actually knows about this stuff?
What you want to be careful about is the fact that aspirin has a significant anti-clotting function, and one thing you *don't* want to have happen is a brain bleed that doesn't get sealed off right away, i.e. a hemorrhagic stroke, either a big one, or even the very small kind that you don't notice but which degrades your brain over time.
I don't have any useful advice about this, this is something you want to discuss carefully with a physician, I'm just observing the tradeoffs. People dismiss the anticlotting issues with aspirin because they think "oh a GI bleed is less scary than an MI" but they forget about the stroke possibility.
I haven't looked at the studies. But this...
https://www.hopkinsmedicine.org/health/wellness-and-prevention/is-taking-aspirin-good-for-your-heart
No idea, but the placebo effect is real.
The Biden Administration is obviously going to lose in the Supreme Court regarding the student-loan cancellation. And on the merits they should lose, though this Court would rule against them regardless.
Both the president and his staff know that's what's going to happen, and privately don't actually mind it. The whole thing is kabuki theater.
I have been getting a lot of usage out of the saying "if you're not going to do something, you might as well not do something big" recently. Everybody in politics seems to be following that mantra these days.
Heh!
Having done a bit more reading on the case I would adjust my prediction somewhat. The standing of the two sets of plaintiffs, one in particular, appears a good deal weaker than I was aware of. I'd still wager in favor of the Court ruling against the administration. But there does seem to be some fair chance of 5 justices agreeing that the plaintiffs' standing to sue over this topic just doesn't pass a smell test, leading to a procedural ruling rather than a merits one.
https://www.washingtonpost.com/wellness/2023/02/27/acl-injuries-brain-neural-connections/
Training people to pay attention to their knees during rehab might be the wrong thing. Good knee mechanics need to not take cognitive resources.
I only read about half of that article, but I think that may be the wrong take. People with bad mechanics rely on conscious thought, possibly in the same way that chess masters instinctively know the good move while amateurs have to think quite hard. But chess masters go through a long stage of thinking hard about every move before reaching that level, and "don't think too hard" would be terrible advice for someone looking to improve.
Similarly, focusing on your muscular form etc is often an important part of building good movement habits. The end goal should be to make it unconscious but that's not necessarily the way to get there.
Thoughts? Did I stop reading too early?
As I understand it, the problem was that people were being told to pay attention to their knees during rehab. Perhaps the issue is that they weren't brought to the level of unconscious competence.
https://news.uci.edu/2023/02/27/early-life-stress-can-disrupt-maturation-of-brains-reward-circuits-promoting-disorders/
This is a mouse study.
"“We know that early-life stress impacts the brain, but until now, we didn’t know how,” Baram said. “Our team focused on identifying potentially stress-sensitive brain pathways. We discovered a new pathway within the reward circuit that expresses a molecule called corticotropin-releasing hormone that controls our responses to stress. We found that adverse experiences cause this brain pathway to be overactive.”
“These changes to the pathway disrupt reward behaviors, reducing pleasure and motivation for fun, food and sex cues in mice,” she said. “In humans, such behavioral changes, called ‘anhedonia,’ are associated with emotional disorders. Importantly, we discovered that when we silence this pathway using modern technology, we restore the brain’s normal reward behaviors.”
So there's this cluster of positions called "Illusionism" which is about doubting the existence of consciousness to various degrees, whatever that means exactly. I'm very interested in understanding how people think about this in more detail, so if anyone here is sympathetic to that set of ideas, I'd like to hear from you! Like, what is consciousness, what exactly does your position say about it, and why do you think that? (And if it's applicable, what do you see as the objections to that position, and why are they unconvincing?)
If it's relevant, I'm mostly through Dennett's "consciousness explained", and I think I understand his model quite well.
I'm very sympathetic to illusionism but I think it ultimately must fail. I go into some detail here if you're interested: https://www.reddit.com/r/naturalism/comments/zr6udy/a_challenge_to_illusionism_as_a_theory_of/
I'm a borderline eliminativist along the lines of Dennett. I think Graziano's framework for thinking about this on a neuroscience level is compelling:
https://www.pnas.org/doi/10.1073/pnas.2116933119
Eliminativism is specifically about the ineffability of mental states, the "what it is like". The illusion of consciousness might be like the "illusion" of solidity. We know solid objects are mostly empty space, but this doesn't eliminate the category of "solid", it just reframes what "solid" means. Somewhat analogously, we know that ineffable qualia are incompatible with a scientific picture of reality, and rejecting the naive first-hand "ineffability" as deceptive will permit us to reframe and properly understand what's going on with "consciousness".
I think this "ineffability" is the same sort of woo that so troubled philosophers a century ago when they were trying to explain how inanimate matter could lead to life, and so they invented vitalism. Vitalism is nowhere to be seen now because progressive improvements in our understanding of the mechanisms of life shrunk the explanatory gap to the point where it seemed implausible that there would be anything left to explain once that process neared completion. I think the same pattern will repeat with consciousness.
I think the objections to this are well-known: p-zombies, Mary's room, etc. P-zombies just aren't conceivable. A p-zombie world in which all of our current philosophers arguing about consciousness are just automatons lacking this "ineffability" is observably indistinguishable from our current world; asserting that this is not actually our world just smacks of assuming the conclusion that this ineffability exists to begin with. We lose nothing meaningful by just accepting we are those automatons. I don't find this particularly objectionable because I'm also a Compatibilist about free will.
For Mary's room, I think the whole argument rests on her having "all knowledge". Basically, Mary has so much knowledge that she is able to answer an infinite series of questions about any conceivable topic that reduces to physical characteristics. Anytime you bring in some kind of infinity you start getting unintuitive results, and humans are notoriously bad at intuiting correct answers in such contexts. I think this is just another example.
Even if this were overcome in some revised Mary's room, I think there are a lot of reasons why Mary could still be surprised upon seeing red the first time (like the ability response), which are compatible with physicalism. There just isn't anything convincing there.
What are the physicalism compatible reasons Mary could still be surprised by red?
I've referenced the ability hypothesis elsewhere in this thread, so that's one example:
https://en.m.wikipedia.org/wiki/Knowledge_argument#Ability_hypothesis
"For Mary's room, I think the whole argument rests on her having "all knowledge". Basically, Mary has so much knowledge that she is able to answer an infinite series of questions about any conceivable topic that reduces to physical characteristics."
I think that's a misleading analogy. The axioms of Peano arithmetic are finite and not even lengthy, but they still allow you to answer an infinite series of questions.
> The axioms of Peano arithmetic are finite and not even lengthy, but they still allow you to answer an infinite series of questions.
Indeed, and yet almost every mathematician in the world was shocked when Gödel effectively proved that Peano arithmetic was necessarily incomplete, which proves my point that humans are generally not great at intuiting results when infinities are involved, even when they're experts.
But you haven't shown that infinities are involved in a relevant way.
Nor has anyone shown that Mary's room is self-consistent such that the infinity doesn't matter, so we're once again at the place where an intuition pump purporting to prove the existence of qualia does nothing of the kind.
How about saying what the inconsistency is, if you think there is one?
"A p-zombie world in which all of our current philosophers arguing about consciousness are just automatons lacking this "ineffability" is observably indistinguishable from our current world; asserting that this is not actually our world just smacks of assuming the conclusion that this ineffability exists to begin with. "
A zombie world would only look the same objectively. It would feel different subjectively to everyone in it, i.e. it would feel like nothing. Ignoring the subjective evidence amounts to ignoring consciousness, and is itself question-begging... The argument is then: there is no consciousness, so we are all zombies already, so there is no consciousness.
Yeh. I can't be sure that you aren’t a zombie but I’m not.
Funny, that's exactly what a zombie would say.
No the zombie wouldn’t be conscious. If your definition of being a zombie is “something that thinks they are conscious” then that’s me, but then a zombie is indistinguishable from a conscious being.
There’s no science here - I think you are more “philosopher” than scientist anyway - basically all you are saying, in a verbal excrescence, is that we can’t prove qualia therefore they don’t exist.
> If your definition of being a zombie is “something that thinks they are conscious” then that’s me, but then a zombie is indistinguishable from a conscious being.
That's the whole definition of a p-zombie. They are physically indistinguishable from so-called "conscious beings", talk, walk and speak exactly the same things as conscious beings, but instead of being conscious and talking about consciousness, they're just completely wrong about their own mental states while asserting they have consciousness.
> Ignoring the subjective evidence amounts to ignoring consciousness, and is itself question begging.
Only if you count subjective perceptions as reliable evidence. They are demonstrably not.
> The argument is then: there is no consciousness, so we are all zombies alreday, so there is no consciousness.
Not really, it's more like: why posit more entities than are necessary to explain the data? It's more parsimonious if p-zombie world were just our world and we're just mistaken about consciousness, and perceptions are commonly mistaken, so I should epistemically prefer "no p-zombies".
Said another way, what's more plausibly correct: our most successful and effective method of explaining and predicting the natural world, science, or our subjective perceptions which science has proven are demonstrably flawed in innumerable ways?
After all, you can't actually prove that consciousness exists without simply referencing your perception of internal conscious experience. All of the logical arguments purporting to demonstrate its existence are fatally flawed. Unless you can prove it, there is no reason to posit its existence. That said, I agree a mechanistic theory should explain why we *believe* we have consciousness, and I think Graziano's paper which I referenced above is a great start on that.
So I've been reading this back and forth (thanks for having it!) and I feel like I understand the models of both sides and why you seem to be talking past each other, but mb I'm completely wrong! But here's an attempt to explain them. If anyone wants to give feedback on this, I'd be very interested.
I think the implicit disagreement that basically generates the arguments on both sides is what you take as the epistemic starting point. (I'm gonna represent both sides by Alice and Bob here so that if I misrepresent someone it's less bad, also one side was argued by multiple people.) So Alice would argue that the way you reason about the world -- not about consciousness specifically, but about everything -- is that you have patterns of qualia and then react to them. That's how you navigate the world; you see qualia of something, like the image of a car in your visual field, and you react to it somehow. So the epistemic chain is something like [qualia] -> [interpretations of qualia based on experience/knowledge/intelligence/whatever] -> [conclusions about the world]. This qualia could be a non-material thing that acts on the body, or another aspect of the material stuff in the body; the distinction actually doesn't matter here.
So consequently, the qualia takes epistemic primacy. That's your *starting point*. Which means any theory has to explain qualia first and foremost. And it's a-priori impossible for science to rule out the existence of qualia because everything science does is *itself* based on qualia. If you're working in a lab, you don't start from the measurement itself; your starting point when you look at the display of an instrument is the *qualia* of the result that appears in your visual field, and then you conclude stuff based on that.
In particular, the argument from the paper,
> (1) for everything we know, there must be information in the brain
> (2) the brain's models are always fallible
> (3) therefore, our "knowledge" that we have qualia is fallible
completely misses the point, as does the illusionist framing in general. The reason is that all of these arguments only show that you can be misled about *what an input means*; that's what an illusion is. None of them show that the *input itself* is non-existent, which is the only thing that matters! Alice isn't claiming that her qualia are evidence for any particular thing in the external world -- such a claim could indeed be wrong! -- she's only talking about the qualia itself, and the argument above doesn't show how that could be nonexistent, and neither does any other argument made here. Again, it doesn't even make sense to her because *everything* you find out about the external world is itself based on qualia; it all starts at the same point.
Meanwhile, Bob identifies himself not with qualia but with his entire body as an information processing system. The qualia thing isn't the epistemic starting point; it's an *output* of the information processing system that is Bob (or anyone else). So conversely, the argument "you need to explain why there seems to be experience" misses the point because, well, the "seeming of experience" thingy is also just an output of the information processing system that is you. So you do have to explain why *the system produces this output*; you have to explain why Alice talks about 'kon-shush-nuhs' but you *don't* have to explain the experience thingy itself, because that's just something the information processing system talks about; it doesn't have to be true.
(This is like Dennett's Heterophenomenology; we treat the output of the information processing system like a fictional story; we assume it's telling the truth but that just means we assume it thinks it has this thing; we don't give the thing it talks about special epistemic status. The part that's confusing to Alice here is that you even model *yourself* from this third-person perspective, sort of.)
So as long as Alice waves her hands and stubbornly repeats that *no you really have to explain this experience thing, it's really there*, all that misses the point for Bob because it all assumes that the qualia is the epistemic starting point, which it isn't; again it's just an output. The only evidence that *would* count is, basically, anything that's valid from a third person perspective. So if we found that modeling an experiential component of qualia actually does a wonderful job explaining human *behavior*, that might be valid evidence. Or if we argue about the a priori *complexity* of a universe with qualia in it, that could be relevant for the prior we can assign to both hypotheses. Or if we can take the qualia hypothesis and use it to predict something about the neuro-anatomy about the human brain, something about how the brain processes high-level information on a functional level, that would be impressive. But appeals to the epistemic primacy of qualia aren't.
Does this sound right? I feel like if it is, then neither side has really provided evidence that's compelling the other side -- understandably so!
I don't think the qualiaphilic side needs to lean very far towards the primacy of qualia, so long as consciousness is not ignored. In a way, what is epistemically primary is some notion of experience or perception including, but not limited to, qualia.
"if we can take the qualia hypothesis and use it to predict something about the neuro-anatomy about the human brain, something about how the brain processes high-level information on a functional level, that would be impressive"
But it's still not invalid to say that qualia exist without having novel causal properties, so long as they are identical to something else... if qualia are not an entirely additional ontological posit, they do not have to justify their existence with novel causal powers.
Yeah, I took qualia as a standin for "subjective experience of any kind"
> But it's still not invalid to say that qualia exist without having novel causal properties, so long as they are identical to something else... if qualia are not an entirely additional ontological posit, they do not have to justify their existence with novel causal powers.
Not novel causal powers, but perhaps causal powers period -- even if they're also explainable in material terms?
I think you have the general contours of the situation. Some people take the primacy of qualia as a given, that it's a new kind of knowledge that simply cannot be questioned because it can be directly experienced.
This seems to inherently beg the question to me. Scientific study always starts with "working definitions" that serve as a good starting point but require refinement as we develop a more complete, coherent picture of what's going on. We started with the premise that qualia exist because we had no reason to question their existence.
So we did our science and ultimately ended up where we are today, with a scientific picture of the world that is *incompatible with qualia*. This state of affairs *requires revision to our assumptions and basic definitions*, as it would in any other scientific study, and we can devise alternative explanations, like eliminative materialism, that resolve this problem. But because Alice takes qualia as axiomatically true this is not a solution but an absurd kind of heresy, and rather than question the primacy of qualia, she would prefer to question various assumptions like reductionism or materialism.
There is no logical argument or evidence definitively demonstrating that qualia must exist or must have primacy, and the only argument from my side is epistemic parsimony and a recognition that nothing has been as effective as science at explaining the natural world of which we are a part.
Edit: to clarify, in some sense I understand Alice's argument that science is built on qualia and therefore you cannot question qualia with science because that then undermines the very science you're using as proof, so that's self-defeating. The response to this that I've posted a few times now is that qualia are not actually essential to science, you need only perception. A machine learning algorithm with a sensor can do science, so the experiential part is not strictly necessary.
> So if we found that modeling an experiential component of qualia actually does a wonderful job explaining human *behavior*, that might be valid evidence.
Yes, and we do have evidence that qualia are not integral but merely components of an information processing system. Phenomena like blindsight show that humans reliably report objects in their blind spot even when they don't consciously experience it. This is clear evidence that conscious experience and qualia are simply not what we perceive them to be, that they are merely a component of a system.
Who said that qualia are not components of a system? How do we perceive them?
> Edit: to clarify, in some sense I understand Alice's argument that science is built on qualia and therefore you cannot question qualia with science because that then undermines the very science you're using as proof, so that's self-defeating. The response to this that I've posted a few times now is that qualia are not actually essential to science, you need only perception. A machine learning algorithm with a sensor can do science, so the experiential part is not strictly necessary.
I think in the Alice model, it is possible to do science without qualia, but the evidence that you get from science -- and even the evidence that you are doing science at all -- is again qualia.
Anyway, I feel like this does point to a possible way that the problem could be resolved in principle. Like, Alice and Bob could agree that they can't agree on the epistemic starting point, so they could take the scientific validity of qualia as a crux. It'd be up to Alice to explain (a) how qualia works at all, (b) how a universe with qualia is philosophically simple, (c) how various phenomena like blind sight or the moving color thing from Dennett are compatible with a theory of qualia, and (d) how qualia makes functional predictions about the brain. If she could do all that, it ought to convince Bob that qualia exists after all.
>ultimately ended up where we are today, with a scientific picture of the world that is *incompatible with qualia*.
You haven't shown that.
There's an argument against dualist theories of qualia, based on physical closure, where qualia would have nothing to do, and are therefore an unnecessary posit.
There's an argument against identity theory based on irreducibility. You haven't even mentioned it, in line with your tendency to ignore identity theory.
So that's two arguments against two theories of qualia. They don't add up to an argument against qualia unless they are exhaustive.
"This is like Dennett's Heterophenomenology; we treat the output of the information processing system like a fictional story; we assume it's telling the truth but that just means we assume it thinks it has this thing; we don't give the thing it talks about special epistemic status"
But "qualia" as such doesn't appear in naive phenomenological reports because it is a philosophical term of art. The naive theory is that colours, etc, are properties of external objects that are perceived exactly as they are. Naive realism, as its called, is unsustainable scientifically because science requires a distinction between primary and secondary qualities. In addition, there are specific phenomena, such as blindsight and synaesthesia , where qualia are missing or unusual. Qualia aren't uniformly rejected by scientists. for all that some philosophers insist they are unscientific.
Objective, scientific data aren't a naive starting point either. Scientific objectivity has to be trained, and the process consists of disregarding the subjective and unquantifiable -- which has to exist in the first place, in order to be disregarded!
> Objective, scientific data aren't a naive starting point either. Scientific objectivity has to be trained, and the process consists of disregarding the subjective and unquantifiable -- which has to exist in the first place, in order to be disregarded!
This begs the question. Focusing on the quantifiable and objective is not an implicit assertion that the subjective and unquantifiable exists, it is an epistemic stratagem to focus on that which *can* be quantified *at this time*, and progressively build understanding to a point where the previously unquantifiable can then be quantified.
The opposite is true. Given that everybody has an internal conscious experience then that has to be explained. Any science that doesn’t explain it isn’t a science. In fact it’s just hand waving because we don’t understand the brain.
The belief in conscious experience has to be explained, the experience itself is a fiction. I recommend reading Graziano's paper.
> Only if you count subjective perceptions as reliable evidence. They are demonstrably not.
If subjective perceptions are not reliable evidence, aren't you knocking out all of science? There is no science if we can't observe reality.
> If subjective perceptions are not reliable evidence, aren't you knocking out all of science? There is no science if we can't observe reality.
I've responded to this point elsewhere in this thread as well, which I'll reproduce here:
> Science also does not require fully reliable perceptions or senses because it can quantify the unreliability via repeatability, and 1) restrict itself to the narrow domains in which perceptions are reliable, and 2) project measurements from unreliable or undetectable domains into the reliable domain. That's what instruments are for.
But how are you discerning which perceptions are reliable? And even if perceptions are unreliable, there is still the fact that we perceive. Reality could all be an illusion, but the illusion is being presented to someone.
> Said another way, what's more plausibly correct: our most successful and effective method of explaining and predicting the natural world, science, or our subjective perceptions which science has proven are demonstrably flawed in innumerable ways?
sorry to jump in, I'm just curious, does this sentence imply that all [theories of consciousness under which consciousness is real] necessarily contradict science? Like, there's no way to have a theory that posits the existence of consciousness but is consistent with the laws of physics (and hence science/effective explanations/etc.)?
This would sort of mean that a second, implicit reason why you like the Dennett approach is by process of elimination; all the alternatives are bad.
Strictly speaking, no. For instance, panpsychism would not require changing any natural laws to explain things we've seen; it might simply require us to accept that every posited entity carries with it some speck of consciousness, and that natural laws will aggregate consciousness in various unobservable (subjective) ways. Human brains are then an aggregation of consciousness that can finally reflect on and understand consciousness itself.
If you consider first-person consciousness to be irreducible to physical facts, that's probably an elegant way to recover science with a somewhat unverifiable component. Seems more plausible to me that we're just mistaken about our own mental states.
Re: process of elimination, in a way, yes. I go into that below in my thread with The Ancient Greek. It's just epistemically more justifiable in so many ways.
But panpsychism is clearly ridiculous. Consciousness is linked to brains. I think I could discourage panpsychists from their beliefs by asking them would they prefer to be shot in the leg or the brain.
The argument only requires that you can introspect subjective states that you can't fully describe. It doesn't require that subjective states are accurate representations of anything beyond that. In particular, non-physicality is not asserted purely on the basis of introspection.
The data include the subjective data, unless you are begging the question by ignoring that.
It would also be parsimonious if only consciousness existed, and matter were an illusion. Parsimony does not imply a unique ontology.
You can't prove matter exists without referencing your own experience.
> The argument only requires that you can introspect subjective states that you can't fully describe.
"Can't fully describe" is just a god of the gaps argument.
> The data include the subjective data, unless you are begging the question by ignoring that.
I'm not ignoring subjective data, I'm saying we have ample reasons to consider it unreliable, therefore we cannot derive any reliable conclusions from it until its reliability is quantified.
> It would also be parsimonious if only consciousness existed, and matter were an illusion.
I disagree. Build a formal model of consciousness and then we have a basis for comparing its parsimony to the standard model of particle physics. We have no such thing, therefore this is no different than saying "god did it". The number of logical properties we must then assign to god/consciousness dwarfs the standard model.
> You can't prove matter exists without referencing your own experience.
"Experience" implicitly smuggles in qualia. I would only agree with the phrasing, "You can't prove matter exists without referencing your own perceptions", because perceptions don't implicitly assert that conscious experience exists.
Consciousness is required to argue the non-existence of consciousness. P-zombies on their own wouldn't suddenly start arguing about the existence of consciousness and qualia without being programmed to do so by some conscious entity.
In fact, the whole enterprise of science depends on our consciousness interacting with our qualia. You might argue, as Nagarjuna did, that consciousness and qualia have no existence in and of themselves, and are instead the emergent phenomena of the interaction of underlying processes—and those processes, when examined will be seen to have arisen from deeper processes—ad infinitum. However, Nagarjuna didn't stop there. He was willing to admit that the "illusion" of mind and qualia (generated by the underlying processes) was as functionally real as the underlying processes.
And invoking parsimony doesn't move your argument along. The Law of Parsimony implies that there is a correct explanation for a phenomenon. Saying there is nothing to explain is not parsimony, it's just refusing to consider the problem.
Also, the Mary's Room experiment has been done. Not that it really helps to resolve the philosophical loose ends...
https://www.npr.org/2014/01/11/261608718/wearable-sensor-turns-color-blind-man-into-cyborg
> "Can't fully describe" is just a god of the gaps argument.
I don't see why. It's just a direct observation, not intended to explain anything else.
> I'm not ignoring subjective data, I'm saying we have ample reasons to consider it unreliable,
You haven't given any.
And remember, neither physicalism nor parsimony requires you to be an eliminativist about consciousness, since identity theory is a thing.
Thanks! That makes a lot of sense to me. Very in line with Dennett. Also listened to the paper you linked, which fits as well.
(fwiw I totally agree that the two objections you listed are extremely unconvincing. I even consider p-zombies an argument in the opposite direction; if your theory permits the existence of p-zombies, that's a problem.)
One thing I'm wondering, if you're willing to elaborate, is how you square this picture with morality. If qualia doesn't exist, then consciousness either doesn't exist or is just a name for a high-level process; either way there's no actual experiential component to the universe; no "what it is like". (Or do you disagree with this?) This seems to imply there's no suffering. Do you just have a moral theory that works without any conscious states? Can you have suffering without 'what it is like' type experience? Or does it imply ethical nihilism?
Regarding ethics, I just came across a pretty good article which goes into eliminative materialism and ethics:
https://longtermrisk.org/the-eliminativist-approach-to-consciousness/
> This seems to imply there's no suffering.
I don't think it implies there is no suffering, it simply reframes what suffering is, similar to my solidity example. Solidity is not the absence of empty space, it's just a different property, like the inability to pass another solid through it (roughly); analogously, eliminating ineffability doesn't entail the absence of pain or suffering, pain and suffering are simply understood to be something else, like evolved preferences that avoid damage that harms our fitness. That still sounds like enough to ground a utilitarian ethics to me.
Other ethical frameworks don't rely on preferences or values in the same way, so I don't think there's a problem there, e.g. deontology or virtue ethics.
Good article from the Atlantic about the "updating" of Dahl's works
https://archive.md/ABuP0
Why is this being done? If Dahl's works are so flawed, and so many of the passages need to be edited to the point of losing Dahl's characteristic nastiness and not even being recognizably Dahl any more, why not just toss the whole thing out? What's the point of keeping something so flawed?
The obvious answer is that modern corporations and woke writers are so bereft of genuine creative talent that even a dreadfully unprogressive straight white Englishman born over 100 years ago was creating categorically better art than all these modern 'enlightened' fools could ever dream of making themselves (or at least, if they don't recognize Dahl's actual greatness, they certainly acknowledge the enduring popularity his works have that their own works do not).
The edits made to Dahl's books feel to me like PR stunts that are intentionally stupid in an attempt to invoke toxoplasma of rage. I find it really hard to believe that anybody sincerely thought a book where one of the heroes famously owns a factory run by African slaves could be made to seem progressive by replacing the word "fat" with "enormous".
The hero doesn't "famously own a factory run by African slaves" though does he? Because that bit of the story had already been changed in 1972.
I'm 56 and I was unaware of the Oompa-Loompa origin story (until just now) even though I read the book as a child.
But you are talking as if the fat/enormous edit were going ahead whilst leaving the African slave part unchanged.
Do you think that the book shouldn't have been changed (presumably by Dahl himself) in 1972?
You're right, I had forgotten about that. But even the 1972 version still has them being slaves from *some* foreign location, right? It's just left open-ended what continent they came from originally.
No, it had been changed quite a bit, I think. The story was that Wonka had "rescued" them from some terrible life of persecution and there was some sort of symbiosis in their working for him in the chocolate factory. But just looking at stuff online about the African version of the story (which I've not read), it also sounds like there was a bit of justification in the story of him "rescuing" them as opposed to capturing them in the way we know that many actual slaves were captured.
I wonder how many of the people protesting about the current sanitisation of the book know about the previous rewrite and whether in hindsight they would think that was a good or bad thing?
I read The Coral Island by RM Ballantyne to my kids when they were under 10 years old. It was written in 1857 and I have a very old copy that was given to my grandfather when he was in Sunday school.
It's a ripping adventure of boys stranded on a desert island and it also contains the N word in a description of marauding cannibals that come to the island.
When we came across that use of the word we were able to have a very useful discussion about it, including the idea of how language changes over time and why words that were thought to be innocuous in one place and time can be hurtful in another context.
Personally I think that simply changing an author's original text is going a bit too far, but perhaps this controversy will at least stimulate a bit of conversation between children and their parents about the importance of context in the use of any and all words.
But editing children's stories isn't a new thing and I remember a similar level of discussion (in the UK at least) when the gollywog character was edited out of the Noddy universe. I'm sure that some people who are on one side of the argument here might have been on the other side in that case.
It might also be interesting to think about why nobody seemed at all put out about terrible film versions of Dahl's stories, or wondering why it's just fine to express his ideas in a rather better musical but without the use of the offending words.
Context is everything.
If it's not a case of vanilla censorship but the parallel production of institutionally approved alternative versions of books deemed problematic, then aren't we in "Hegelian Wound" territory a la Žižek?
E.g.: First you have a natural state of things, the original Dahl ideas and writing. Something comes along and disrupts this state by imposing its own values/agenda - the Updaters - and they inflict a wound on the original. But Hegel comes in and says: wait, this wound is not fatal; actually it is a wound that contains the vehicle for its own healing and transcendence. See, were it not for the attempt at vulgar re-writing, the original writing would not have a context in which to demonstrate its own inherent virtue and value to society. The wounding makes the original stronger, in ways that previously were not thought possible.
Who's the "institution" doing the approving? Is it the publisher? If so then every book Puffin Books ever published after requesting changes from the author (i.e. the normal editing process) was an "institutionally approved alternative version"
An alternative answer would be that the updaters believe that Dahl's books lie somewhere in the grey zone between "unsalvageably old-fashioned" and "better than all modern children's fiction", and that the update will help sell more books to woke parents.
Philip Pullman agrees. The books should be allowed to go out of print, and people should buy his books instead.
Philip Pullman worked very hard in His Dark Materials to make Satan seem like the good guy, and while there is something compelling in his vision, I am suspicious of someone who wants to make the Prince of Lies into a rebel hero, and God into a petty dictator.
I mean, I basically believe the YHVH of the Old Testament is insane, but Jesus still loved Him, so I don't think it's quite as straightforward as Pullman presents.
There was an article c. 25 years ago which posited that Pullman was trying to write the anti-Narnia Chronicles.
I didn't read Pullman, but my son read at least one (The Golden Compass?) and found it mediocre.
I felt the same way reading Ayn Rand many years ago.
It sounds as though Pullman spent too much time moralizing and not enough writing an entertaining story.
I've read all his books. So has my wife who's a teacher with a particular interest in children's literature. So have my kids.
We all thought it was pretty entertaining.
I could have almost been the inspiration for Augustus Gloop when first reading Charlie And The Chocolate Factory when I was 11.
And yet, I loved the story, and read the also-wonderful James And The Giant Peach shortly afterwards. It never occurred to me to be offended by either book.
I was much more aware of Dahl's edginess when reading his books to my children years and years later.
> What's the point of keeping something so flawed?
If you're the beneficiaries of the Dahl estate, the benefits are obvious.
The whole thing is probably best seen in the context of sacrifice. In this case, the spotlight has swung around to Dahl's privately expressed views about Jews, and a sacrifice was necessary to appease the powers that be. You can't change Dahl's opinions about Jews, but you _can_ change his published books, so you do that, and the spotlight moves on somewhere else for now.
It doesn't matter what the changes are, it just matters that you genuflect appropriately when you're called out.
I don't know, I started hearing about it a couple of months ago, but this could be Baader-Meinhof at work.
One theory which sounds plausible: https://www.theguardian.com/commentisfree/2023/feb/26/updating-roald-dahl-same-old-story-david-mitchell (making textual changes means a fresh copyright term to monetise)
The copyrights aren't particularly near expiration. AFAICT they don't start till around 2060. Barring changes in copyright law, the originals will go into the public domain at the same time they would have otherwise, even if the bowdlerized version remains protected.
And it will have no effect on the copyright of new adaptations into visual media, which is presumably where the real money is.
It's hard to see much of an angle there.
The announcement that the originals would be published by a separate imprint came after the outcry, and in response to it.
Moreover, owners of the books in ebook form are apparently also seeing their copies updated, rather than retaining the books they bought or being given a choice between keeping Dahl's work and getting the unlabeled collaboration.
https://www.thetimes.co.uk/article/roald-dahl-collection-books-changes-text-puffin-uk-2023-rm2622vl0
The response does seem to have dissuaded Dahl's US and European publishers from following suit with the changes, at least for now.
It may be that Dahl's sales are down, but thus far no one making that claim has presented sales data (that I've seen). Dahl's alleged unpopularity seems to be belied by the fact that they remain in print and keep being adapted into films and major stage productions.
The idea that works whose draw has always been their subversive nastiness will gain sales by being made less nasty at least calls for some evidence.
"apparently owners of the books in ebook form"
Owners? I don't think that's the right word here.
Probably not.
Ebook buyers may want to look into backing up their purchases with Calibre, as a hedge against these sorts of shenanigans.
The discussion surrounding large language models (LLMs) and their relationship to AGI has been utterly horrendous. I believe LLMs and their intellectual descendants will be as transformative to society as the transistor. This technology deserves careful analysis and argument, not dismissive sneers. This is my attempt at starting such a discussion.
To start off, I will respond to a very common dismissive criticism and show why it fails.
>It's just matrix multiplication; it's just predicting the next token
These reductive descriptions do not fully describe or characterize the space of behavior of these models, and so such descriptions cannot be used to dismiss the presence of high-level properties such as understanding or sentience.
It is a common fallacy to deduce the absence of high-level properties from a reductive view of a system's behavior. Being "inside" the system gives people far too much confidence that they know exactly what's going on. But low-level knowledge of a system without sufficient holistic knowledge leads to bad intuitions and bad conclusions. Searle's Chinese room and Leibniz's mill thought experiments are past examples of this. Citing the low-level computational structure of LLMs is just a modern iteration. That LLMs consist of various matrix multiplications can no more tell us they aren't conscious than our neurons tell us we're not conscious.
The key idea people miss is that the massive computation involved in training these systems begets new behavioral patterns that weren't enumerated by the initial program statements. The behavior is not just a product of the computational structure specified in the source code, but an emergent dynamic that is unpredictable from an analysis of the initial rules. It is a common mistake to dismiss this emergent part of a system as carrying no informative or meaningful content. Just bracketing `the model parameters` as transparent and explanatorily insignificant is to miss a large part of the substance of the system.
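To make that emergence point concrete, here is a deliberately tiny sketch of my own (a toy illustration, not anything from the models under discussion): a perceptron whose source code never mentions logical OR, yet which ends up computing OR because that rule exists only in its training data. All names and data below are invented for the example.

```python
# Toy illustration: the target behavior (logical OR) appears nowhere in the
# program statements below; it is acquired from examples via the classic
# perceptron update rule and ends up encoded entirely in the trained weights.

def step(z):
    return 1 if z > 0 else 0

def train(samples, epochs=10, lr=1.0):
    w = [0.0, 0.0, 0.0]  # bias weight + two input weights, all zero initially
    for _ in range(epochs):
        for (x1, x2), target in samples:
            x = (1, x1, x2)  # prepend a constant bias input
            y = step(sum(wi * xi for wi, xi in zip(w, x)))
            for i in range(3):  # perceptron learning rule
                w[i] += lr * (target - y) * x[i]
    return w

def predict(w, x1, x2):
    return step(w[0] + w[1] * x1 + w[2] * x2)

# The concept being learned lives only in the data, never in the code.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train(data)
print([predict(w, a, b) for (a, b), _ in data])  # matches the OR truth table
```

The trained weights, not the program statements, carry the learned behavior; scale that idea up by many orders of magnitude and you get the emergent dynamic described above, which no analysis of the initial rules alone would reveal.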
For the sake of sparking further discussion, I offer a positive argument for the claim that LLMs "understand" to a significant degree in some contexts. Define understanding as the capacity to engage significantly with some structure in appropriate ways and in appropriate contexts. I want to argue that there are structures that LLMs engage with in a manner that demonstrates understanding.
As an example for the sake of argument, consider the ability of chatGPT to construct poems that satisfy a wide range of criteria. There is no shortage of examples of such poems, so I won't offer one. The set of valid poems sits along a manifold in high-dimensional space. This space is highly irregular: there is no simple function that can decide whether some point (string of text) is on the poem-manifold. It follows that points on the manifold are mostly not simple combinations of other points on the manifold. Further, the number of points on the manifold far surpasses the examples of poems seen during training. Thus, when prompted to construct a poem following arbitrary criteria, we can expect the target region of the manifold to be largely unrepresented by the training data.
We want to characterize the ability of chatGPT to construct poems. We can rule out simple combinations of poems previously seen. The fact that chatGPT constructs passable poetry given arbitrary constraints implies that it can find unseen regions of the poem-manifold in accordance with the required constraints. This is generalizing over samples of poetry to a general concept of poetry. Still, some generalizations are better than others, and neural networks have a habit of finding degenerate solutions to optimization problems. The quality and breadth of poetry given widely divergent criteria is an indication of whether the generalization captures our concept of poetry sufficiently well. From the many examples I have seen, I judge its general concept of poetry to model the human concept well (at least as far as poetry that rhymes goes).
So we can conclude that chatGPT contains some structure that well models the human concept of poetry. Further, it engages with this model in appropriate ways and appropriate contexts as demonstrated by its ability to construct passable poems when prompted with widely divergent constraints. This satisfies the given definition of understanding.
>It's just matrix multiplication; it's just predicting the next token
This is not a criticism. This is an explanation.
The criticism is that LLMs repeatedly produce nonsensical or logically incoherent utterances, and can be easily and reliably induced to do so. Those are commonly handwaved away with "it's just growing pains, we just need to train them more", or something to that effect. What the skeptics are saying is that, no, in fact, those failures are fundamental features of those models, best explained by the models being just - to use Scott's terminology, if Gary Marcus's is offensive - simulators.
When an LLM proclaims that "a house weighs the same as a pound of feathers", it's better not to think of it as a reasoning error, but as a demonstration that no reasoning happens within it in the first place. It's just retrieving common utterances associated with "pound of feathers", in this case, comparisons to "pound of [something heavy]", and substitutes the terms to match the query.
When an LLM says that "[person A] and [person B] couldn't have met, because [person A] was born in 1980 and [person B] died in 2017, so they were not alive at the same time", it's not failing to make a logical argument, it's mimicking a common argument. It can substitute the persons' actual birth/death dates, but it cannot tell what the argument itself, or the concepts within it, represent.
And look, the argument may be wrong, you're free to disagree, but you need to actually disagree. You're not doing that. Your entire point boils down to, people are only saying [that one line you cherry-picked from their arguments] because they fail to understand basic concepts. Honestly, read it, it does. Now, if you want the discussion to be non-horrendous, try assuming they understand them quite well and are still choosing to make the arguments they make.
>This is not a criticism. This is an explanation.
Not an explanation, but rather a description. People treat it as an explanation when it is anything but, as the OP explains.
>When an LLM proclaims that "a house weighs the same as a pound of feathers", it's better not to think of it as a reasoning error, but as a demonstration that no reasoning happens within it in the first place.
Failure modes in an LLM do not demonstrate a lack of understanding/reasoning/etc. any more than failure modes of human reasoning demonstrate a lack of understanding/reasoning/etc. in humans. This is an example of the kind of bad argument I'm calling out. It's fallacious reasoning, plain and simple.
>What the skeptics are saying is that, no, in fact, those failures are fundamental features of those models, best explained by the models being just - to use Scott's terminology, if the Gary Marcus's one is offensive - simulators.
The supposed distinction between a reasoner and a simulator needs to be demonstrated. The "simulated rainstorm doesn't get me wet" style arguments don't necessarily apply in this case. If cognition is merely a kind of computation, then a computer exhibiting the right kind of computation will be engaging in cognition with no qualification.
>but you need to actually disagree. You're not doing that.
I'm disagreeing that a common pattern of argument does not demonstrate the conclusion they assert. That is a sufficient response to a fallacious argument. Now, there's much more to say on the subject, but my point in the OP was to start things off by opening the discussion in a manner that hopefully moves us past the usual sneers.
> than failure modes of human reasoning demonstrate a lack of understanding/reasoning/etc in humans.
Yes, it does, or in that particular human anyway. Not that humans are all that rational, which suggests rationality isn't that important to consciousness.
It may be that some kind of consciousness is emerging here, but the burden of proof is on the true believers rather than the skeptics.
>Failure modes in an LLM do not demonstrate a lack of understanding/reasoning/etc any more than failure modes of human reasoning demonstrate a lack of understanding/reasoning/etc in humans.
Failure of human reasoning does in fact demonstrate lack of understanding in humans.
I mean, I realize what you're actually trying to say - that an individual failure of an individual human does not disprove the potential for some humans to succeed. But that's exactly the fundamental issue with your line of argumentation - you're assuming the discussion is philosophical (and that a bunch of AI specialists literally don't understand the concept of emergent behavior, etc.), while it's actually empirical. Nobody denies neural networks can exhibit [whatever marker of general intelligence you choose], because proof by example: human beings. The whole disagreement is about whether the actually existing LLMs do. And, further down the line, whether the current direction of research is a reasonable way to get us the ones that do. (I mean, to reuse your own metaphor, you could, theoretically, discover working electronic devices by connecting transistors randomly. It does not constitute a denial of this possibility to claim that, in practice, you won't.)
>It's just matrix multiplication; it's just predicting the next token
This is as uncompelling a response as "computers are just flipping ones to zeroes or vice versa, what's the big deal?"
> The key idea people miss is that the massive computation involved in training these systems begets new behavioral patterns that weren't enumerated by the initial program statements.
Yes, I'm not sure why this isn't obvious. There's an adage in programming, "code is data". This is as profound as the equivalence between energy and matter. LLMs and other learning models are inferring code (behaviour) from the data they're trained on. In fact, a recent paper showed that a transformer augmented with external memory is Turing complete.
So basically, learning models could learn to compute *anything computable* if exposed to the right training set. What's particularly mind boggling to me is that it's often people familiar with programming and even learning models that are overly dismissive.
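As a minimal illustration of the "code is data" adage (my own toy example, not from the paper mentioned above): one generic interpreter, with all of the interesting behavior supplied as a data table. A learning model is, in effect, a machine for discovering such tables from examples. The names and the parity "program" here are invented for illustration.

```python
# "Code is data" in miniature: a single generic interpreter whose behavior
# is determined entirely by a transition table passed in as data.

def run(table, start, bits):
    """Drive a finite-state machine whose transitions are pure data."""
    state = start
    for b in bits:
        state = table[(state, b)]
    return state

# A "program" expressed as data: track even/odd parity of the 1-bits seen.
parity = {("even", 0): "even", ("even", 1): "odd",
          ("odd", 0): "odd", ("odd", 1): "even"}

print(run(parity, "even", [1, 0, 1, 1]))  # three 1-bits, so prints "odd"
```

Swap in a different transition table and the identical interpreter computes something else entirely; that is the sense in which behavior can be inferred from data rather than enumerated in source code.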