I think psychedelics in general have varying degrees of affinity for various receptors, but I can't speak to the difference you find here... are you certain it wasn't the setting?
I find that usually settings and substance I decide to take are tightly related.
The most "this is very selective for 5-HT2A" drug I ever tried is DOM, and I do agree it feels more "cerebral" than other psychedelics.
Interesting, good new thoughts and questions... (I just finished Pollan's "How to Change Your Mind" last weekend, so it's on my mind. So to speak.) I was wondering about the 5-HT1A/5-HT2A receptors as well, vis-a-vis SSRIs/SNRIs, not being a trained anything except musician, and my question boils down to this: as I understand it, the inhibitor drug's purpose is to inhibit re-uptake, to free up serotonin or norepinephrine so that ostensibly you have more to go around, so you ...what? Feel or think more easily? But they are functioning by inhibiting the receptors, which may be receivers for other things as well, right? So what might a person be missing by inhibiting the receptors?
The receptors are different from the transporters. An SSRI inhibits the transporter. Something that inhibits a receptor is called an antagonist. An example of a 5-HT2A antagonist is cyproheptadine, which is used to treat serotonin syndrome.
Cyproheptadine is also used to promote appetite and weight gain (especially but not only used in people who stop eating when they are on stimulants).
Its use in treating serotonin syndrome is theoretically motivated, but the current toxicology literature contains vanishingly little evidence that it is actually helpful for this purpose.
Apparently calicivirus (like norovirus) works by releasing a toxin that causes a serotonin storm inducing vomiting, so I could see the connection. (Learned this after moving to Sweden and learning the hard way about "vinterkräkssjukan", the Winter Vomit Disease, which is an annual event here hence the mechanism is studied heavily. Also, you can feel it coming on by the tingly rush that I would have normally associated with psychedelic drugs, were I not in Sweden.)
Perhaps better examples of therapeutic 5-HT2 antagonists are the atypical antipsychotics. It seems that this explains much of their advantage over the older generations, which are just DA antagonists.
Transporter means some mechanism which transports e.g. serotonin into the cell. After that it's not available anymore to activate receptors. An inhibitor slows down this transport process, which leaves more serotonin available to play on receptors, right?
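That steady-state picture can be sketched as a toy model (my own illustration with made-up rates, not real pharmacokinetics):

```python
# Toy steady state: serotonin is released at a fixed rate and cleared by
# the transporter at a rate proportional to the current synaptic level.
# An SSRI is modelled as simply cutting the transporter's rate constant.

def steady_state_level(release_rate, reuptake_rate):
    """Equilibrium where release == reuptake: level * reuptake_rate = release_rate."""
    return release_rate / reuptake_rate

baseline = steady_state_level(release_rate=1.0, reuptake_rate=0.5)
with_ssri = steady_state_level(release_rate=1.0, reuptake_rate=0.25)  # transporter half-blocked

print(baseline)   # 2.0
print(with_ssri)  # 4.0 -- slower transport leaves more serotonin to play on receptors
```

So yes: inhibiting the transporter doesn't create more serotonin, it just slows the clearing, raising the level available at the receptors.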
I had a recent experience in meditation that made me think of prediction error: I felt like my body was both 'really big' and 'really small' at the same time. I eventually interpreted this as being something like 'the error bars on my body have blown up'. I figured my brain must be computing 'where my body is in space', that this computation must have some accompanying 'error' signal with it, and (maybe from the act of relaxing while staying alert?) my brain eventually said 'huh, I guess this could maybe be anywhere'.
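The 'error bars blowing up' reading can be made concrete with a toy Gaussian fusion, the precision-weighted update standard in predictive-coding models (the numbers here are arbitrary, my own illustration):

```python
# Toy body-position estimate: combine a Gaussian prior with a noisy
# observation by precision weighting. When sensory noise is huge (relaxed,
# attenuated input), the posterior variance stays near the prior's --
# the "error bars" on where the body is barely shrink.

def fuse(prior_mean, prior_var, obs, obs_var):
    """Precision-weighted fusion of two Gaussian estimates of one quantity."""
    w = prior_var / (prior_var + obs_var)          # weight given to the observation
    mean = prior_mean + w * (obs - prior_mean)
    var = prior_var * obs_var / (prior_var + obs_var)
    return mean, var

_, v_sharp = fuse(0.0, 1.0, 0.1, 0.01)    # crisp proprioception: tight posterior
_, v_fuzzy = fuse(0.0, 1.0, 0.1, 100.0)   # attenuated input: posterior stays wide
print(v_sharp < 0.05, v_fuzzy > 0.9)      # True True
```

Under this reading, 'this could maybe be anywhere' is what an estimate with near-unbounded variance feels like.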
I've noticed, too, when my kids start fussing aggressively, that if I'm really mindful, I can notice some feeling like "this loud noise should not be happening now" - which looks like prediction error, and thus suffering.
As for why money feels good (even though it's a big surprise), I think this is probably because it changes a bunch of other predictions about future needs being met. And, anecdotally, I've found that I sometimes have a negative emotional response when I see bitcoin going up dramatically in one day, even though I like bitcoin and hold a position in it. I think this is because it isn't really updating predictions about my future needs being met (I expect these to be met regardless), and instead is just a general 'some unexpected thing happened' type of prediction error.
If you take this feeling to its limit, I think you'd simultaneously feel like 'you' were everywhere and nowhere all at once. This sounds like a plausible description of ego loss. So maybe the mechanism of ego loss is that your error bars on 'where you are' grow and grow until they are effectively unbounded.
There are studies of meditation that have shown that it reduces blood flow to a region of the brain which helps locate you in space. As a result, you can reach a point where you cease to be able to position yourself and do feel, "both nowhere and everywhere."
I experienced the feeling prior to having read about the studies. The experience was preceded by multiple occurrences of the feeling of (sort of) leaving my body (as if sitting up from my lying position).
One thing you notice with insight practice is that your mind tends to organise space so that "I" am somewhere behind my eyes, my legs are "below", my arms slightly above there, and the door is "over there". Of course, your legs are 0 inches away from your body, as are your arms. The mind uses the position of the eyes as sight is such a loud sense, I guess. During meditation you have a much more refined sensory input for the mind to model off. So, yes, the door may be some distance from your body, but how far is largely dependent on sight, which you've restricted. Generalise this, and the mind starts to realise all perceptual space is relative to the current input. Feeling large and small at the same time (Alice in Wonderland syndrome, I've heard it referred to as) is the result; there isn't a very clear perception of space, so the size of the space you occupy could be pretty much anything.
Exactly - all of these spatial feelings are the result of computation; they aren't actually being sensed. "I" is a bit of computation that stitches a bunch of distances and angles together.
If we're taking "sight" to mean "computation by brain of some visual data", I'm not sure there's a meaningful distinction between "fuzzed"(?) proprioception during meditation and "not sensing anything". It may be that your brain, having had its normal chatter/monologue stilled, and receiving very little of touch/sight/sound inputs, pays way more attention to proprioception, leading to the odd "floaty" feelings (though this is pure speculation on my part).
Re: ego death, I've assumed it would involve leaving behind concrete thoughts and concepts and moving into a mental realm of pure abstract ideas (i.e. there would be no "I" or subjective consciousness there to experience it); it seems communicating anything about a state of mind beyond concrete thought (using words, concepts, and other concrete thoughts) is more or less impossible. Potentially the large/small feelings are an (early?) step in the process but I've not experienced ego death myself.
I think 'ego death' is just quite a loose term that a lot of people throw around speculatively without a precise meaning. I'm fairly experienced with insight meditation.

Firstly, when your mind starts loosening its organisation of space as described, the thing you start to notice with proprioception and thought is that *thoughts tend to have a location*. Verbal thoughts are often fuzzily somewhere between the ears, or perhaps towards the throat, visual thoughts may appear larger or smaller or around the eyes somehow, emotions tend to be bodily, and a lot of it is related to muscular tension. So this expansive feeling of perceptual space not having as defined a size or centre also applies to thoughts. This can create a perception that thoughts and proprioception are things that come and go in this larger space of perception. This is just a perception, though, and not necessarily an ontological truth.

Secondly, once you go far enough with this type of practice, the 'volume' of thought and proprioception goes right down and eventually vanishes. However, there is still a perception of, well, being a perceiver, perceiving something, even if that something is a giant space or a nice feeling or a kind of confusing not-quite-nothing-but-not-quite-something. Eventually the mind figures out even this tension and the perceiver vanishes too - until it arises again and you become self-aware.
Whether this is what people mean by ego death, I'm not sure, I guess that term could refer to a spectrum of the mind becoming less interested or caught by the usual perceptions of body, thought and emotion. I would be skeptical of anyone who claims to have 'ego-deathed (died?)' in any permanent way. In my experience that's kind of impossible, though you can train the mind to be less attached to the usual train of thought and perceive it mostly as background noise.
Perhaps this is the origin of Leary's notion in _Exo-Psychology_ that psychedelics develop humans' capacity to navigate and feel comfortable in zero G.
> I had a recent experience in meditation that made me think of prediction error: i felt like my body was both 'really big' and 'really small' at the same time
I often have the same experience, not when meditating, but when suffering from a fever.
I experience very similar feelings while falling asleep. Although, I would say more while letting my mind drift, so perhaps not while following a genuine path to sleep. I begin to feel simultaneously stretched out, as if my limbs were many metres away, and incredibly small, as if the place in my body where I perceive my consciousness to be was shrinking endlessly.
Would you say that your feelings align with a description like that? I know that people do experience their limbs extending as they drift into sleep, but I have found that my experiences do not quite align with the accounts I have heard. Yet, I am not a regular meditator.
> one remaining problem here is why and how some prediction errors get interpreted as rewards
This happened to me a few years ago and it wasn't a pleasant experience. My career took a very sharp, but positive turn and it made me realize that my priors on myself were really off. I ended up in a position that I was shooting for but got there about a decade earlier than I thought I would. It made me realize how low my self-esteem had been, how bad at relationships I was (long story how this fits), and a few other matters.
I have one other experience of a prediction error turning positive in investing. It made me realize that I had just gotten lucky, that I didn't really know what I was doing, and that I should just stick to index funds (though perhaps index funds are no longer the best long-term vehicle if certain economists are to be believed).
Index funds sort of ride on the price discovery carried out by active traders. The more money is “dumb,” the more mis-pricing there will be for active traders to exploit. As active trading becomes less popular it becomes more lucrative.
That's a self-correcting problem though. I think as an amateur investor with no special knowledge you still shouldn't expect to be able to exploit the market.
Sounds like another one of the 'trust me, I'm an expert' self-delusions so typical in 'investing'. As far as I understand, dart-throwing monkeys still earn more on average than the typical 'trader', 'investor', or 'expert'. 'Successful traders/investors' are mostly just the surviving lucky ones in an ocean of losers. Sure, there are superforecasters these days. But they operate with shorter time frames, scenarios, and probabilities with error margins. They cannot predict the next 20 years of any stock either.
Coming back to 'beating dumb money' - there will be lucky individuals but way more losers. Professionals like banks often just live off the fees they take and do very poorly in real gains from their investments. A totally different thing is taking advantage of insider knowledge (employees, management, owners, venture capitalists, main investors) or silently buying into something cheap (e.g. penny stocks, crypto, etc.) before talking housewives into putting their money into the thing and then selling high to them.
I had a very similar experience - I unexpectedly got a very large raise at work (I'm in finance, most of my colleagues at my level had just been poached), and the raise itself was more than I used to make in a year pre-working-in-finance. I was delighted at the time, got home late that night, and then broke down and cried - it was a sufficiently large emotional shock to push me (after a few steps) into the effective altruism camp hard, and I've been there since.
>First of all - predictive coding identifies suffering with prediction error. This conflicts with common sense.
This is something I really don't get. The predictive coding model seems super interesting and seems to have a lot of explanatory power, but it seems weird to suppose that it is a global model that explains everything brain-related. Why identify suffering with prediction error, instead of the more common-sense idea that prediction error contributes to, or is a type of, suffering?
I think the intuition here is that there should be _some_ concrete way of summarizing what suffering is. If we ask the question "What is it that all suffering has in common?", the only thing I can think of, in common-sense terms, is "it doesn't feel good" - but this has to do with qualia, and not anything we could use as part of a predictive model to understand _how_ the world works.
The 'suffering is prediction error' hypothesis meshes with the key insight of Buddhism, i.e. that 'suffering comes from attachment/desire/clinging'. It looks like essentially the same argument: you feel bad in situations X, Y, Z because in each of those situations, you have an expectation that differs from reality.
>I think the intuition here is that there should be _some_ concrete way of summarizing what suffering is.
Yes indeed, but I think the answer is really obvious: suffering is the qualia associated with things that decrease your selective value (during evolutionary times obviously, not right now). And yes, having a messed-up prediction can decrease your selective value quite a bit, but so can perfectly predicted traumas of various kinds.
>The 'suffering is prediction error' hypothesis meshes with the key insight of buddhism, i.e. that 'suffering comes from attachment/desire/clinging'.
Yes but, while this is an interesting idea, I think wanting it to be the nature of suffering instead of something that can cause suffering among other things is strange and obviously false.
> I think wanting it to be the nature of suffering instead of something that can cause suffering among other things is strange and obviously false.
I agree. It feels very forced. My current best hypothesis for the nature of suffering is rather that it is wanting to avoid something.
If you are trying to achieve a thing, and predict that it goes the way you prefer, but then it turns out that your prediction was wrong, you suffer (because you wanted to avoid that it goes that way). So in a sense your prediction being wrong is linked to your suffering.
But if you don't care either way how a thing goes, predict that it goes a certain way, but then it turns out that your prediction was wrong, you don't suffer (because you didn't want to avoid that it goes that way).
And if you are trying to achieve a thing, predict that it does not go the way you prefer, and it turns out that your prediction was right, you still suffer (because you wanted to avoid that it goes that way, even though you predicted your actions were not sufficient to achieve that.)
This would explain why people (especially anxious people, but not only) tell themselves something won't work out, when they really really want it to work out. They are trying to protect themselves from this type of suffering. They work hard at making the negative prediction, can list all the reasons it probably won't work out, and can become very convinced it won't. They are often quite shocked when it does work out well.
HOWEVER, the strategy really really really doesn't work. When the situation actually doesn't work out as they wanted, they not only suffer just as much as if they had been predicting/believing it would work out; that suffering comes on top of all the suffering they went through while they were predicting it wouldn't.
(And then sometimes they get the added suffering of 'and this proves that I really suck/my life really sucks/all my negative predictions will come true forever'. Which then creates more suffering despite some future negative predictions being correct.)
So clearly there's something else/more going on here besides/in addition to/on top of prediction error.
> yes indeed but I think the answer is really obvious: suffering is the qualia associated with things that decrease your selective value
I think the idea is to explain away qualia, not double down on it. Suffering as some kind of internal dissonance or conflict resulting from multiply layered prediction errors would do that.
Also, suffering doesn't necessarily reduce your selective value either as your ability to tolerate suffering can be adaptive. I don't think viewing prediction error in a purely first-order sense will explain all of our complexity, but the brain could be applying similar prediction machinery for higher-order thoughts as well, which would lead to suffering from anxieties and other thoughts that are far divorced from physical circumstances.
>I think the idea is to explain away qualia, not double down on it. Suffering as some kind of internal dissonance or conflict resulting from multiply layered prediction errors would do that.
Qualia just means a subjective experience. I do not understand how "suffering as some kind of internal dissonance or conflict resulting from multiply layered prediction errors" explains away qualia any more than "suffering as some kind of thing that usually tends to decrease selective value" does.
>Also, suffering doesn't necessarily reduce your selective value
Yes, of course. But the idea is that suffering comes from things (physical pain, emotional pain) that usually, or even just relatively frequently, decreased selective value in the past. Similarly, pleasure (from sex, food, socializing, etc.) comes from things that usually increased our selective value in the past. Suffering/pleasure is a stick/carrot system that our genes use to make us replicate them more efficiently.
> Qualia just means a subjective experience. I do not understand how "Suffering as some kind of internal dissonance or conflict resulting from multiply layered prediction error" explain away qualia any more than "Suffering as some kind of thing that usually tend to decrease selective value"
Because plenty of factors that don't cause suffering also decrease selective value, so it's too broad a definition. Also, factors which cause suffering don't necessarily reduce selective value, so it's also too narrow a definition. And suffering is presumably an internal state, which the definition fails to capture at all.
The definition I suggested at least focuses on the internal subjective mental state itself that purportedly characterizes suffering. It's not fully satisfactory since qualia are too wooly to be so trivially summarized.
>Because plenty of factors that don't cause suffering also decrease selective value,
I don't think so. It seems to me that all the fitness-decreasing things that natural selection could reasonably be expected to work on are in fact associated with suffering. I.e., for natural selection to be able to act, they must be things that were present during evolutionary time and that could be at least partly avoided by actions. What would be your counterexamples?
>Also, suffering is an internal mental state, so your definition just fails to capture enough.
My understanding is that there hasn't been any even vaguely convincing theory explaining the presence of qualia. So mine (I say mine, but I strongly suspect that this is the common theory for evolutionary biologists) does not explain qualia indeed, but predictive processing does not either.
>If we ask the question "What is it that all suffering has in common"?,
A few years ago I came to the answer that "suffering is the persistent perturbation of a negative feedback loop" - the qualitative equivalent would be "get me away from this". I think this is roughly consistent with the prediction error hypothesis, except if it's possible to suffer under a completely relaxed prediction error-resolver - if you're no longer internally adjusting to new stimulus data, but still suffer anyway, that would be an instance of suffering without a negative feedback loop (or alternatively, with a negative feedback loop of strength 0).
It made more sense to me when I thought of "prediction error" as another way of saying "deviation from a set point". I hope this is correct because I think the [suffering = prediction error] model isn't useful without some consideration of set points, as well as allowance for different set points to vary in their flexibility. That's the only way I can accept a model that includes suffering due to electrocution and suffering due to a pay cut under the same "prediction error" umbrella.
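The set-point framing can be sketched as a minimal negative feedback loop (my own toy; reading 'suffering' as the absolute deviation is an assumption of the illustration):

```python
# Toy negative feedback loop: each step the system relaxes toward its
# set point, and "suffering" is read off as the size of the deviation.
# A rigid set point (gain of 0) leaves the deviation -- and the
# suffering -- persistent, matching the "persistent perturbation" idea.

def simulate(state, set_point, gain, steps):
    deviations = []
    for _ in range(steps):
        error = state - set_point
        deviations.append(abs(error))
        state -= gain * error          # feedback pulls state toward the set point
    return deviations

print(simulate(state=10.0, set_point=0.0, gain=0.5, steps=5))
# [10.0, 5.0, 2.5, 1.25, 0.625] -- the perturbation resolves and "suffering" decays
print(simulate(state=10.0, set_point=0.0, gain=0.0, steps=3))
# [10.0, 10.0, 10.0] -- an inflexible loop never resolves it
```

Varying the gain is one crude way to model set points differing in flexibility: electrocution perturbs a very rigid set point, a pay cut a more adjustable one.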
It is often said in the fire poi/"flow arts" community that practicing on acid - and other psychedelics, but mostly acid - is god's own cheat code. This matches up very nicely with George's theory about the proper use of psychedelics.
The only pedantic addition I'd make to this is that I don't think all "introspection" done on psychedelics results in weirdness.
For example, you could introspect vision and figure out: when I see a repeating pattern I don't actually see the "true" number of objects; instead, my mind kind of infers it as a repeating pattern and translates it into however many objects it wants, so I end up perceiving the number of stairs based on my rate of exhaustion, or more/fewer posts in a fence depending on how hard I focus on it.
But if you introspect on concepts where there's no "reasonable" explanation to be found or no particular insight left to gain - death, or meaning, or consciousness itself, or "true love" or whatever - and you introspect specifically because those concepts are bothering you, then it gets weird. This is not to say doing so is always bad. I used to be afraid of death (like, existential dread lasting half an hour every few days or weeks for 10+ years afraid) and after 500mcg and 14 hours that basically stopped... maybe I got stuck with an unadaptive mental model as a tradeoff, but then I'm fine with the tradeoff from a negative utilitarian perspective. I know many other people that walked through these kinds of issues on psychedelics. But there's certainly a point where it gets weird.
In other words, I think there are facets of the mind that it can introspect with a small chance of it going bonkers (e.g. sight, sound), and facets where the risk is high (especially given repeated trips, high dosages, and/or predispositions for certain types of psychosis).
As an aside, I find it interesting that quitting alcohol abuse has a high success rate by either joining a strongly religious community (e.g. AA) or taking psychedelics, which would fit the idea of "build a weird mental model to allow your frontal cortex to intervene in a very well-formed set of patterns". But then again, those are small n studies, and psychedelics seem to work for everything in small n studies in ways that really contradict the behaviours of users (e.g. 80% efficacy in terms of smoking cessation)
Bill W., founder of AA, apparently tripped on belladonna at Towns Hospital in NY in 1934 for his own mystical insight that started his sobriety, and was *very* interested in the LSD research and had several sessions in LA with Sidney Cohen in 1956.
I think that psychiatry as a discipline sometimes overemphasises the role of specific receptor subtypes and underemphasises the role of where in the highly specialised structure of the brain those receptors are, in terms of the type of cognition they are involved in.
In terms of reward and reward prediction error, I also feel that there is a wealth of research on dopamine and the dopaminergic system being more involved in this.
If we recall that all of the brain's dopamine is produced in the substantia nigra and ventral tegmental area, and then projects into the limbic system via the mesolimbic pathway, and into the rest of the cortex, frontal cortex first, via the mesocortical pathway, then we can consider the effects of dopamine on reward.
There is a wealth of data on dopaminergic stimulation of the nucleus accumbens (limbic) as being involved in reward and reinforcement learning. There is also evidence of the orbitofrontal cortex connections to the limbic system being critically involved in reward, but also in reward anticipation and reward prediction errors. Rolls writes extensively on this, for example. DeYoung's studies on brain volume and personality find that OFC volume and extroversion are positively correlated, which he interprets as fundamentally reward-seeking.
I don’t know the exact distribution of DA vs 5HT receptors in these regions (and I’m not sure who does), but a common-sense understanding of the emotional impact of serotonergics vs dopaminergics would be that dopaminergics produce euphoria (reward) as well as highly motivating people (reward expectation) to move, socialise, or even do otherwise unrewarding tasks such as cleaning one’s kitchen or revising for an exam when one has ADHD.
I’d also speculate that dopamine is actually the key neurotransmitter signalling pleasant surprise (a positive reward prediction error) and unpleasant shock (a negative reward prediction error). None of these emotions are produced by people taking serotonergics such as SSRIs.
In contrast, I think that both SSRIs primarily stimulating 5HT1A and psychedelics primarily stimulating 5HT2A are actually involved more in *changing* the weighting of the neuronal connections which were making those prediction errors, through plasticity, which psychedelics in particular have been shown to increase.
Studies on gambling back this idea up too. In rat brains, an unexpected reward is accompanied by a DA release, whilst a loss is associated with a 5HT release. The interpretation here was that DA affects plasticity by making the neurons that fire together wire together, making the animal feel like “I really want to do that movement or behaviour again!” and thus explaining DA’s role in habit formation and addiction, whereas 5HT is involved in increasing plasticity by reducing signal weights when a prediction error is detected. Thus, the serotonergic system is involved in the recognition at some level that “this prediction did not work, I need to change my behaviour and do something different”, which is massively enhanced if a person is taking SSRIs or psychedelics.
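That interpretation of the gambling studies could be caricatured as a toy update rule (purely my own illustration of this reading, not taken from the studies themselves):

```python
# Toy plasticity rule: dopamine strengthens the connection that just fired
# ("fire together, wire together"), while serotonin released on a failed
# prediction reduces the signal weight, freeing the behaviour to change.

def update(weight, fired_together, da, serotonin, lr=0.1):
    if fired_together:
        weight += lr * da              # unexpected reward reinforces the habit
    weight -= lr * serotonin           # prediction error loosens the weight
    return weight

w = update(1.0, fired_together=True, da=1.0, serotonin=0.0)   # surprise win
print(round(w, 2))  # 1.1 -- habit reinforced
w = update(w, fired_together=True, da=0.0, serotonin=1.0)     # loss detected
print(round(w, 2))  # 1.0 -- weight relaxed, "do something different"
```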
Therefore, I’d be wary of attributing prediction error to 5HT as a cause; it is more an effect of prediction error, producing changes in signal weighting via plasticity to change behaviour.
If you’ve ever used CBT to treat depression or anxiety, you know how people make negative prediction errors, how helping people to recognise this and change their behaviour is the key to treatment, and how an SSRI can really assist this process. SSRIs don’t produce reward (euphoria), reward prediction (motivation) or reward prediction error (surprise/shock), which I think are all dopaminergic. SSRIs help people change the reward prediction errors they are making that terrible things will happen, and thus help them to change their thinking and behaviour.
I wrote this on my phone whilst walking my dog, hence no references and typos.
> If you get $1 million because you're an ordinary middle-class person and a crypto billionaire semi-randomly decides to give you $1 million one day, you will be very happy.
Do we actually know being "very happy" would be the typical instant reaction? I once won $10,000, about 25% of my income at the time, in some promotional lottery I hadn't realized I had been entered in. I was going through a miserable time in life and my reaction was to be angry and annoyed, because it felt like a cruel joke. Only after a few weeks, when I used some of the money to buy a new guitar, did I feel happy to have it.
Maybe that was like stepping in a cold shower and suffering at first but then getting used to it? Or maybe because I was depressed, I didn't have a normal reaction?
I think it mattered that I didn't even know I was entered in the lottery. If someone buys a lottery ticket, perhaps a part of them predicts they will win. Otherwise, why did they buy the ticket?
I think it's very safe to say that "very happy" would be the typical instant reaction. I don't know any other person who has been irritated by receiving a large amount of money.
As another data point, I would not be happy if I checked my bank balance and discovered it had increased by $1million. I'd be a touch sick with worry over whether I'd be blamed for the mix-up, a bit annoyed at the incompetence of everyone involved, somewhat tense over having to call the bank and sort this mess out, etc.
If everything was in order and a plausible reason was adduced to account for the money being there, and it being legitimately mine, *then* I'd be mighty pleased, but until then I'd just be upset.
Fits with Buddhist ideas; you don't suffer because you are in pain, you suffer because you don't think of pain as something that should be happening to you.
Surely what makes a cold shower decrease in pain over time is not an adjustment in your predictions, but a physical adjustment in the contrast in skin temperatures.
> one remaining problem here is why and how some prediction errors get interpreted as rewards
Objectively unlikely events could easily confirm our model of the world rather than violating it, if the model was suitably wrong to begin with.
Plausibly, our money-centric culture—and in particular our parasocial relationships with affluent pseudo-peers—causes our brains to maintain a prediction amounting to "I am wealthy, or soon will be, and any indication to the contrary is some terrible mistake." The conflict of this prediction with reality creates suffering, but we get stuck unable to update away from it, because our media habits keep giving us more fake evidence that it's true (which is maybe not really our fault since whole industries are dedicated to ensuring that very thing). A sudden windfall, then, won't feel surprising; it will feel like the natural order restored.
Now I'm no historian, but I'd guess that if you're a premodern farmer and you have an exceptional crop, your model already represented that as something that could and indeed *ought to* happen; although it was surprising in some statistical sense, you aren't surprised—the world feels more correct rather than less. But if the king rides up and hands you a fistful of glittering gold, aren't you more likely to be suspicious and scared than happy? Surely this is some mistake or trick or trap; surely the other villagers will turn on you, or bandits will kill you for it, or something—anything—will punish you, because suddenly you're outside the natural order.
I think this is still true for a lot of working-class (use your vocabulary of choice here) people - my reaction to anything (especially financially) good happening used to be to squint and ask 'what's the catch'.
> At its limit, this theory says that all action takes place through the creation and resolution of prediction errors - I stand up by "predicting" on a neurological level that I will stand up, and then my motor cortex tries to resolve the "error" by making me actually stand.
Thinking out loud: there are several programming languages whose core primitive operation is some version of this. Prolog being a conventional example.
In Prolog and other languages incorporating such features, 'programming' takes the form of first writing 'facts', and then writing underspecified facts: queries containing unbound variables. The program executes by "resolving the error", figuring out which values the variables need to take to make your underspecified facts true. This is used a lot for logic programming (it's in the name, after all).
If you described this style of language as executing by "resolving pattern-match errors", that would be technically correct. Perhaps "prediction error" functions similarly in cognition: at some level what is technically happening is prediction error being resolved, but in practice that is an obtuse way of framing it.
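The "underspecified facts" idea can be sketched without a real Prolog interpreter. Below is a toy unification-and-backtracking engine in Python (the `parent` facts and all names are invented for illustration, not any real library's API): a query containing unbound variables is "executed" by searching for bindings that erase the mismatch between the pattern and the known facts.

```python
def is_var(term):
    # By Prolog convention, capitalized names are variables.
    return isinstance(term, str) and term[:1].isupper()

def unify(pattern, fact, bindings):
    """Try to extend `bindings` so that `pattern` matches `fact`."""
    if len(pattern) != len(fact):
        return None
    b = dict(bindings)                # copy, so backtracking is free
    for p, f in zip(pattern, fact):
        if is_var(p):
            if p in b and b[p] != f:
                return None           # conflicting binding: unresolved "error"
            b[p] = f                  # resolve the error by binding the variable
        elif p != f:
            return None
    return b

def solve(goals, facts, bindings=None):
    """Yield every set of bindings that makes all `goals` true."""
    bindings = bindings or {}
    if not goals:
        yield bindings
        return
    first, rest = goals[0], goals[1:]
    for fact in facts:
        b = unify(first, fact, bindings)
        if b is not None:
            yield from solve(rest, facts, b)

facts = [("parent", "tom", "bob"),
         ("parent", "bob", "ann")]

# "Who is a grandparent of whom?" -- an underspecified fact:
goals = [("parent", "X", "Y"), ("parent", "Y", "Z")]
print(list(solve(goals, facts)))  # -> [{'X': 'tom', 'Y': 'bob', 'Z': 'ann'}]
```

Real Prolog adds rules, full recursive unification, and cut, but the execution model is the same: the engine keeps trying to cancel the discrepancy between what you asserted and what you left open.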
Off topic: you comment on 2 subjects I find interesting. 1. how do our brains choose goals - crude answer we feel tension when how we want the world to be is different from how it is; 2. what do drugs do to our brains. It would be awesome if you wrote a piece on how the 2 overlap.
So we have: A. how the world is; B. how I perceive the world; C. what I want the world to be; D. how much the disparity upsets me; E. what I am doing about it. We can achieve our goals by moving the world closer to how we want it to be, by looking at things in a different way, by changing what we want, or by stopping caring about the difference. What is the attraction of different drugs? Which of these four ways of reducing that tension do they address? E.g. heroin - at a guess B and D; cocaine - at a guess B and E. But I know nothing in these fields; what does the attraction of different drugs tell us about the model of motivation and consciousness?
My nutshell understanding of Buddhist enlightenment is reaching a point where you understand the world is exactly the way it is supposed to be. End of tension.
> one remaining problem here is why and how some prediction errors get interpreted as rewards... This suggests there's still something we don't understand about prediction error and suffering
I think if you were describing a theory you were less attracted to, then instead of saying that this is one remaining thing we don't understand, you'd say that this is a class of counterexamples that totally blows up the theory.
You've got three examples of the "prediction errors -> suffering", of which one fits (losing money), one doesn't fit (gaining money) and one only fits if you torture the definitions of words until they submit (feeling pain). Most of the other examples of prediction errors that I can think of don't seem to cause suffering; e.g. when watching a movie it's not at all unpleasant when the camera cuts to another angle.
I think there's a lot of good in predictive processing, but this particular "prediction errors -> suffering" aspect seems a bit of a dead end to me.
> then you'd say that this is a class of counterexamples that totally blows up the theory.
I don't see how that follows. Mathematics contains all sorts of conjectures that we believe to be true but haven't yet been proven to be true. Historically, most such conjectures somehow turned out to be true despite the fact that we couldn't precisely connect the dots at the time.
It could easily be true that predictive coding has simple first-order mapping to some types of suffering, but requires more complex higher-order mappings to other types of suffering that aren't immediately apparent.
> Most of the other examples of prediction errors that I can think of don't seem to cause suffering; e.g. when watching a movie it's not at all unpleasant when the camera cuts to another angle.
Or maybe watching a movie primes you to expect such cuts. It also primes you to expect decent acting, but if the acting is horrible I certainly suffer enough to turn the movie off.
Consider whether an adult from a hundred years ago who's used to watching Charlie Chaplin would suffer during some modern movies with super-rapid cuts, shaky cams and blasting audio tracks.
I find the idea that we shouldn't use psychedelics for introspection a bit troubling: I've been studying up on psilocybin-based psychotherapy lately, and it seems to get very promising results in treating chronic depression and anxiety. Yet the entire model of therapy is based on internal introspection: you give people an extremely high dose of psilocybin, and then you put a blindfold on them and play gentle music so they aren't distracted by the outside world and can focus on internal introspection. Could this be a dangerous therapy model to pursue? The studies done by Dr. Roland Griffiths seem to show good outcomes, but I'm not sure how well you can measure "weirdness" after treatment.
> the 10% of Americans who use psychedelics mostly don't end out as weird as they did
Huh. All the people I know who are really into psychedelics also have very eclectic beliefs about religion or humanity or whatever. Maybe you're right about dosing (note the "really into", not "10% of the people I know"). Or maybe we've all learned to be ok with some types of cognitive weirdness in a way that it's acceptable to be a yogi barista inventing your own religion and nobody really makes a big deal out of it anymore.
2c on Active Inference Framework and reward/suffering: (context: I have developed an AIF-based model that's being published in Entropy soon, so I can speak to the math/theory side of it - less so to the psych/neurology angle)
Friston is generally cavalier with his writing and this seems to me like an example of that. If you're abstractly just interpreting animals as simple self-regulating systems (trying to stand up, or stay fed, etc) whose only ambition is to maintain homeostasis, the prediction error / suffering metaphor works great, in the following sense: the reward function is endogenous, so any behavior is a self-fulfilling prophecy. But if you're considering a sophisticated agent that's composed of multiple subsystems, capable of foresight/delaying reward, attaching value to symbols, etc, then when isolating an individual loop/subsystem, you must accept that in general the reward function will be exogenous (provided by whatever the relevant desire-generating subsystem is), and the discrepancy between desired and observed/predicted future can be arbitrarily large. This reduces the philosophical issue to an empirical or modeling issue: given a system composed of multiple interacting AIF subsystems, what is the specification of desire that produces the "correct" behavior in each subsystem?
Note also that, while this decomposition is totally compatible with the highest-level system being (approximately) only concerned with its homeostasis, it does not require it, and indeed, it is a fair question whether any real-life animals fit that bill. Indeed, Gaia theorists may well argue that only the biosphere as a whole can count as an endogenously motivated system, with each individual animal or biome being at least partially exogenously motivated...
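The homeostatic reading of "behavior as self-fulfilling prophecy" can be made concrete in a few lines. This is a toy sketch, not Friston's actual formalism, and the setpoint, gain, and numbers are all invented: the agent carries a fixed "prediction" about a quantity it can influence, and the only way to cancel the prediction error is to act, so the prediction makes itself true.

```python
# Toy sketch of active inference as a self-fulfilling prophecy: the agent
# holds a fixed "prediction" (setpoint) about a state it observes, and the
# only way to reduce the prediction error is to act on the world.

def simulate(setpoint=37.0, state=31.0, gain=0.3, steps=40):
    errors = []
    for _ in range(steps):
        error = setpoint - state   # prediction error (the "suffering")
        state += gain * error      # act so the world matches the prediction
        errors.append(abs(error))
    return state, errors

final_state, errors = simulate()
print(round(final_state, 3))       # converges to the predicted 37.0
```

The exogenous-reward case the comment describes would replace the fixed `setpoint` with a signal supplied by some other subsystem, at which point the error between desired and observed futures can stay arbitrarily large no matter how well this loop works.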
Did MK-ULTRA yield any useful information? It might be interesting to know how people react to LSD without knowing they are taking it. I realize this program was insanely evil but it still may have had interesting results.
"This is good insofar as you're suffering less, but bad insofar as you've adjusted to stop caring about a bad thing or thinking of it as something that needs solving [...]."
I'd expect lowering prediction-error==suffering to zero is a complete cure for depression. Think ugh-fields. The less horrible they feel, the easier they are to address.
If Terrible Issue X doesn't compel you to action with the Terrible Feelings gone, perhaps X was never worth addressing in the first place.
I know you’ve been writing about predictive coding for years but some ideas stick out to me now.
Driving is pleasurable (and being a passenger less so) because I get to make and fulfill a lot of correct predictions.
The musicianship in sound engineering is a process of perceiving and resolving prediction errors. You hear the gap between what the mix is and what it “wants” to be, then you move your fingers on the board to bring it in line.
Software design shares this “is vs. wants to be” structure although the feedback loop is much slower. “Wants to be” is the same kind of intuitive, black box oracle. This confuses and frustrates those who do not have the same kind of oracle inside themselves and yearn for some kind of logical, rules-based system to substitute for it.
Would you say you're a generally well-coordinated person? Do you have a tendency to daydream?
I've got a pretty unhealthy relationship with driving - I'm generally an extremely chill person, but despite getting my driving license and not having any objectively negative experiences like a crash, driving terrifies me, and since that reaction is so at odds with my usual temperament, I'm trying to figure out why.
Even in rush hour standstill traffic? Are you commuting a lot by car? Do you have something to occupy yourself with while commuting? Do you like all parts of driving? (Finding a parking spot when they are few, idiots trying to overtake you when there's no space, etc?)
Sure, math is fun or if you’re a Brit I guess maths are fun, but I don’t think the problems you are concerned with here are going to yield to some nth order predicate calculus.
There is a reason they call it meta (beyond) physics. William James knew this. He used nitrous oxide to get a better ‘understanding’ of Hegel.
I think if you insist on sticking to the strictly rational you aren’t going to get where you want to go.
I think the explanation for the last one is very simple: our brain has a set of limiting priors because, evolutionarily, they helped in advancing human society. It was just not good for society to have lots of individuals stepping outside of the beaten path.
Psychedelics widen your thinking by reducing these priors. This is very interesting (and in many cases helpful) to the individual, but historically not to society because peasants should do their field work and cleaners should clean rooms instead of thinking about the meaning of life.
So, the weird psychedelicists are weird from a position of usefulness to society, but maybe not from their own position. And this weirdness doesn't happen as much anymore because society has - at least in some progressive areas - adapted to psychedelics. We now know how to steer psychedelicists so they stay useful to society. Just look at all the post-experience integration trainings and sitter guidance that are out there.
My understanding is that everyone who understands evolution considers group selection practically impossible, so things being "good for society" has no biological explanatory value.
I tend to disagree. Being part of society is essential for humans. If you didn't fit in, you had much less chance to survive. At the least, you would have much less ability to reproduce, because fewer people would marry "that weirdo".
You appear to me to be retreating from the bailey of selection of traits that are bad for the individual but good for society to the motte of selection of traits that are good for both the individual and for society.
I think that group selection might have played some role in the past of human evolution due to the factor of warfare between small groups, so selection for group cohesion was probably a thing. However, you don't need to invoke it at all in order to see that the "widening of priors", even temporary, could carry a pretty heavy risk in the rather perilous ancestral environments. Eat something you're not supposed to, or hunt somewhere dangerous, or drink from a tainted source, and sans modern medicine you're dead meat. And that's even without considerations like overstepping social boundaries, which could turn your own group on you.
Not necessarily I feel, in that case those traits might still be beneficial on an individual level for gaining status in larger groups. Plus you could have smaller coalitions with differential reproductive success within larger groups - like attracts like I guess?
I feel like if it were purely individual level selection, you'd see a slightly different set of traits of some sorts getting elevated. Basically, I can't bring myself to definitively exclude group selection and it even might play a small role in more individualistic societies since material success is frequently determined by the faithfulness of your allies or harmony within a clannish group, and material success has an effect on reproductive success.
Quoting George: "I found it curious that people who take psychedelics for introspection usually end up as religious cranks or burnouts. While people that take psychedelics because "they are fun" don't seem to experience many negative side effects."
- - -
For certain definitions of "introspection," this sounds to me like "People who expect a substance to remedy their existential malaise end up disappointed, but generally satisfied people who use a substance to have a bit more fun usually have fun."
I have a chronic pain condition so I get a lot of opportunity to analyze the experience of pain. Most kinds of pain can in fact be zeroed out completely with sustained application of attention. The trick is, unintuitively, to focus on the pain as precisely as possible. The pain sensation will dissolve and disaggregate into a bunch of distinct sensory signals, none of which carries a "suffering" valence. I think what's happening is you're devoting your whole consciousness to "predicting" what the next moment of pain-sensation will be like, and having done that, the pain does indeed become incorporated into your overall prediction, and thus stops being pain. In keeping with this model, the trick only works if the sort of pain you're attending to is a static (or regularly pulsing) kind.
As soon as your attention wavers, the pain comes back, because you stop actively predicting it. I think pain is a relatively unique sort of signal that resists being ignored, for obvious reasons.
Sorry you are living in a less than optimal state. Glad you have something of a handle on it. I’ve used the same technique and can generally decline opioids even for broken bones. There is pain, yes; suffering, not so much. You are right about it working only on static or regular pain, too. I experienced some random spasms in one of my glutes a couple years ago. Using the infuriatingly useless 1-to-10 scale, I suppose it would come in at a 2. Still, the unpredictability of the cramps caused enough distress to see an MD for a muscle relaxer scrip. Good luck with your technique.
I think this extends even to mental kinds of suffering like anxiety, *if* you're able to not attach some important meaning to the sensation (e.g. a threat warning, or a fence against moral corruption).
"In keeping with this model, the trick only works if the sort of pain you're attending to is a static (or regularly pulsing) kind."
In my experience it's possible to anticipate even "unlikely" painful events, taking away their suffering valence if they occur. Often takes the shape of "being prepared for the worst case", and has no epistemic cost.
Sorry that you have to live with this condition and glad that you seem to have found a good way of managing it. Thank you for the very interesting comment. It is indeed very unintuitive that focusing on the pain makes it disappear!
Hmmmm, seems like high 5-HT2A (optimism, plasticity, sensitivity to stimuli, ease of learning) may also explain why some people get so STUCK in high-gain, high-cost/loss situations such as living with an (originally amazing and still occasionally great) abusive partner. They keep trying to figure it out, make it work, and believe at some deep molecular level that this can be accomplished. They may have successfully applied these strategies to many, many other situations in their lives. They may also assume that their partner sees the world this way too, and is also working hard to improve things (and the abuser may have figured out how to encourage this assumption).
I've seen Michael Pollan's book referenced here; he mentions research indicating down-regulation of the default mode network during psychedelics - does this fit with the induction of a 'bias towards thinking of problems as solvable'? (Maybe.)
The classic paper on the subject (Berg et al. (1998) in Molecular Pharmacology) is one of my personal all-time favourites within the field, with very nice graphs illustrating the difference between various agonists.
PS. And any discussion involving MDMA of course has to include the fact that other 5-HT releasers (such as fenfluramine) don't produce similar effects. Fenfluramine apparently has some sort of recreational (or at least gets-you-buzzed) effects in higher doses, but nothing comparable to the fairly unique entactogenic effects of MDMA. Personally I'm actually wondering about whether part of the reason might be that MDMA is a tryptophan hydroxylase inhibitor, so that it's somehow related to partial depletion of 5-HT rather than indiscriminate excess.
And of course, MDMA is a 5-HT[2A] agonist as well, something which subjectively becomes pretty obvious in higher doses.
I've taken LSD and shrooms a couple dozen times over the past three or four years, "just for fun" — and my beliefs are essentially the same as they were before I started this (still a boring old stick-in-the-mud materialist who insists on skepticism re: Illuminati and Hidden Masters and psychic powers and so forth).
On the other hand, out of two friends who took psychedelics with the intention of doing some "deeper" introspection than I engaged in, one is just the same — still grounded and reasonable — and the other went totally whacky. Not sure this supports the hypothesis in the post... but n = 3, of course.
My two principal LSD experiences as a teenager were playing the first Mass Effect game and watching the Sci-Fi Channel's Dune and Children of Dune TV miniseries in a weekend acid binge around 2007.
Anecdotal effects include a decade-long-lasting sci-fi space-optimism infusion, an intrinsic faith in the Padishah Emperor, and the vague knowledge that the Butlerian Jihad against the AIs leads to space Jesus thousands of years later.
Not sure if I understand everything correctly - but I think @scott says that antidepressants (think SSRIs) push button one, which lets your brain get happier with the world as it is, while psychedelics push button two, which gets you motivated to solve the world's problems right now. I can only speak to my own experience of taking an SSRI for nine months now - it seems to push both buttons inside my brain. I'm way less concerned with anything, and it's gotten way easier to sleep after 45+ years of having a hard time every evening with endless circles of thoughts. And I'm much less reluctant to start anything; I have a much higher drive and motivation to take the first step of any small- to large-scale change I find necessary. My impression is that I'm much less stuck in toxic thinking like "every tiny piece is somehow connected with every other piece - so no way to find out where to begin". In general I've become a much more pragmatic person, with a higher motivation to get things done and less distraction by the endless number of related tiny things.
I have always had this narrative that my cold tolerance shot up inexplicably after starting SNRIs, so that's my selective anecdata related to these theories.
"First of all - predictive coding identifies suffering with prediction error. This conflicts with common sense."
Predictive coding does not actually make this claim; it's solely an operational interpretation of the Bayesian brain hypothesis. Though some researchers, like Thomas Metzinger, have suggested that emotional affect is related to the rate at which uncertainty is resolved within the context of the Bayesian brain hypothesis.
Nonetheless, if it is true that prediction error is related to affect, it's important to understand that prediction error in this context exists at multiple levels in the brain hierarchy. If we have a discussion about you stabbing my arm and you subsequently do so, I will have successfully squashed prediction error at the low levels of the visual perception hierarchy. But the higher levels, extending out to those that might exist in the association cortex, have a probabilistic prior that includes something like "my self-model dislikes bodily harm" and, in the behavioural (active inference) context, "my self-model is not being harmed". This is a prior that has probably developed on an evolutionary timescale. Thus by allowing you to stab my arm I would actually be behaving in a way that incurs dramatically higher expected prediction error (expected free energy).
I don’t know anyone who takes LSD at parties regularly. It is too intense and lasts too long for folks who just want to unwind for an evening. Festivals maybe. But most people I know who regularly take it are more like growth mindset people who just want to experience new things and see what else the brain can do. Not people trying to fix themselves.
I've taken a number of psychedelics specifically for introspection and I can't say they modified my beliefs much, partly probably because I never felt belief that isn't backed up by concrete action can really do anything. But they definitely helped me out to figure out what I needed to do to change my life for the better. I'd say the "crank or burnout" transition can only happen if you have some sorts of prior mystic view of the world that elevates religious or supernatural "secret" types of knowledge over the observed material reality, which might've been more common in the last century, pre-Internet, around when these substances first surfaced.
And as for how acid and transhumanism come together...I didn't have to take those drugs to obtain transhumanist beliefs - rather, I had transhumanist beliefs in the first place that then led me to explore chemically altered states of consciousness just in case they could give rise to any interesting insights, which I'd say they did in a mostly practical way - made me change some of my hobbies, study certain topics and switch occupations for the better. My overall outlook and preferences stayed exactly the same, yet I started to feel like I could be doing more interesting/productive things with my time than I normally used to.
I guess my advice to would-be users is simply to avoid accepting ideas you get on trips in the long-term unless they also happen to entirely make sense when fully sober. You're just vividly exploring alternate possibilities that may or may not make sense in the end. :)
My initial inchoate thought (which I haven’t considered further, just finished work, tired); I wonder how this fits in psychotic depression, BPD or rapid cycling BPAD?
I find it frustrating when authors don't say whether they are talking about presynaptic or postsynaptic 5-HT1A receptors, as the presynaptic receptors are known to have multiple functional activities aside from their "primary" role of negative feedback.
When publishing work related to 5-HT2A, speculations should also consider
1. the role of 5-HT2C receptors, and perhaps the 5-HT(2A)-(2C) heterodimer
2. how the ligands used in animal studies such as DOI have a much greater affinity for heterodimeric receptors like 5-HT(2A)-mGluR(2), -D(2), -CB(1), etc and often should not be conflated with native serotonergic activity.
I think psychedelics in general have various degrees of affinity for various receptors, but I can't speak to the difference you find here... are you certain it wasn't the setting.
I find that usually settings and substance I decide to take are tightly related.
The most "this is very selective for H2A" drug I ever tried is DOM, and I do agree it feels more "cerebral" than other psychedelics
Interesting, good new thoughts and questions... (I just finished Pollan's "How to Change Your Mind" last weekend, so it's on my mind. So to speak.) I was wondering about the H1A/H2A receptors as well, vis-a-vis SSRI/SNRIs, not being a trained anything except musician, and my question boils down to this: as I understand it, the inhibitor drug's purpose is to inhibit re-uptake, to free up serotonin or norepinephrine so that ostensibly you have more to go around, so you ...what? Feel or think more easily? But they are functioning by inhibiting the receptors, which may be receivers for other things as well, right? So what might a person be missing by inhibiting the receptors?
The receptors are different from the transporters. An SSRI inhibits the transporter. Something that inhibits a receptor is called an antagonist. An example of a 5-HT2A antagonist is cyproheptadine; it's used to treat serotonin syndrome.
Thank you. I'll do more research as I can.
Cyproheptadine is also used to promote appetite and weight gain (especially but not only used in people who stop eating when they are on stimulants).
Its use in treating serotonin syndrome is motivated theoretically but current toxicology literature contains vanishingly little evidence that it is actually helpful for this purpose.
Apparently calicivirus (like norovirus) works by releasing a toxin that causes a serotonin storm inducing vomiting, so I could see the connection. (Learned this after moving to Sweden and learning the hard way about "vinterkräkssjukan", the Winter Vomit Disease, which is an annual event here hence the mechanism is studied heavily. Also, you can feel it coming on by the tingly rush that I would have normally associated with psychedelic drugs, were I not in Sweden.)
Perhaps better examples of therapeutic 5-HT[2] antagonists are the atypical antipsychotics. It seems that this explains much of their advantages over the older generations, which are just DA antagonists.
Transporter means some mechanism which transports, e.g., serotonin into the cell. After that it's not available anymore to activate receptors. An inhibitor slows down this transportation process, which leaves more serotonin available to play on receptors, right?
I had a recent experience in meditation that made me think of prediction error: I felt like my body was both 'really big' and 'really small' at the same time. I eventually interpreted this as something like 'the error bars on my body have blown up'. I figured my brain must be computing 'where my body is in space', that this computation must have some accompanying 'error' signal, and (maybe from the act of relaxing while staying alert?) my brain eventually said 'huh, I guess this could maybe be anywhere'.
I've noticed, too, that when my kids start fussing aggressively, if I'm really mindful, I can notice some feeling like "this loud noise should not be happening now" - which looks like prediction error, and thus suffering.
As for why money feels good (even though it's a big surprise), I think this is probably because it changes a bunch of other predictions about future needs being met. And, anecdotally, I've found that I sometimes have a negative emotional response when I see Bitcoin going up dramatically in one day, even though I like Bitcoin and hold a position in it. I think this is because it isn't really updating predictions about my future needs being met (I expect these to be met regardless), and instead is just a general 'some unexpected thing happened' type of prediction error.
Re: feeling very large/very small during meditation, I've had this happen several times. It is very odd and I'm not sure what to make of it.
If you take this feeling to its limit, I think you'd simultaneously feel like 'you' were everywhere and nowhere all at once. This sounds like a plausible description of ego loss. So maybe the mechanism of ego loss is that your error bars on 'where you are' grow and grow until they are effectively unbounded.
There are studies of meditation that have shown that it reduces blood flow to a region of the brain which helps locate you in space. As a result, you can reach a point where you cease to be able to position yourself and do feel, "both nowhere and everywhere."
I experienced the feeling prior to having read about the studies. The experience was preceded by multiple occurrences of the feeling of (sort of) leaving my body (as if sitting up from my lying position).
One thing you notice with insight practice is that your mind tends to organise space so that "I" am somewhere behind my eyes, my legs are "below", my arms slightly above there, and the door is "over there". Of course, your legs are 0 inches away from your body, as are your arms. The mind uses the position of the eyes as sight is such a loud sense, I guess. During meditation you have a much more refined sensory input for the mind to model off. So, yes, the door may be some distance from your body, but how far is largely dependent on sight, which you've restricted. Generalise this, and the mind starts to realise all perceptual space is relative to the current input. Feeling large and small at the same time (Alice in Wonderland syndrome, I've heard it referred to as) is the result; there isn't a very clear perception of space, so the size of the space you occupy could be pretty much anything.
Exactly - all of these spatial feelings are the result of computation; they aren't actually being sensed. "I" is a bit of computation that stitches a bunch of distances and angles together.
If we're taking "sight" to mean "computation by brain of some visual data", I'm not sure there's a meaningful distinction between "fuzzed"(?) proprioception during meditation and "not sensing anything". It may be that your brain, having had its normal chatter/monologue stilled, and receiving very little of touch/sight/sound inputs, pays way more attention to proprioception, leading to the odd "floaty" feelings (though this is pure speculation on my part).
Re: ego death, I've assumed it would involve leaving behind concrete thoughts and concepts and moving into a mental realm of pure abstract ideas (i.e. there would be no "I" or subjective consciousness there to experience it); it seems communicating anything about a state of mind beyond concrete thought (using words, concepts, and other concrete thoughts) is more or less impossible. Potentially the large/small feelings are an (early?) step in the process but I've not experienced ego death myself.
I think 'ego death' is just quite a loose term that a lot of people throw around speculatively without a precise meaning. I'm fairly experienced with insight meditation. Firstly, when your mind starts loosening its organisation of space as described, the thing you start to notice with proprioception and thought is that *thoughts tend to have a location*. Verbal thoughts are often fuzzily somewhere between the ears, or perhaps towards the throat, visual thoughts may appear larger or smaller or around the eyes somehow, emotions tend to be bodily, and a lot of it is related to muscular tension. So this expansive feeling of perceptual space not having as defined a size or centre also applies to thoughts. This can create a perception that thoughts and proprioception are things that come and go in this larger space of perception. This is just a perception though and not an ontological truth necessarily. Secondly, once you go far enough with this type of practice the 'volume' of thought and proprioception goes right down and eventually vanishes. However, there is still a perception of, well, being a perceiver, perceiving something, even if that something is a giant space or a nice feeling or a kind of confusing not-quite-nothing-but-not-quite-something. Eventually the mind figures out even this tension and the perceiver vanishes too - until it arises again and you become self aware.
Whether this is what people mean by ego death, I'm not sure, I guess that term could refer to a spectrum of the mind becoming less interested or caught by the usual perceptions of body, thought and emotion. I would be skeptical of anyone who claims to have 'ego-deathed (died?)' in any permanent way. In my experience that's kind of impossible, though you can train the mind to be less attached to the usual train of thought and perceive it mostly as background noise.
This is pretty useful; thank you.
Perhaps this is the origin of Leary's notion in _Exo-Psychology_ that psychedelics develop humans' capacity to navigate and feel comfortable in zero G.
> I had a recent experience in meditation that made me think of prediction error: i felt like my body was both 'really big' and 'really small' at the same time
I often have the same experience, not when meditating, but when suffering from a fever.
Same, I've felt this during a fever.
What sort of meditation were you doing where this happened?
Breath meditation - just keeping my awareness on the breath and returning it when i noticed it had wandered
Fascinating.
I experience very similar feelings while falling asleep. Although, I would say more while letting my mind drift, so perhaps not while following a genuine path to sleep. I begin to feel simultaneously stretched out, as if my limbs were many metres away, and incredibly small, as if the place in my body where I perceive my consciousness to be was shrinking endlessly.
Would you say that your feelings align with a description like that? I know that people do experience their limbs extending as they drift into sleep, but I have found that my experiences do not quite align with the accounts I have heard. Yet, I am not a regular meditator.
> one remaining problem here is why and how some prediction errors get interpreted as rewards
This happened to me a few years ago and it wasn't a pleasant experience. My career took a very sharp, but positive turn and it made me realize that my priors on myself were really off. I ended up in a position that I was shooting for but got there about a decade earlier than I thought I would. It made me realize how low my self-esteem had been, how bad at relationships I was (long story how this fits), and a few other matters.
I have one other experience of a prediction error turning positive in investing. It made me realize that I had just gotten lucky, that I didn't really know what I was doing, and that I should just stick to index funds (though perhaps index funds are no longer the best long-term vehicle if certain economists are to be believed).
Wait what? Why would index funds not be the best long-term vehicle? I need to know, I was just about to invest my first $10,000.
Index funds sort of ride on the price discovery carried out by active traders. The more money is “dumb,” the more mis-pricing there will be for active traders to exploit. As active trading becomes less popular it becomes more lucrative.
That's a self-correcting problem though. I think as an amateur investor with no special knowledge you still shouldn't expect to be able to exploit the market.
Sounds like another one of the 'trust me, I'm an expert' self-delusions so typical in 'investing'. As far as I understand, dart-throwing monkeys still earn more on average than the typical 'trader', 'investor', or 'expert'. 'Successful traders/investors' are mostly just the surviving lucky ones in an ocean of losers. Sure, there are superforecasters these days, but they operate with shorter time frames, scenarios, and probabilities with error margins. They cannot predict the next 20 years of any stock either.
Coming back to 'beating dumb money' - there will be lucky individuals, but way more losers. Professionals like banks often just live off the fees they take and do very poorly in real gains from their investments. A totally different thing is taking advantage of insider knowledge (employees, management, owners, venture capitalists, main investors), or silently buying into something cheap (e.g. penny stocks, crypto, etc.) before talking housewives into putting their money into it and then selling high to them.
I had a very similar experience - I unexpectedly got a very large raise at work (I'm in finance, most of my colleagues at my level had just been poached), and the raise itself was more than I used to make in a year pre-working-in-finance. I was delighted at the time, got home late that night, and then broke down and cried - it was a sufficiently large emotional shock to push me (after a few steps) into the effective altruism camp hard, and I've been there since.
The description of H2A stimulation sounds unsettlingly familiar.
>First of all - predictive coding identifies suffering with prediction error. This conflicts with common sense.
This is something I really don't get. The predictive coding model seems super interesting and has a lot of explanatory power, but it seems weird to suppose that it is a global model that explains everything brain-related. Why identify suffering with prediction error, instead of the more common-sense idea that prediction error contributes to/is a type of suffering?
I think the intuition here is that there should be _some_ concrete way of summarizing what suffering is. If we ask the question "What is it that all suffering has in common?", the only thing I can think of, in common sense terms, is "it doesn't feel good" - but this has to do with qualia, and not anything we could use as part of a predictive model to understand _how_ the world works.
The 'suffering is prediction error' hypothesis meshes with the key insight of buddhism, i.e. that 'suffering comes from attachment/desire/clinging'. It looks like essentially the same argument: you feel bad in situations X,Y,Z because in each of those situations, you have an expectation that differs from reality.
>I think the intuition here is that there should be _some_ concrete way of summarizing what suffering is.
Yes indeed, but I think the answer is really obvious: suffering is the qualia associated with things that decrease your selective value (during evolutionary times, obviously, not right now). And yes, having a messed-up prediction can decrease your selective value quite a bit, but so can perfectly predicted traumas of various natures.
>The 'suffering is prediction error' hypothesis meshes with the key insight of buddhism, i.e. that 'suffering comes from attachment/desire/clinging'.
Yes but, while this is an interesting idea, I think wanting it to be the nature of suffering instead of something that can cause suffering among other things is strange and obviously false.
> I think wanting it to be the nature of suffering instead of something that can cause suffering among other things is strange and obviously false.
I agree. It feels very forced. My current best hypothesis for the nature of suffering rather is wanting to avoid some thing.
If you are trying to achieve a thing, and predict that it goes the way you prefer, but then it turns out that your prediction was wrong, you suffer (because you wanted to avoid that it goes that way). So in a sense your prediction being wrong is linked to your suffering.
But if you don't care either way how a thing goes, predict that it goes a certain way, but then it turns out that your prediction was wrong, you don't suffer (because you didn't want to avoid that it goes that way)
And if you are trying to achieve a thing, predict that it does not go the way you prefer, and it turns out that your prediction was right, you still suffer (because you wanted to avoid that it goes that way, even though you predicted your actions were not sufficient to achieve that.)
This would explain why people (especially anxious people, but not only) tell themselves something won't work out, when they really really want it to work out. They are trying to protect themselves from this type of suffering. They work hard at making the negative prediction, can list all the reasons it probably won't work out, and can become very convinced it won't. They are often quite shocked when it does work out well.
HOWEVER, the strategy really really really doesn't work. When the situation actually doesn't work out as they wanted, not only do they suffer just as much as if they had been predicting/believing it would work out, but that suffering comes on top of all the suffering they did while they were predicting it wouldn't.
(And then sometimes they get the added suffering of 'and this proves that I really suck/my life really sucks/all my negative predictions will come true forever'. Which then creates more suffering despite some future negative predictions being correct.)
So clearly there's something else/more going on here besides/in addition to/on top of prediction error.
> yes indeed but I think the answer is really obvious: suffering is the qualia associated with things that decrease your selective value
I think the idea is to explain away qualia, not double down on it. Suffering as some kind of internal dissonance or conflict resulting from multiply layered prediction errors would do that.
Also, suffering doesn't necessarily reduce your selective value either as your ability to tolerate suffering can be adaptive. I don't think viewing prediction error in a purely first-order sense will explain all of our complexity, but the brain could be applying similar prediction machinery for higher-order thoughts as well, which would lead to suffering from anxieties and other thoughts that are far divorced from physical circumstances.
>I think the idea is to explain away qualia, not double down on it. Suffering as some kind of internal dissonance or conflict resulting from multiply layered prediction errors would do that.
Qualia just means a subjective experience. I do not understand how "suffering as some kind of internal dissonance or conflict resulting from multiply layered prediction errors" explains away qualia any more than "suffering as some kind of thing that usually tends to decrease selective value" does.
>Also, suffering doesn't necessarily reduce your selective value
Yes of course. But the idea is that suffering comes from things (physical pain, emotional pain) that usually, or even just relatively frequently, decreased selective value in the past. Similarly, pleasure (from sex, food, socializing, etc.) comes from things that usually increased our selective value in the past. Suffering/pleasure is a stick/carrot system that our genes use to make us replicate them more efficiently.
> Qualia just means a subjective experience. I do not understand how "Suffering as some kind of internal dissonance or conflict resulting from multiply layered prediction error" explain away qualia any more than "Suffering as some kind of thing that usually tend to decrease selective value"
Because plenty of factors that don't cause suffering also decrease selective value, so it's too broad a definition. Also, factors which cause suffering don't necessarily reduce selective value, so it's also too narrow a definition. Also, suffering is an internal mental state, so your definition just fails to capture enough.
The definition I suggested at least focuses on the internal subjective mental state itself that purportedly characterizes suffering. It's not fully satisfactory, since qualia are too woolly to be so trivially summarized.
>Because plenty of factors that don't cause suffering also decrease selective value,
I don't think so. It seems to me that all the fitness-decreasing things that natural selection could reasonably be expected to work on are in fact associated with suffering. I.e., for natural selection to be able to act, they must be things that were present during evolutionary time and that could be at least partly avoided by actions. What would be your counterexamples?
>Also, suffering is an internal mental state, so your definition just fails to capture enough.
My understanding is that there hasn't been any even vaguely convincing theory explaining the presence of qualia. So mine (I say mine, but I strongly suspect this is the common theory for evolutionary biologists) does not explain qualia indeed, but predictive processing does not either.
>If we ask the question "What is it that all suffering has in common"?,
A few years ago I came to the answer that "suffering is the persistent perturbation of a negative feedback loop" - the qualitative equivalent would be "get me away from this". I think this is roughly consistent with the prediction error hypothesis, except if it's possible to suffer under a completely relaxed prediction error-resolver - if you're no longer internally adjusting to new stimulus data, but still suffer anyway, that would be an instance of suffering without a negative feedback loop (or alternatively, with a negative feedback loop of strength 0).
It made more sense to me when I thought of "prediction error" as another way of saying "deviation from a set point". I hope this is correct because I think the [suffering = prediction error] model isn't useful without some consideration of set points, as well as allowance for different set points to vary in their flexibility. That's the only way I can accept a model that includes suffering due to electrocution and suffering due to a pay cut under the same "prediction error" umbrella.
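The "deviation from a set point, with set points varying in flexibility" framing can be made concrete with a toy calculation. This is purely illustrative: the numbers and the `rigidity` parameter are my own stand-ins, not anything from the underlying model.

```python
# Toy model: suffering = deviation from a set point, scaled by how
# rigid (inflexible) that set point is. Illustrative numbers only.

def suffering(observed, setpoint, rigidity):
    """Bigger deviation or a more rigid set point -> larger 'prediction error'."""
    return abs(observed - setpoint) * rigidity

# A rigid physiological set point vs. a flexible financial expectation:
shock = suffering(observed=0.0, setpoint=1.0, rigidity=100.0)       # electrocution-ish
paycut = suffering(observed=45_000.0, setpoint=50_000.0, rigidity=0.001)
print(shock > paycut)  # True: a tiny deviation from a rigid set point dominates
```

The point is only that a single "prediction error" umbrella can cover both electrocution and a pay cut once set points are allowed to differ in flexibility.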
It is often said in the fire poi/"flow arts" community that practicing on acid - and other psychedelics, but mostly acid - is god's own cheat code. This matches up very nicely with George's theory about the proper use of psychedelics.
Sorry, what is the link to George’s commentary? I think that’s what’s being excerpted in the quotes, but I can’t see the link to the original.
https://cerebralab.com/Stress_and_Serotonin
Sorry, it's in there now.
The only pedantic completion I'd have to this is that I don't think all "introspection" done on psychedelics results in weirdness.
For example, you could introspect on vision and figure out: when I see a repeating pattern, I don't actually see the "true" number of objects; instead, my mind kind of infers it as a repeating pattern and translates it into however many objects it wants. So I end up perceiving the number of stairs based on my rate of exhaustion, or more/fewer posts in a fence depending on how hard I focus on it.
But it's different if you introspect on concepts where there's no "reasonable" explanation to be found or no particular insight left to gain.
For example death, or meaning, or consciousness itself, or "true love" or whatever. If you introspect specifically because those concepts are bothering you, then it gets weird. This is not to say doing so is always bad. I used to be afraid of death (like, existential dread lasting half an hour every few days or weeks, for 10+ years), and after 500mcg and 14 hours that basically stopped... maybe I got stuck with a maladaptive mental model as a tradeoff, but then I'm fine with the tradeoff from a negative utilitarian perspective. I know many other people who walked through these kinds of issues on psychedelics. But there's certainly a point where it gets weird.
In other words, I think there are facets of the mind that it can introspect with a small chance of it going bonkers (e.g. sight, sound), and facets where the risk is high (especially given repeated trips, especially given high dosages and or predispositions for certain types of psychosis).
As an aside, I find it interesting that quitting alcohol abuse has a high success rate by either joining a strongly religious community (e.g. AA) or taking psychedelics, which would fit the idea of "build a weird mental model to allow your frontal cortex to intervene in a very well-formed set of patterns". But then again, those are small n studies, and psychedelics seem to work for everything in small n studies in ways that really contradict the behaviours of users (e.g. 80% efficacy in terms of smoking cessation)
as an aside, the name of the blog (which in hindsight I regret) doesn't have a double `L`
Bill W, founder of AA, apparently tripped on belladonna at Towns Hospital in NY in 1934 for his own mystical insight that started his sobriety, and was *very* interested in the LSD research; he had several sessions in LA with Sidney Cohen in 1956.
I think that psychiatry as a discipline sometimes overemphasises the role of specific receptor subtypes and underemphasises the role of where in the highly specialised structure of the brain those receptors are, in terms of the type of cognition they are involved in.
In terms of reward and reward prediction error, I also feel that there is a wealth of research on dopamine and the dopaminergic system being more involved in this.
If we recall that all of the brain's dopamine is produced in the substantia nigra and ventral tegmental area, and then projects into the limbic system via the mesolimbic pathway, and into the rest of the cortex (frontal cortex first) via the mesocortical pathway, then we can consider the effects of dopamine on reward.
There is a wealth of data on dopaminergic stimulation of the nucleus accumbens (limbic) as being involved in reward and reinforcement learning. There is also evidence of the orbitofrontal cortex's connections to the limbic system being critically involved in reward, but also in reward anticipation and reward prediction errors. Rolls writes extensively on this, for example. DeYoung's studies on brain volume and personality find that OFC volume and extraversion are positively correlated, which he interprets as fundamentally reward-seeking.
I don’t know the exact distribution of DA vs 5HT receptors in these regions (and I’m not sure who does), but a common-sense understanding of the emotional impact of serotonergics vs dopaminergics would be that dopaminergics produce euphoria (reward) as well as highly motivating people (reward expectation) to move, socialise, or even do otherwise unrewarding tasks such as cleaning one’s kitchen or revising for an exam when one has ADHD.
I’d also speculate that dopamine is actually the key neurotransmitter signalling pleasant surprise (a positive reward prediction error) and unpleasant shock (a negative reward prediction error). None of these emotions are produced by people taking serotonergics such as SSRIs.
In contrast, I think that both SSRIs primarily stimulating 5HT1A and psychedelics primarily stimulating 5HT2A are actually involved more in *changing* the weighting of the neuronal connections which were making those prediction errors, through plasticity, which psychedelics in particular have been shown to increase.
Studies on gambling back this idea up too. In rat brains, an unexpected reward is accompanied by a DA release, whilst a loss is associated with a 5HT release. The interpretation here was that DA affected plasticity by making the neurons that fire together wire together, making the animal feel like “I really want to do that movement or behaviour again!” and thus explaining DA’s role in habit formation and addiction, whereas 5HT is involved in increasing plasticity by reducing signal weights when a prediction error is detected. Thus, the serotonergic system is involved in the recognition at some level that “this prediction did not work, I need to change my behaviour and do something different”, which is massively enhanced if a person is taking SSRIs or psychedelics.
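The reward-prediction-error update being described can be sketched as a Rescorla-Wagner-style learning rule. This is a minimal illustration only: `alpha` is a generic learning rate standing in for plasticity, and nothing in the code distinguishes DA from 5HT mechanistically.

```python
# Minimal reward-prediction-error update (Rescorla-Wagner / TD-style sketch).

def update(value, reward, alpha=0.5):
    """Move the predicted value toward the observed reward.
    A positive error ~ 'pleasant surprise'; negative ~ 'unpleasant shock'."""
    error = reward - value          # the reward prediction error
    return value + alpha * error    # weight change proportional to the error

v = 0.0
for r in [1.0, 1.0, 1.0]:          # repeated, initially unexpected rewards
    v = update(v, r)
print(round(v, 3))                  # 0.875 -- the error shrinks as the prediction improves
```

The key property is that learning is driven by the *error*, not the reward itself: once the reward is fully predicted, the weights stop changing, which is the standard account of why surprise matters for habit formation.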
Therefore, I’d be wary of attributing prediction error to 5HT as a cause; it is more an effect of prediction error, producing changes in signal weighting via plasticity to change behaviour.
If you’ve ever used CBT to treat depression or anxiety, you know how people make negative prediction errors, how helping people to recognise this and change their behaviour is the key to treatment, and how an SSRI can really assist this process. SSRIs don’t produce reward (euphoria), reward prediction (motivation), or reward prediction error (surprise/shock), which I think are all dopaminergic. SSRIs help people change the reward prediction errors they are making, that terrible things will happen, and thus help them to change their thinking and behaviour.
I wrote this on my phone whilst walking my dog, hence no references.
> If you get $1 million because you're an ordinary middle-class person and a crypto billionaire semi-randomly decides to give you $1 million one day, you will be very happy.
Do we actually know being "very happy" would be the typical instant reaction? I once won $10,000, about 25% of my income at the time, in some promotional lottery I hadn't realized I had been entered in. I was going through a miserable time in life and my reaction was to be angry and annoyed, because it felt like a cruel joke. Only after a few weeks, when I used some of the money to buy a new guitar, did I feel happy to have it.
Maybe that was like stepping in a cold shower and suffering at first but then getting used to it? Or maybe because I was depressed, I didn't have a normal reaction?
I think it mattered that I didn't even know I was entered in the lottery. If someone buys a lottery ticket, perhaps a part of them predicts they will win. Otherwise, why did they buy the ticket?
I think it's very safe to say that "very happy" would be the typical instant reaction. I don't know any other person who has been irritated by receiving a large amount of money.
As another data point, I would not be happy if I checked my bank balance and discovered it had increased by $1million. I'd be a touch sick with worry over whether I'd be blamed for the mix-up, a bit annoyed at the incompetence of everyone involved, somewhat tense over having to call the bank and sort this mess out, etc.
If everything was in order and a plausible reason was adduced to account for the money being there, and it being legitimately mine, *then* I'd be mighty pleased, but until then I'd just be upset.
Fits with Buddhist ideas; you don't suffer because you are in pain, you suffer because you don't think of pain as something that should be happening to you.
Surely what makes a cold shower decrease in pain over time is not an adjustment in your predictions, but a physical adjustment in the contrast in skin temperatures.
> one remaining problem here is why and how some prediction errors get interpreted as rewards
Objectively unlikely events could easily confirm our model of the world rather than violating it, if the model was suitably wrong to begin with.
Plausibly, our money-centric culture—and in particular our parasocial relationships with affluent pseudo-peers—causes our brains to maintain a prediction amounting to "I am wealthy, or soon will be, and any indication to the contrary is some terrible mistake." The conflict of this prediction with reality creates suffering, but we get stuck unable to update away from it, because our media habits keep giving us more fake evidence that it's true (which is maybe not really our fault since whole industries are dedicated to ensuring that very thing). A sudden windfall, then, won't feel surprising; it will feel like the natural order restored.
Now I'm no historian, but I'd guess that if you're a premodern farmer and you have an exceptional crop, your model already represented that as something that could and indeed *ought to* happen; although it was surprising in some statistical sense, you aren't surprised—the world feels more correct rather than less. But if the king rides up and hands you a fistful of glittering gold, aren't you more likely to be suspicious and scared than happy? Surely this is some mistake or trick or trap; surely the other villagers will turn on you, or bandits will kill you for it, or something—anything—will punish you, because suddenly you're outside the natural order.
I think this is still true for a lot of working-class (use your vocabulary of choice here) people - my reaction to anything (especially financially) good happening used to be to squint and ask 'what's the catch'.
> At its limit, this theory says that all action takes place through the creation and resolution of prediction errors - I stand up by "predicting" on a neurological level that I will stand up, and then my motor cortex tries to resolve the "error" by making me actually stand.
Thinking out loud: there are several programming languages whose core primitive operation is some version of this. Prolog being a conventional example.
In Prolog and other languages incorporating such features, 'programming' takes the form of, first, writing 'facts', and then second, writing underspecified facts. The program executes by "resolving the error" by figuring out which values need to be what to make your underspecified facts true. This is used a lot for logic programming (it's in the name, after all).
If you described this programming language as "resolving pattern-match errors" to execute, that would be technically correct. Perhaps "prediction error" functions similarly in cognition: at some level what is technically happening is prediction error being resolved, but in practice that is an obtuse way of framing it.
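As a sketch of "executing by resolving underspecified facts", here is a toy Python version of the idea. The names (`FACTS`, `resolve`, the `?`-prefix convention for variables) are invented for illustration; real Prolog adds rules, variable-to-variable unification, and backtracking.

```python
# A toy "resolve the underspecified fact" engine in the spirit of Prolog.

FACTS = [
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
]

def resolve(query):
    """Yield bindings for variables (strings starting with '?') that make
    the query match a stored fact -- i.e. 'resolve the mismatch error'."""
    for fact in FACTS:
        if len(fact) != len(query):
            continue
        bindings = {}
        for f, q in zip(fact, query):
            if q.startswith("?"):               # a variable: try to bind it
                if bindings.get(q, f) != f:     # inconsistent rebinding
                    break
                bindings[q] = f
            elif q != f:                         # constants must match exactly
                break
        else:
            yield bindings

# "Who is a parent of carol?" -- execution fills in the unknown.
print(list(resolve(("parent", "?x", "carol"))))  # [{'?x': 'bob'}]
```

The "program" here is just the underspecified query; running it means finding the values that eliminate the mismatch between query and facts, which is the pattern-match-error framing made literal.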
Off topic: you comment on 2 subjects I find interesting. 1. how do our brains choose goals - crude answer we feel tension when how we want the world to be is different from how it is; 2. what do drugs do to our brains. It would be awesome if you wrote a piece on how the 2 overlap.
So we have: A. how the world is; B. how I perceive the world; C. what I want the world to be; D. how much the disparity upsets me; E. what I am doing about it. We can achieve our goals by moving the world closer to how we want it to be, by looking at things in a different way, by changing what we want, or by stopping caring about the difference. What is the attraction of different drugs? Which of these four ways of reducing that tension do they address? E.g. heroin - at a guess, B and D; cocaine - at a guess, B and E. But I know nothing in these fields; what does the attraction of different drugs tell us about the model of motivation and consciousness?
My nutshell understanding of Buddhist enlightenment is reaching a point where you understand the world is exactly the way it is supposed to be. End of tension.
> one remaining problem here is why and how some prediction errors get interpreted as rewards... This suggests there's still something we don't understand about prediction error and suffering
I think if you were describing a theory you were less attracted to, then instead of saying that this is one remaining thing we don't understand, then you'd say that this is a class of counterexamples that totally blows up the theory.
You've got three examples of the "prediction errors -> suffering", of which one fits (losing money), one doesn't fit (gaining money) and one only fits if you torture the definitions of words until they submit (feeling pain). Most of the other examples of prediction errors that I can think of don't seem to cause suffering; e.g. when watching a movie it's not at all unpleasant when the camera cuts to another angle.
I think there's a lot of good in predictive processing, but this particular "prediction errors -> suffering" aspect seems a bit of a dead end to me.
Yup, this just seems wrong on its face.
> then you'd say that this is a class of counterexamples that totally blows up the theory.
I don't see how that follows. Mathematics contains all sorts of conjectures that we believe to be true but haven't yet been proven to be true. Historically, most such conjectures somehow turned out to be true despite the fact that we couldn't precisely connect the dots at the time.
It could easily be true that predictive coding has simple first-order mapping to some types of suffering, but requires more complex higher-order mappings to other types of suffering that aren't immediately apparent.
> Most of the other examples of prediction errors that I can think of don't seem to cause suffering; e.g. when watching a movie it's not at all unpleasant when the camera cuts to another angle.
Or maybe watching a movie primes you to expect such cuts. It also primes you to expect decent acting, but if the acting is horrible I certainly suffer enough to turn the movie off.
Consider whether an adult from a hundred years ago who's used to watching Charlie Chaplin would suffer during some modern movies with super-rapid cuts, shaky cams and blasting audio tracks.
I find the idea that we shouldn't use psychedelics for introspection a bit troubling: I've been studying up on psilocybin-based psychotherapy lately, and it seems to get very promising results in treating chronic depression and anxiety. Yet the entire model of therapy is based on internal introspection: you give people an extremely high dosage of psilocybin, and then you put a blindfold on them and play gentle music so they aren't distracted by the outside world and can focus on internal introspection. Could this be a dangerous therapy model to pursue? The studies done by Dr. Roland Griffiths seem to show good outcomes, but I'm not sure how well you can measure "weirdness" after treatment.
> the 10% of Americans who use psychedelics mostly don't end out as weird as they did
Huh. All the people I know who are really into psychedelics also have very eclectic beliefs about religion or humanity or whatever. Maybe you're right about dosing (note the "really into", not "10% of the people I know"). Or maybe we've all learned to be ok with some types of cognitive weirdness, to the point that it's acceptable to be a yogi barista inventing your own religion and nobody really makes a big deal out of it anymore.
New to the neighborhood question: Has Michael Pollan’s book been hashed out here?
Frequently discussed and/or referenced, but to my knowledge not focused on as the primary topic.
2c on Active Inference Framework and reward/suffering: (context: I have developed an AIF-based model that's being published in Entropy soon, so I can speak to the math/theory side of it - less so to the psych/neurology angle)
Friston is generally cavalier with his writing and this seems to me like an example of that. If you're abstractly just interpreting animals as simple self-regulating systems (trying to stand up, or stay fed, etc) whose only ambition is to maintain homeostasis, the prediction error / suffering metaphor works great, in the following sense: the reward function is endogenous, so any behavior is a self-fulfilling prophecy. But if you're considering a sophisticated agent that's composed of multiple subsystems, capable of foresight/delaying reward, attaching value to symbols, etc, then when isolating an individual loop/subsystem, you must accept that in general the reward function will be exogenous (provided by whatever the relevant desire-generating subsystem is), and the discrepancy between desired and observed/predicted future can be arbitrarily large. This reduces the philosophical issue to an empirical or modeling issue: given a system composed of multiple interacting AIF subsystems, what is the specification of desire that produces the "correct" behavior in each subsystem?
Note also that, while this decomposition is totally compatible with the highest-level system being (approximately) only concerned with its homeostasis, it does not require it, and indeed, it is a fair question whether any real-life animals fit that bill. Indeed, Gaia theorists may well argue that only the biosphere as a whole can count as an endogenously motivated system, with each individual animal or biome being at least partially exogenously motivated...
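For the simple self-regulating case, the "endogenous reward as self-fulfilling prophecy" point can be shown in a few lines. This is a toy error-cancelling control loop under my own made-up parameters, not the actual free-energy formalism.

```python
# Toy homeostatic agent: it "predicts" its preferred state (the setpoint)
# and acts to cancel the discrepancy, so the prediction makes itself true.

def step(state, setpoint, gain=0.3):
    error = setpoint - state         # prediction error vs. desired state
    return state + gain * error, abs(error)

state, setpoint = 10.0, 37.0         # e.g. thermoregulation toward 37 C
errors = []
for _ in range(20):
    state, e = step(state, setpoint)
    errors.append(e)
print(errors[0] > errors[-1])        # True: the "suffering" signal decays toward zero
```

With the setpoint generated inside the loop, error (read: suffering) reliably decays. Supply the setpoint exogenously from another subsystem, as in the decomposition above, and nothing guarantees the discrepancy is ever resolvable, which is where the simple metaphor stops working.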
For those who would like to read George's review: https://cerebralab.com/Stress_and_Serotonin
Did MK-ULTRA yield any useful information? It might be interesting to know how people react to LSD without knowing they are taking it. I realize this program was insanely evil but it still may have had interesting results.
"This is good insofar as you're suffering less, but bad insofar as you've adjusted to stop caring about a bad thing or thinking of it as something that needs solving [...]."
I'd expect lowering prediction-error==suffering to zero is a complete cure for depression. Think ugh-fields. The less horrible they feel, the easier they are to address.
If Terrible Issue X doesn't compel you to action with the Terrible Feelings gone, perhaps X was never worth addressing in the first place.
I know you’ve been writing about predictive coding for years but some ideas stick out to me now.
Driving is pleasurable (and being a passenger less so) because I get to make and fulfill a lot of correct predictions.
The musicianship in sound engineering is a process of perceiving and resolving prediction errors. You hear the gap between what the mix is and what it “wants” to be, then you move your fingers on the board to bring it in line.
Software design shares this “is vs. wants to be” structure although the feedback loop is much slower. “Wants to be” is the same kind of intuitive, black box oracle. This confuses and frustrates those who do not have the same kind of oracle inside themselves and yearn for some kind of logical, rules-based system to substitute for it.
Would you say you're a generally well-coordinated person? Do you have a tendency to daydream?
I've got a pretty unhealthy relationship with driving. I'm generally an extremely chill person, but despite getting my driving license and never having any objectively negative experiences like a crash, driving terrifies me. That reaction is so at odds with my usual temperament that I'm trying to figure out why.
I too absolutely love driving. I never get tired of it.
For another data point: I'd say I'm not well-coordinated, and I daydream more than anyone else I know.
Even in rush hour standstill traffic? Are you commuting a lot by car? Do you have something to occupy yourself with while commuting? Do you like all parts of driving? (Finding a parking spot when they are few, idiots trying to overtake you when there's no space, etc?)
Sure, math is fun or if you’re a Brit I guess maths are fun, but I don’t think the problems you are concerned with here are going to yield to some nth order predicate calculus.
There is a reason they call it meta (beyond) physics. William James knew this. He used nitrous oxide to get a better ‘understanding’ of Hegel.
I think if you insist on sticking to the strictly rational you aren’t going to get where you want to go.
I think the explanation for the last one is very simple: our brains have a set of limiting priors because, evolutionarily, they helped in advancing human society. It was just not good for society to have lots of individuals stepping outside of the beaten path.
Psychedelics widen your thinking by reducing these priors. This is very interesting (and in many cases helpful) to the individual, but historically not to society because peasants should do their field work and cleaners should clean rooms instead of thinking about the meaning of life.
So, the weird psychedelicists are weird from a position of usefulness to society, but maybe not from their own position. And this weirdness doesn't happen as much anymore because society has - at least in some progressive areas - adapted to psychedelics. We now know how to steer psychedelicists so they stay useful to society. Just look at all the post-experience integration trainings and sitter guidance that are out there.
My understanding is that everyone who understands evolution considers group selection practically impossible, so things being "good for society" has no biological explanatory value.
I tend to disagree. Being part of society is essential for humans. If you didn't fit in, you had much less chance to survive. At the very least, you would have much less ability to reproduce, because fewer people would marry "that weirdo".
You appear to me to be retreating from the bailey of selection of traits that are bad for the individual but good for society to the motte of selection of traits that are good for both the individual and for society.
I think that group selection might have played some role in the past of human evolution due to the factor of warfare between small groups, so selection for group cohesion was probably a thing. However, you don't need to invoke it at all in order to see that the "widening of priors", even temporary, could carry a pretty heavy risk in the rather perilous ancestral environments. Eat something you're not supposed to, or hunt somewhere dangerous, or drink from a tainted source, and sans modern medicine you're dead meat. And that's even without considerations like overstepping social boundaries, which could turn your own group against you.
Shouldn't any such group-selected traits have disappeared as soon as groups started interacting without complete genocide?
Not necessarily I feel, in that case those traits might still be beneficial on an individual level for gaining status in larger groups. Plus you could have smaller coalitions with differential reproductive success within larger groups - like attracts like I guess?
But if they are beneficial on an individual level, there is no need to posit group selection, is there?
I feel like if it were purely individual level selection, you'd see a slightly different set of traits of some sorts getting elevated. Basically, I can't bring myself to definitively exclude group selection and it even might play a small role in more individualistic societies since material success is frequently determined by the faithfulness of your allies or harmony within a clannish group, and material success has an effect on reproductive success.
Quoting George: "I found it curious that people who take psychedelics for introspection usually end up as religious cranks or burnouts. While people that take psychedelics because "they are fun" don't seem to experience many negative side effects."
- - -
For certain definitions of "introspection," this sounds to me like "People who expect a substance to remedy their existential malaise end up disappointed, but generally satisfied people who use a substance to have a bit more fun usually have fun."
This does not seem like it should be a surprise.
I have a chronic pain condition so I get a lot of opportunity to analyze the experience of pain. Most kinds of pain can in fact be zeroed out completely with sustained application of attention. The trick is, unintuitively, to focus on the pain as precisely as possible. The pain sensation will dissolve and disaggregate into a bunch of distinct sensory signals, none of which carries a "suffering" valence. I think what's happening is you're devoting your whole consciousness to "predicting" what the next moment of pain-sensation will be like, and having done that, the pain does indeed become incorporated into your overall prediction, and thus stops being pain. In keeping with this model, the trick only works if the sort of pain you're attending to is a static (or regularly pulsing) kind.
As soon as your attention wavers, the pain comes back, because you stop actively predicting it. I think pain is a relatively unique sort of signal that resists being ignored, for obvious reasons.
Sorry you are living in a less than optimal state. Glad you have something of a handle on it. I’ve used the same technique and can generally decline opioids even for broken bones. There is pain, yes; suffering, not so much. You are right about it only working on static or regular pain, too. I experienced some random spasms in one of my glutes a couple of years ago. Using the infuriatingly useless 1-to-10 scale, I suppose they would come in at a 2. Still, the unpredictability of the cramps caused enough distress to see an MD for a muscle relaxer scrip. Good luck with your technique.
I think this extends even to mental kinds of suffering like anxiety, *if* you're able to not attach some important meaning to the sensation (e.g. a threat warning, or a fence against moral corruption).
"In keeping with this model, the trick only works if the sort of pain you're attending to is a static (or regularly pulsing) kind."
In my experience it's possible to anticipate even "unlikely" painful events, taking away their suffering valence if they occur. Often takes the shape of "being prepared for the worst case", and has no epistemic cost.
Sorry that you have to live with this condition, and glad that you seem to have found a good way of managing it. Thank you for the very interesting comment. It is indeed very unintuitive that focusing on the pain makes it disappear!
Hmmmm, seems like high 5-HT2A signaling (optimism, plasticity, sensitivity to stimuli, ease of learning) may also explain why some people get so STUCK in high-gain, high-cost/loss situations such as living with an (originally amazing and still occasionally great) abusive partner. They keep trying to figure it out, make it work, and believe at some deep molecular level that this can be accomplished. They may have successfully applied these strategies to many, many other situations in their lives. They may also assume that their partner sees the world this way too, and is also working hard to improve things (and the abuser may have figured out how to encourage this assumption).
I've seen Michael Pollan's book referenced here; he mentions research indicating down-regulation of the default mode network during psychedelics - does this fit with the induction of a 'bias towards thinking of problems as solvable'? (Maybe.)
Any discussion of the effects of 5-HT[2A] agonist psychedelics has to include the (not-so-recently-discovered) fact that different GPCR agonists can produce different effects. It's not a simple one-dimensional "more or less"-scale - it's multi-dimensional. See for example https://juniorprof.wordpress.com/2010/09/21/agonist-directed-trafficking-of-receptor-stimulus-pharm-551a-berg-et-al-1998/
The classic paper on the subject (Berg et al. (1998) in Molecular Pharmacology) is one of my personal all-time favourites within the field, with very nice graphs illustrating the difference between various agonists.
PS. And any discussion involving MDMA of course has to include the fact that other 5-HT releasers (such as fenfluramine) don't produce similar effects. Fenfluramine apparently has some sort of recreational (or at least gets-you-buzzed) effects in higher doses, but nothing comparable to the fairly unique entactogenic effects of MDMA. Personally I'm actually wondering whether part of the reason might be that MDMA is a tryptophan hydroxylase inhibitor, so that it's somehow related to partial depletion of 5-HT rather than indiscriminate excess.
And of course, MDMA is a 5-HT[2A] agonist as well, something which subjectively becomes pretty obvious in higher doses.
I've taken LSD and shrooms a couple dozen times over the past three or four years, "just for fun" — and my beliefs are essentially the same as they were before I started this (still a boring old stick-in-the-mud materialist who insists on skepticism re: Illuminati and Hidden Masters and psychic powers and so forth).
On the other hand, out of two friends who took psychedelics with the intention of doing some "deeper" introspection than I engaged in, one is just the same — still grounded and reasonable — and the other went totally whacky. Not sure this supports the hypothesis in the post... but n = 3, of course.
My two principal LSD experiences as a teenager were playing the first Mass Effect game and watching the Sci-Fi Channel's Dune and Children of Dune TV miniseries in a weekend acid binge around 2007.
Anecdotal effects include a decade-long sci-fi space-optimism infusion, an intrinsic faith in the Padishah Emperor, and the vague knowledge that the Butlerian Jihad against the AIs leads to space Jesus thousands of years later.
Not sure if I understand everything correctly, but I think @scott says that antidepressants (think SSRIs) push button one, which lets your brain get happier with the world as it is, while psychedelics push button two, which gets you motivated to solve the world's problems right now. I can only speak to my own experience of taking an SSRI for the past 9 months: it seems to push both buttons in my brain. I'm way less concerned with everything, and it has gotten much easier to fall asleep after 45+ years of hard evenings spent in endless circles of thought. And I'm much less reluctant to start anything; I have a much higher drive and motivation to take the first step of any small- to large-scale change I find necessary. My impression is that I'm much less stuck in toxic thinking like "every tiny piece is somehow connected with every other piece, so there's no way to find out where to begin". In general, I've become a much more pragmatic person, with higher motivation to get things done and less distraction from the endless number of tiny related things.
Hope this makes sense to you.
I have always had this narrative that my cold tolerance shot up inexplicably after starting SNRIs, so that's my selective anecdata related to these theories.
I think you sound pretty naive about the subjective experiences and motivations of people utilizing psychedelics.
"First of all - predictive coding identifies suffering with prediction error. This conflicts with common sense."
Predictive coding does not actually make this claim; it's solely an operational interpretation of the Bayesian brain hypothesis. Though some researchers, like Thomas Metzinger, have suggested that emotional affect is related to the rate at which uncertainty is resolved within the context of the Bayesian brain hypothesis.
Nonetheless, if it is true that prediction error is related to affect, it's important to understand that prediction error in this context exists at multiple levels of the brain's hierarchy. If we have a discussion about you stabbing my arm and you subsequently do so, I will have successfully squashed prediction error at the low levels of the visual perception hierarchy. But the higher levels, extending out to those that might exist in the association cortex, have a probabilistic prior that includes something like "my self-model dislikes bodily harm" and, in the behavioural (active inference) context, "my self-model is not being harmed". This is a prior that has probably developed on an evolutionary timescale. Thus, by allowing you to stab my arm, I would actually be behaving in a way that has dramatically higher expected prediction error (expected free energy).
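A minimal numerical caricature of that multi-level point (my own toy example, not a model from the predictive processing literature): error can be zero at one level of the hierarchy while remaining large at another.

```python
# Toy caricature of hierarchical prediction error (my own example, not
# a model from the literature). Error can be zero at one level of the
# hierarchy while remaining large at another.

def prediction_error(predicted, observed):
    return abs(predicted - observed)

# Low level (visual perception): we discussed the stabbing beforehand,
# so the stab is predicted; observation matches prediction.
low_level_error = prediction_error(predicted=1.0, observed=1.0)

# High level (self-model prior, plausibly evolutionary): "my body is
# not being harmed" predicts no harm, but harm is observed.
high_level_error = prediction_error(predicted=0.0, observed=1.0)

# Total error across the hierarchy stays high despite the accurate
# low-level prediction.
total_error = low_level_error + high_level_error
```

So "I predicted the stab, therefore no prediction error" only holds if you look at a single level; summed over the hierarchy, the violated high-level prior dominates.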
I don’t know anyone who takes LSD at parties regularly. It is too intense and lasts too long for folks who just want to unwind for an evening. Festivals maybe. But most people I know who regularly take it are more like growth mindset people who just want to experience new things and see what else the brain can do. Not people trying to fix themselves.
I've taken a number of psychedelics specifically for introspection and I can't say they modified my beliefs much, partly probably because I never felt belief that isn't backed up by concrete action can really do anything. But they definitely helped me out to figure out what I needed to do to change my life for the better. I'd say the "crank or burnout" transition can only happen if you have some sorts of prior mystic view of the world that elevates religious or supernatural "secret" types of knowledge over the observed material reality, which might've been more common in the last century, pre-Internet, around when these substances first surfaced.
And as for how acid and transhumanism come together...I didn't have to take those drugs to obtain transhumanist beliefs - rather, I had transhumanist beliefs in the first place that then led me to explore chemically altered states of consciousness just in case they could give rise to any interesting insights, which I'd say they did in a mostly practical way - made me change some of my hobbies, study certain topics and switch occupations for the better. My overall outlook and preferences stayed exactly the same, yet I started to feel like I could be doing more interesting/productive things with my time than I normally used to.
I guess my advice to would-be users is simply to avoid accepting ideas you get on trips in the long-term unless they also happen to entirely make sense when fully sober. You're just vividly exploring alternate possibilities that may or may not make sense in the end. :)
My initial inchoate thought (which I haven’t considered further; just finished work, tired): I wonder how this fits with psychotic depression, BPD, or rapid-cycling BPAD?
I find it frustrating when authors don't say whether they are talking about presynaptic or postsynaptic 5-HT1A receptors, as the presynaptic receptors are known to have multiple functional activities aside from their "primary" role of negative feedback.
When publishing work related to 5-HT2A, speculations should also consider
1. the role of 5-HT2C receptors, and perhaps the 5-HT(2A)-(2C) heterodimer
2. how the ligands used in animal studies such as DOI have a much greater affinity for heterodimeric receptors like 5-HT(2A)-mGluR(2), -D(2), -CB(1), etc and often should not be conflated with native serotonergic activity.