
> They're learning the whole map of value, like a topographic map.

This means that if value is objective, if goodness is the basic ground of being, and physics comes out of goodness as “the best rules based system by which conscious organisms can exist”, then all our brains are doing is trying to model the most accurate copy of base reality. And evolution is pointing in that same direction - selecting for organisms which best reflect the base reality of value.


Care to build on that last point? I don't see why that would necessarily be the case, but perhaps I'm misinterpreting the statement.


That one is more of a stretch / more hand-wavey, but basically it goes like this: if what evolution is selecting for is organisms that are more fit (better at self-propagating, etc.), then, in the limit, after infinite amounts of evolution over constantly shifting environments, I think what you'd end up selecting for is organisms that had the most accurate possible maps of the whole space of possibilities.


Wouldn't that imply a kind of evolutionary 'memory'? I don't see the mechanism by which evolved organisms within a given niche would in a material sense have an advantage in previous niches (e.g. it's not clear to me that whale-like creatures could evolve into cow-like creatures or could more effectively propagate within pastoral environments, even if ancestral whales were cow-like).

Feb 12, 2022·edited Feb 12, 2022

You're right that this logic doesn't work for cows <-> whales, for example. But I think this 'memory' does exist for humans, in part because we live in so many niches, and in part because of our learning capabilities.

In other words, it's like most species are maps of some tiny feature of the global objective value landscape - whales are maps of the niche they inhabit, cows are maps of the niche they inhabit, and those niches are local value maxima.

For example, there are disaster situations that only humans could survive, by getting off the planet and going elsewhere. We are also _incredibly_ good at adapting to different niches, unlike almost any other species. And we do this using 'animal memories' - i.e. before advanced manufacturing technology, people living in cold places would use the skins of other animals that had evolved specifically for the cold, to keep themselves warm and dry.

So you can have this image of god as the ultimate objective value landscape, and then humans as these tiny, fuzzy, noisy copies of god that come into being briefly, try to reflect the whole landscape, and then fade away. Over the history of humanity, but especially the last few hundred years, we've gotten to be ever more reflective of that total value landscape. This also works for humanity as a superorganism - the total values of _humanity_ are more accurate reflections than any one human's. It's like existence is this recursive matryoshka doll, a noisy, almost-fractal structure containing many flawed copies of itself, but iterating towards more copies, which are more precise and accurate replicas of the whole.

If you don't believe objective value exists, _some_ of this image still works, but you can only talk about the human superorganism having _some_ value system which evolves and changes over time, and individual humans kind of having some copies of it - but you lose the notion of convergence to an actual ground truth, and still have to answer 'why these laws of physics?'


No, goodness comes out of physics. If I have a map of England, and England objectively physically exists, it doesn't follow that England is the basic ground of being and that physics comes out of England.


Your “England” example is asserting the conclusion, since England is a subset of physical reality. In the same way, the laws of physics are a subset of the possible mathematical structures. Why these laws of physics?

You seem to be just restating an untestable assertion: that everything comes out of physics. Why do you believe this? Is there any evidence that would convince you it’s not true?


I misread your comment. I thought you were arguing "This means that if value is objective, THEN goodness is the basic ground of being". But you didn't say "then", you said "if".

So you were asserting it, not arguing for it as a conclusion. You did not explain why you believe this, or how you would test it.

And yes, of course, if magic were real, I could be convinced that not everything comes out of physics. But I think the evidence weighs against the existence of magic.

https://brianpansky.fandom.com/wiki/Is_Physicalism_True%3F


Is it true that there is no highest prime number? If so, can you provide a physical explanation for that fact?


I think we've gone off-topic, so I've moved my reply to the off-topic thread here:

https://astralcodexten.substack.com/p/open-thread-210/comment/5035802


There is no highest prime number. I can give a mathematical proof of this. I'm not sure what you're looking for as a physical explanation.

I will prove that, for every prime number, there is another prime number higher than it.

Let's start with a prime number N. Now construct the number N!+1. (Recall that 5! = 5x4x3x2x1.) N! is divisible by every number from 2 to N. If we divide N!+1 by any of those numbers, the remainder will be 1, so N!+1 is not divisible by N or by any smaller number except 1. This means that either N!+1 is prime or all of its prime factors are larger than N. Either way, there is a prime number larger than N.

Let's do some examples.

2 is prime. 2!+1 = 3 is a prime number larger than 2.

3 is prime. 3!+1 = 7 is a prime number larger than 3.

5 is prime. 5!+1 = 121 = 11 x 11 is not prime, but its prime factors are all larger than 5.

7 is prime. 7!+1 = 5,041 = 71 x 71 is not prime, but its prime factors are all larger than 7.

11 is prime. 11!+1 = 39,916,801 is a prime number larger than 11.

13 is prime. 13!+1 = 6,227,020,801 = 83 x 75,024,347 is not prime, but its prime factors are all larger than 13.

See also: https://xkcd.com/622/
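To make the arithmetic easy to check, here is a small script (my own illustration, not part of the original comment) that verifies the pattern for the primes listed above: the smallest prime factor of N!+1 always turns out to be larger than N.

```python
from math import factorial

def smallest_prime_factor(m: int) -> int:
    """Return the smallest prime factor of m (m itself if m is prime)."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m

for n in [2, 3, 5, 7, 11, 13]:
    candidate = factorial(n) + 1
    p = smallest_prime_factor(candidate)
    print(f"{n}! + 1 = {candidate}, smallest prime factor = {p} (larger than {n}: {p > n})")
```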


What kind of "goodness" are you then referring to, which is supposed to be the "ground of being"? Would this definition not imply that "badness" is the "ground of not-being"?

Then "not-being" is by definition not something to worry about. If there's nothing "bad" to worry about, I cannot see the justification for calling "the basic ground" possesing a property called "goodness".

Whereas with values, I might call an act in the spirit of "friendship" (a component of "goodness") being imbued with a high positive value. An act in the spirit of "disloyalty" (a component of "badness") may be imbued with a high negative value.

This concept of value allows for objects to have negative values, as well.

Your concept of objective values does not allow for things to have negative value, it appears to me.

But why, then, do you use the term objective "value"?


Maybe 'value' is a better word than "goodness"; I think what I'm suggesting as the 'ground of being' (yes, I get that this is vague; I will unpack it) is "the same good/bad distinction that individual humans model with our moral intuition, that religions attempt to capture with their various prescriptions and proscriptions, and which our brains model as the map of value."

In other words, if empiricism says "material reality is the ground of being; numbers and moral values are both physical concepts", and Max Tegmark-style Neoplatonism (is that what people are calling it?) says "mathematical reality is the ground of being; the laws of physics are just mathematical structures, and moral values are _also_ just mathematical structures", then what I am thinking about is an idea that says "the landscape of value, the good/bad distinction, is _itself_ at the root of everything, and this distinction - value, good vs. bad - by nature of its a priori existence along with mathematics, gives rise to the laws of physics, since 'conscious entities exploring a world' is fundamentally a valuable thing."

I can't think of any empirical distinction between these three philosophies. There's no experiment you can do to disprove Platonism. Whether or not it makes more sense to see material reality as an extension of mathematical reality, or vice versa, is a value judgement. But if values are at the core, then the "problem" of "how do we choose among these three equi-predictive ways of looking at the world" gets answered by "value itself is the best basis, because without values you can't answer _any_ question - why value truth over falsehood?"

This "value is at the core of reality" perspective seems to synthesize the other fields nicely, and it leads to a nice 'return to reality' - ultimately, all our beliefs do is allow us to choose our actions based upon some system of values. It looks like our brains focus a lot on building some map of value - but what is the territory of that map? It's now kind of a 'social taboo' among our crowd to say 'hey, that map really does have a territory.'

If that territory doesn't exist, why should I prefer truth to falsehood? Why use Bayesian reasoning instead of casting cow entrails? You can't argue _any_ of this without 'smuggling in' a value system. I think we should be upfront about that rather than sneaking it in through the back door: we all share a valuing of truth over falsehood, and I think we should talk about that valuing directly, and, at the very least, be _open_ to the possibility that there is some ground truth about 'a correct map of values in a brain'.

As to whether there is anything "bad" to worry about, it then becomes akin to asking "is there such a thing as an absolute zero of value", which is what Sam Harris talks about when he says "can we all at least agree that everyone suffering the worst possible pain imaginable forever is something _nobody_ wants, and then build up from there?" Of course, maybe there ARE negatives and there isn't a meaningful absolute zero, just as it turns out that negative temperatures are meaningful.


Thanks for clarifying.

First of all, I do like your imagery of "maps of value" and the idea of entities evolving to become an "encoded map" of their environment. It's a neat mathematical metaphor for describing evolution.

"Whether or not it makes more sense to see material reality as an extension of mathematical realty, or vice versa, is a value judgement."

There, I disagree. Beliefs have to pay rent. The rent comes in the form of reducing my prediction error. Neither theory could "make more sense" to me. As you pointed out, they are not empirically distinguishable. Therefore, they do not have a role to play in my sense-making process. So I cannot believe either. Nor could I believe in your third one.

This is not to say that a "base reality/ground of being" does not exist!

Given that the objects of my perception are best explained as downstream phenomena of such a "base reality", and deeper investigation reveals ever-deeper layers of it, I'm very happy to assume so without needing absolute confirmation. But the fact that a thing probably exists does not imply that its nature is truly knowable at any resolution of my choice.

If there is a base level, then it's certainly outside the scope of my perception.

As for value judgements, I can obviously only judge such theories on their aesthetic merits. Perhaps your theory of base reality is conceptually more beautiful, compared to the other theories you have mentioned. I do not think I understand any of them completely.

To me they appear like dense jargon-heavy word salad. What for example is "mathematical reality"? Perhaps you can define it or I could look it up and the answer would be sufficiently rigorous for me to go "oh ok, now I'm confident reasoning about this concept".

But all those theories exist in a purely philosophical realm where words have twisted meanings and unstated assumptions are very easily smuggled into writing.

And perhaps, knowing the assumptions and the redefined words, I can see the clear thinking behind it. But my suspicion is that the underlying thought is muddled after all. My prior is that untestable theories that make no predictions cannot be trusted. Without predictions, there can be no prediction failure.

And so there is no punishment for being contradictory or conceptually unsound.

Reasoning under uncertainty is no problem. But I'm doubtful whether reasoning under complete uncertainty could ever meet.... a reasonable performance standard.

Quite apart from the individual logical steps of your theory, I do not understand why it's so important to you. I could understand if this were a mere aesthetic thought experiment. This statement seems to show that it's fundamentally important to you somehow:

"If that territory doesn't exist, why should i prefer truth to falsehood? Why use bayesian reasoning instead of casting cow entrails?"

How I reason is simply not related to the hypothetical (non-)existence of a territory. Why would it be?

And the answer is:

People have Bayesianly reasoned that the brain itself uses Bayesian reasoning as a fundamental computing principle.

So even if you were to use cow entrails for reasoning purposes, you'd merely believe that that's the reasonable thing to do, because of Bayesian reasoning.

Knowing about this apparent low-level computing principle, one might as well consciously embrace it, since we have Bayesian evidence that there's nothing else that has better performance.

I'm trying to get better at writing. Make it less chaotic.

Did I manage to clearly and comprehensibly state my points and perspective?

Feb 14, 2022·edited Feb 14, 2022

I'm very much enjoying this conversation; I believe you did convey _some_ points and perspective clearly, although I can't be certain that these are, indeed, _all_ of your points, or your precise perspective. But the writing makes _a_ perspective pretty clear. I'm willing to proceed on the assumption that this perspective is close enough to yours that my responses will be meaningful :)

I'll try to respond roughly in line.

I think if there's one important place where we disagree, it's here:

> Beliefs have to pay rent. The rent comes in the form of reducing my prediction error

And my question is "why is reducing prediction error the ONLY acceptable form of rent?"

For example, what if a belief does nothing to change how I anticipate the world, but the belief reduces the computational work necessary to reason about the world, and generates all kinds of entertaining thought experiments, which then go on to increase my willpower and emotional energy? Does that belief count as 'paying rent'? Why or why not?

This blog post is a thought experiment which I think shows pretty clearly that 'making predictions only matters to us because they inform our actions, which change how we feel':

https://apxhard.com/2021/01/18/the-hypnotoad-pill/

> What we all want is to feel good, and to do good in the world, with the definition of ‘good’ in both cases being somewhat hand-wavy. Predictive models have always been subservient to this goal.

What I want to do is have us talk DIRECTLY about what we mean by good - the territory to which that map corresponds - the way most religions and philosophies throughout history have tried to do. I think what is happening with the rationality community is that we really ARE creating a new religion, but we don't like that word 'religion' and like to think of ourselves as being fundamentally different from OTHER groups which have created norms and rituals.

Clearly, we are creating something that 'looks and acts' like a religion: there are rules around behavior, but those rules are scope-constrained to refer _only_ to practices of thought and belief. Except they aren't, because effective altruism says 'we should try to do the most good with our donations', but this then raises the question: "what do we mean by good?" - and there seems to be a taboo over talking about this _directly_.

Or, maybe most succinctly: when we say 'beliefs should pay rent in terms of constraining anticipation', how exactly does that belief - that norm, that 'definition of what is good' - pay rent? The whole 'belief package' of rationalism, which includes the norm which says 'only have beliefs which constrain anticipation' - it's got to be represented in that map in our brains somehow, right?

I added this question to the relevant LessWrong post - maybe that's a better way of interfacing with the community:

https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences?commentId=bZP85nx3vRtF8hoQm

> To me they appear like dense jargon-heavy word salad. What for example is "mathematical reality"? Perhaps you can define it or I could look it up and the answer would be sufficiently rigorous for me to go "oh ok, now I'm confident reasoning about this concept".

I like this kind of question, and I think I'm guilty of "epistemic autism" (a term I made up and wrote about here: https://apxhard.com/2021/04/26/p-%E2%8A%82-np-causes-political-polarization/). I've largely self-constructed a map of territory that others have mapped out using existing terms. I'm trying to learn to use other people's terminology, or at least point to some other 'intellectual authority figure' when using my terms.

By 'mathematical reality' I mean the 'Level IV multiverse' as described by Max Tegmark here:

https://philnotesblog.wordpress.com/2018/07/28/max-tegmark-the-mathematical-level-iv-multiverse/

This interview of Joscha Bach, by Lex Fridman, goes in the same direction:

https://www.youtube.com/watch?v=P-2P3MSZrBM&t=10050s

And it touches on concepts similar to the ones we're getting into here.

I think most people in the LessWrong community take something like 'physicalism' as their own "epistemic boundary conditions", and think in a framework which says "a physical world clearly exists; it is good to reason using only Bayesian inferences about it; if we are going to donate, let's do so in order to maximize the good done; but don't talk directly about good, because it's not meaningful in terms of Bayesian inference."

If there IS some real territory that these value-maps are mapping out, I think it's _extremely_ important to understand what that territory is. Believing there is a real territory out there seems, to me, to reduce the inherent contradictions in our worldview. In my view, 'constraining anticipation' is a subset of reducing contradictions in experience. When I _can_ anticipate fewer things, I can reduce the contradictions between how I want the world to be and how it actually is, by both acting more effectively in the world and worrying about fewer non-happening (or unlikely-to-happen) events. And THAT - being more active in the world, more capable of navigating to more valuable places on the map - THAT feels like the true rent that we want our beliefs to pay. But it's as if this community still carries the baggage of religion, in this case by refusing to look directly at the question of whether those value-maps in our brain correspond to real territory.


Hmm.... you gave me too many branching paths. Too many questions (some having too many degrees of freedom in interpretation) with too many possible answers I could give.

The trouble is that I can find decent answers / clarify my perspective in a couple of minutes. Sometimes seconds. And generate far too much to write about. Translating my own mind state into coherent text takes me hours!

Especially since, during writing, it will refine itself, which means I must rewrite, and unless I decide that I'm done with it, it could continue ad infinitum. At some point, writing becomes frustrating and boring, at which point I stop.

So please understand that it may seem that I mostly ignored what you wrote. I did write more on the hypnotoad thing and the minimizing-prediction-error thing, but instead of refining it I discarded it. I don't want to spend the next hours in hyperfocus to get this right.

re: epistemic autism

I find the term “epistemic autism” very intuitive. It sells the complexity of autism short, but I don’t mind stereotyping, as inaccurate culturally shared assumptions help with efficient communication.

But just to be clear, here: “epistemic solipsism” would be a more accurate term, yes?

Not that I would want to use it, as it doesn’t exactly roll off the tongue.

Assuming we understand this in the same way, I readily agree that I suffer from this as well.

Different individuals (or even traditions) exploring the same territory end up with idiosyncratic understanding and terminology. They may end up simply missing preexisting maps of similar predictive value, because the mutual intelligibility is low.

Speaking of such maps...

re: Rationalists being materialists, unwilling to discuss "the good" directly

I consider the Stoics to be the OG rationalists in their values and perspective.

They have an obvious claim to this from their direct appreciation of reason/rationality. Though we moderns have numeracy, research, and a far greater conceptual understanding of everything, modern-day rationalists lack a framework for, or perhaps a consciousness of, virtue, which is central to the Stoics. They call virtue the “highest good” and talk at length about it directly. And they offer multiple harmonious views and perspectives on it.

One among them is that “virtue” consists in “following nature”/”living in accordance with nature”.

This refers to the “nature of reality”, so does this idea not match yours?

I recommend reading “Of a Happy Life”, Books 1 to 5, to check whether you might, after all, be reinventing a notion from virtue ethics rather than physics.

Do not worry, those are not actually books. It's maybe 6 to 7 pages in total. Stoicism might be hyped up by loud personalities these days, giving it their own spin, which might make it off-putting.

But the original Seneca texts survived for 2000 years for good reason.

They're just incredibly enjoyable and provide more insight than anything from those ancient, pre-scientific times has any right to.


I have enjoyed your response, and the entire conversation, very much. I'll need to drop out now - family duties are getting nuts. But I just wanted to say thanks for the enjoyable dialogue.

Re: branching: that's a problem I have too. Have you seen 'Roam', the note-taking tool? It's great for communication around this kind of thing:

https://roamresearch.com/#/app/apxhard/page/kzfVJ9LUX

And yes, I'm very much versed in Stoicism. I do agree that it's great. The point I'm making in all of this is that I don't see how Stoicism constrains anticipation. We might 'cheat' a bit and say 'the package of Stoic beliefs will make you feel happier/more contented', but this feels like cheating, because we are dodging the question of "what do I want?" Putting real work into figuring out what I want has been _highly_ useful to me in my life - but 'beliefs must constrain anticipation!' seems to say 'these kinds of beliefs aren't allowed, because normative beliefs don't constrain anticipation'.

It just seems that insisting "all beliefs must pay rent" and "the ONLY acceptable rent is constraining anticipation" doesn't even allow me to believe things like "it is important to take care of my kids" - I can only think about the consequences of doing it vs. not doing it, without reasoning directly about values, since values don't constrain anticipation.

Is there a solution to this that I'm not seeing?

Sorry, I probably won't be able to respond, but I'll try asking this question again in another forum.


D'oh, if I'd noticed Steve Byrnes' comment at the time I wouldn't have felt the need to add anything!

author

It was good to have both, sometimes it's easier to understand things when you hear them said two different ways.


And once in a while a third version is worth hearing, especially if the first two aren't quite making sense/sinking in.

There's probably a limit, tho'..


Reminds me of Amdahl's law.


> It’s all nice and well to say “high status leaders are powerful, so people should evolve a tendency to suck up to them”. But in order to do that, you need some specific thing that happens in the genome - an adenine switched to a guanine, or something - to give people a desire to suck up to high-status leaders.

Maybe genes set up some structure where self-learning happens? Or learning based on observation?

author

That's what reinforcement learning is!


I'll say this much: while there's a lot of learning built on top of loneliness, I don't think loneliness is learned. I think "be part of society" *is* actually hardcoded by evolution.


YouTube finally convinced me to start watching Robert Sapolsky lectures after years of recommending them to me. One point Sapolsky makes is that the idea that evolution has to happen via a bunch of individual mutations in which a single adenine changes to a guanine (or whatever), thus slightly changing the shape of one single protein, is actually overly simplistic. That is *one* way a mutation can happen, but you could also, for example, get a mutation that changes the shape of the protein that splices other bits of DNA together--leading to really major changes in the resulting proteins, and possibly to *several* new proteins rather than just one, depending on interactions with enzymes. And there were some other examples too of mechanisms whereby a change in a single base pair could have a dramatically amplified effect. (Source: https://www.youtube.com/watch?v=dFILgg9_hrU)

Feb 11, 2022·edited Feb 11, 2022

>One of the ideas that’s had the biggest effect on me recently is thinking about how small the genome is and how poorly it connects to the brain. It’s all nice and well to say “high status leaders are powerful, so people should evolve a tendency to suck up to them”. But in order to do that, you need some specific thing that happens in the genome - an adenine switched to a guanine, or something - to give people a desire to suck up to high-status leaders. Some change in the conformation of a protein has to change the wiring of the brain in some way such that people feel like sucking up to high-status leaders is a good idea. This isn’t impossible - evolution has managed weirder things - but it’s so, so hard. Humans have like 20,000 genes. Each one codes for a protein. Most of those proteins do really basic things like determine how flexible the membrane of a kidney cell should be. You can’t just have the “how you behave towards high status leaders” protein shift into the “suck up to them” conformation, that’s not how proteins work!

>You should penalize theories really heavily for every piece of information that has to travel from the genome to the brain. It certainly should be true that people try to spin things in self-serving ways: this is Trivers’ theory of self-deception and consciousness as public relations agent. But that requires communicating an entire new philosophy of information processing from genome to brain. Unless you could do it with reinforcement learning, which you’ve already got.

Yeah, I definitely agree. A lot of non-biologists don't really understand this. Also, a lot of those sketchy "transgenerational epigenetic inheritance" papers are making the same mistake: until a specific epigenetic variant is identified as responsible, I don't believe any of them.

(During mammalian gametogenesis, most epigenetic marks are completely erased, with only a few exceptions. So transgenerational epigenetic inheritance in humans would be quite surprising. In plants or worms, sure, but not in humans. Vitamin C deprivation in pregnancy might affect 2 generations, but that wouldn't really be "transgenerational".)

Humans definitely continue to evolve, and biology definitely matters (much more than people give credit for). But there's no "suck up to leaders" allele.


I would love to see some discussions of topics like this from when rule-based AI was dominant. I wonder if people had equally convincing arguments for why the brain was doing decision-tree things. If so, I'd downweight speculation like this. If not, I'd upweight.

author

I don't have that (and would be equally fascinated by it), but "the brain is doing reinforcement learning" seems like more or less Behaviorism, which predates AI of any sort.


Ah fair enough. Since I only know about these things from an ML angle, I had assumed the commenters' mentions of gradients and value prediction necessitated a connection to modern RL, but I guess that's not the case.


I'm pretty sure AI reinforcement learning was explicitly based on "let's try doing what the brain does", in fact.


I think that the motivation was more "this works with animals in experiments" rather than any specific observations made on brains.


The way I see it (e.g., from the 1987 book Parallel Distributed Processing), back when rule-based AI was dominant, the people who were actually looking at how the brain does things were arguing that no, the brain is definitely not doing decision-tree things, and that's why we should invest in brain-inspired approaches to replace rule-based AI - even if at the time rule-based AI "worked" and the proposed direction did not (yet).

Feb 11, 2022·edited Feb 11, 2022

Re: conflict theory and your response:

Even relatively well-intentioned people will intensively scrutinize arguments for a policy against their interests (since they want to be able to argue against that position), and not scrutinize an argument for a policy that's in their interests (since they want to use the argument).

Also, it can be not only conflicting interests, but also conflicting values. E.g. a rich person who supports a high level of redistribution for moral reasons (rather than because of his own interests) may still argue about factual questions (such as how redistribution affects the economy's performance) in a dishonest manner with people with different values, since it's less possible to convince people about terminal values than about facts; vice-versa for a poor person who believes in a natural right to property.


The "ultimate reward" where you get eaten by a lion also "doesn't happen", in the sense that your brain will never find itself updating on the fact that you just died.

The *species* can "learn" that failing-to-notice lions leads to death through natural selection, but *individuals* can't learn it from personal experience. That's a whole nother feedback loop.

author

This is why I specified "mauled by a lion" instead of eaten in my original post.


Lions aren't a great example for that; almost any other wild animal would be better.

(Lion attacks are usually predatory, i.e. if the lion is not violently stopped, you will wind up in its belly. You win or you die; there is no middle ground. This is in contrast to most animal attacks, which are intended to stop you killing the animal/stop you killing the animal's children/stop you stealing from the animal, and thus permit a sublethal animal victory.)


A goose mauling might be a good example, since they are hardly ever fatal but always extremely memorable. An optimal combination for reinforcement.


Physical outcomes are non-discrete. It's not "do I spot the lion or not", it's "do I spot the lion now, 5 seconds from now, 10 minutes from now, or only when he's on top of me eating my entrails" and every value in between. The sooner you spot the lion, the easier it will be to avoid a horrible death, and "easier" is learnable, right?

Of course, for all values beyond "too late" the lion will eat you and you'll learn nothing (and probably penalize your "dumb and short-sighted" genes), but for all the other values, you're incentivized to learn to see the lion as soon as possible, because "see lion sooner" means you spend less time hiding in bushes / running for your life, which are strongly unpleasant experiences.

Children learn early that threat-spotting is valuable, via games like hide-and-seek etc. By repeated trial and error, we find that "spotting the threat early" gives "less frantic running" without the terrible "actual lion catches you" learning-removal event. Then, it helps that we're good at generalizing from "spot other player" to "spot predator/incoming traffic/enemy soldier/threat in general". And, once we've optimized for threat detection enough that we mostly survive threats, we still keep optimizing for it even more, because the sooner you spot the threat, the easier it is to avoid it. So it gets imprinted better and better, leading us to peaceful deaths at a ripe old age.
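To make that concrete, here is a toy sketch (entirely my own construction, with made-up costs) of a bandit-style learner choosing how vigilant to be: more vigilance costs a little effort, but spotting the threat earlier means much less frantic running, so the non-fatal reinforcement signal alone pushes it toward high vigilance - no "you got eaten" feedback required.

```python
import random

random.seed(0)

ACTIONS = [0, 1, 2, 3]          # 0 = never scan for threats ... 3 = scan constantly
q = {a: 0.0 for a in ACTIONS}   # running value estimate per vigilance level
alpha = 0.1                     # learning rate

def reward(vigilance: int) -> float:
    """More vigilance costs effort, but earlier detection means less panic.
    The fatal 'too late' branch is ignored, since it yields no learning signal."""
    effort_cost = 0.5 * vigilance
    detection_delay = (3 - vigilance) + random.random()
    panic_cost = 2.0 * detection_delay
    return -(effort_cost + panic_cost)

for step in range(5000):
    # epsilon-greedy: mostly exploit the best-looking action, sometimes explore
    a = random.choice(ACTIONS) if random.random() < 0.1 else max(q, key=q.get)
    q[a] += alpha * (reward(a) - q[a])

print(q)   # the highest estimate lands on the most vigilant action
```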


I think this theory still needs to explain why we find games fun and continue to play them?


Not so much a theory, as a counter to the idea that looking away from the lion is the optimal response from a machine-learning perspective.

As for an explanation for why we like playing games: wouldn't evolution select for this feature? Those who don't like to play games are avoiding an opportunity to test survival behavior in a non-survival-critical context. And we continue to play after our formative years, because you're never "too good" at survival, you can always get a little bit better, and doing so in a safe context is preferable.


My model: Finding games fun is evolved behaviour, as it increases survival. If we find it fun we will do it as often as possible. Probably it increases survival both by teaching us to cooperate with our peers and by training various general skills, such as spotting and avoiding lions, etc. Interestingly, we don't usually know why we play - we just do it for fun. Meanwhile, we do know why we are avoiding the lion - we can imagine getting mauled or killed. We can also avoid threats we have not had time to evolve fear of, such as traffic or guns.


I'm pretty confused by Gabriel's explanation of motivated reasoning. "salience = probability of being right * value achieved if right" seems to pattern-match rationally choosing the plan with the highest expected value, which...doesn't sound like my concept of motivated reasoning.

e.g. Suppose I flip a coin, and you'll win money if it's heads. Before checking the result, you go on a spending spree that you'll be able to afford IFF you won.

If you did that because the spending spree had the highest expected value out of all actions you could've taken, I wouldn't describe that as "motivated reasoning", but as "good strategy". It's "motivated reasoning" if the plan has a higher best-case outcome but a worse average outcome.

Maybe Gabriel means that this expected-value calculation is being applied NOT to the selection of a plan (which would be rational), but to the selection of a belief (which is not)? That is, the probabilities of the coin landing heads or tails are equal, but heads is better for you, so (probability of theory) * (how much you want that theory to be true) is higher. But if that's the story, then why would that happen? Whatever we do clearly isn't perfect reasoning, but it should be viewable as an approximation of a good algorithm that works in common situations, and I don't see how this system can be viewed that way--why is "how much you want that theory to be true" involved in the epistemics *at all*?

Or maybe I'm pattern-matching too hard and Gabriel doesn't mean "expected value" at all, they literally mean JUST "probability of being right * value achieved if right" while completely ignoring the "probability of being wrong * value lost if wrong" part of an expected-value calculation? Then the spending spree is favored because it has more potential upside, and the system doesn't even consider the potential downside. But this seems implausible--I'd expect average human strategy to be substantially worse if our blind spot were THAT big.

Is there some other interpretation I've completely missed?
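For what it's worth, the third interpretation is easy to make concrete with the coin-flip example (the numbers below are mine, purely illustrative): a full expected-value calculation rejects the spending spree, while an upside-only "salience" score endorses it.

```python
p_win = 0.5
value_if_win = 100    # the spree you can afford if the coin came up heads
value_if_lose = -300  # the spree plus the debt you can't cover if it didn't

expected_value = p_win * value_if_win + (1 - p_win) * value_if_lose
salience = p_win * value_if_win   # drops the "probability of being wrong * value lost" term

print("expected value of the spree:", expected_value)  # -100: a bad plan
print("upside-only salience score: ", salience)         # 50: looks appealing
```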


Rather than selecting a plan or belief, it's about selecting attention. Expected value was an analogy: What grabs your attention isn't just what your implicit beliefs say is likely, but also what you hope or fear.

So whenever you're thinking about a matter where your brain hasn't been well trained about what's likely and what's not, then your motives play the bigger role in what grabs your attention. You still might revise or reject the thought once it's in your attention. It's gonna be an uphill slog though, until you get enough feedback to develop really good priors.


Sounds like you're suggesting that attention and belief are being mixed up, such that paying attention to something tends to make you believe it?

Reminds me a bit of "privileging the hypothesis".


"One of the ideas that’s had the biggest effect on me recently is thinking about how small the genome is and how poorly it connects to the brain. It’s all nice and well to say “high status leaders are powerful, so people should evolve a tendency to suck up to them”."

Wouldn't it be enough for people to evolve a tendency to trust their parents specifically, and their parents to impart the information that it's a good idea to suck up to high status leaders? As in, you don't need to meet high status people to suck up to, to adopt this belief.

I think there's an argument that can be made about this not being qualitatively different to just learning it by exposure to the situation, but from my position as someone pretty naive about neuroscience, it does seem pretty different. In the one case you're getting direct incentives and disincentives, in the other someone is telling you in abstract that you'll get those incentives and disincentives and you trust they're right about it (and your disincentive scenario is "my parents will be angry if I don't memorise this correctly", which applies to both the information about the incentive and disincentive).

I am almost surely overcomplicating this, but my immediate reaction to "you need some specific thing that happens in the genome for [incentive/disincentives known by the brain to change]" (which is how your comments here parsed to me, which may already be wrong) was to think "no?! you obviously don't, culture is part of human evolution these days."

I'm commenting for completion's sake, not because I think this is some grand insight (heck, the abstract, rough way I've written about it is a poor way to share a grand insight even if it were one!). I am *very* much assuming I'm just confused, quite likely more so because I focussed on this example in particular, rather than giving this a holistic think-over.


Came here to say this but found you'd said it better - the sociome exists


>As in, you don't need to meet high status people to suck up to, to adopt this belief.

"High-status" is relative. If you have engaged socially with more than one person at the same time, you have experienced someone who detectably has higher status than someone else.

I don't think the "social instinct" is learned - people who haven't suffered tangible harm due to social exclusion still find it unpleasant - but the specific "suck up to high-status people" seems like it could be just everyone learning it independently.


Is there an assumption that there must be only one cause of motivated reasoning?

Having been in that 'letter from the IRS' state, I'd attribute it to associating a high cost with _reading_ the letter now (i.e. it will make me feel bad) and a belief that it would be better if I could delay reading the letter until later, when I feel better. I don't think most people look at the IRS letter and go 'nope, don't ever need to read that' - it's more like the cost of reading it _now_ always seems worse than reading it _later_.

For political views, doesn't 'crony beliefs' explain this concept? Why bother investigating a subject if there's a high probability that my peers will say I'm a Nazi?

If that 'landscape of value' includes _all possible things that are good and not good_ - wouldn't that landscape also have to include "I will spend a lot of time reasoning about this and then feel bad", or "I will find a conclusion that my friends will think is stupid"? I.e. could the landscape have Escher-like loops in it where you attach negative value to the act of modifying the landscape?

Feb 12, 2022·edited Feb 12, 2022

> None of this ever gets confirmed by any kind of ground truth, because I am HODLing and will never sell my Bitcoins until I retire. So how come I don’t start hallucinating that the arrow is green and points up?

As someone who has been HODLing for a long time - the red arrows don't make me feel bad. The green arrows don't make me feel good. If anything, red arrows make me feel good because I can buy more bitcoin at a lower price.

The 'ground truth' that confirms my holding makes sense is new developments like, say, 'now an entire country uses bitcoin', or 'multiple Fortune 500 companies hold bitcoin', or 'everyone agrees inflation is now going to be an issue'. If you imagine what it's like to have held bitcoin for a decade, you feel pretty damn confident that most people don't understand it, and that the price is basically a noisy measure of adoption of your own way of thinking - and it basically goes up and to the right, long term. Watching Michael Saylor, CEO of MicroStrategy, borrow hundreds of millions of dollars to buy bitcoin, or seeing Jack Dorsey say 'hyperinflation is coming', acts as another kind of 'ground truth' about the reality that matters: what other people think about bitcoin.

It seems like this scenario implicitly assumes that people use dollars as some base unit of value, and map everything onto dollars. Is that right?

Feb 12, 2022·edited Feb 12, 2022

Something that's bothered me since the first post is that it presents the idea of a problem that cannot be solved by a reinforcement learning process.

But this is impossible. Or rather, such a problem might exist. But any problem that can be solved by a human can be solved by a reinforcement learning process, because humans are an outcome of the reinforcement learning process called "evolution". They do not contribute anything to the problem-solving process that wasn't developed by a reinforcement learning process.


"Within-lifetime RL" is different from "evolution considered as an RL process", right? They're both RL, but they're still different. I thought it was sufficiently clear from context that this was a discussion about the former not the latter. I have some related discussion in the final section of this post: https://www.lesswrong.com/posts/frApEhpyKQAcFvbXJ/reward-is-not-enough

Feb 13, 2022·edited Feb 13, 2022

Different how? The claim was that reinforcement learning, the conceptual approach to a problem, is not capable of solving some simple problems. And that claim is easy to falsify. What's relevant about the distinction you want to draw?


I presume that the intended claim was "within-lifetime RL is not capable of solving some simple problems", but the words "within-lifetime" were omitted because they were clear from context.

Anyway, I'm not sure we're disagreeing about anything substantive here.


Hi there, sorry I missed the follow-up comments last time around. To clarify, like KJZ I am also coming at this from the ML side, so I am out of my depth biologically.

Maybe a nice way of clarifying this discussion would be to distinguish different kinds of reward functions, call them hedonistic rewards (e.g. eating a brownie, seeing numbers go up) and utilitarian rewards (e.g. avoiding lions, not hallucinating that numbers go up). Different people will weigh these differently (e.g. they would/wouldn't eat the brownie), but in some (most?) cases the utilitarian value of not hallucinating or not being eaten is still likely to be higher than the small payoff that you'd get otherwise.

(I also realize that this argument serves as kind of a silver bullet, since we can always appeal to some notion of an optimal strategy that is creating the behaviour that we want to explain, but I guess my point is that the examples we've seen so far can still quite easily be seen as reinforcement.)
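A minimal sketch of that two-term framing (the split into terms and the particular weights are my own assumptions, just to show how different weightings flip the brownie decision):

```python
from dataclasses import dataclass

@dataclass
class Agent:
    w_hedonic: float      # how much the immediate payoff matters to this person
    w_utilitarian: float  # how much the longer-run payoff matters

    def reward(self, hedonic: float, utilitarian: float) -> float:
        return self.w_hedonic * hedonic + self.w_utilitarian * utilitarian

# (hedonic, utilitarian) payoffs: eat the brownie vs. skip it
eat = (1.0, -0.3)
skip = (0.0, 0.1)

for agent in (Agent(w_hedonic=1.0, w_utilitarian=0.5), Agent(w_hedonic=0.3, w_utilitarian=2.0)):
    choice = "eat" if agent.reward(*eat) > agent.reward(*skip) else "skip"
    print(agent, "->", choice)   # the first agent eats, the second skips
```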


The brownies thing seems to be entirely explicable by the theory that you are not a unitary individual but multiple competing processes, some of which have a shorter optimisation timescale than others?


The missing piece in this whole thing seems to be the fact that learning is an imperfect process. Because of computational/bandwidth limits the various parts of your brain can't really act like they have access to some global prior on all possible future sequences of experience which is then updated by conditioning.

At some level, your brain modules have to settle for a heuristic. Neural nets can be very complex, so that heuristic might be pretty damn complicated, but it can't capture the whole story; and the reason our brains are more than just a brain stem and visual cortex is that those extra higher-level pieces are able to improve our ability to learn about the world.

Now ask yourself what it would feel like to be the high-level part of a neural net which is working to knit together a bunch of modules that are learning (in a limited fashion) about the world. Even your entire brain won't ever be able to ensure perfect reflective coherence (if I believe P and believe P -> Q, then I believe Q), and the kinds of effects you are talking about might be just what it feels like to wrestle with these imperfect modules and imperfect control mechanisms.

Maybe that's deep and insightful but more likely it's just confused.


>How did AlphaStar learn to overcome the fear of checking what's covered by the fog of war?

In a game of StarCraft, there's *always* a tiger lurking in the woods. You will never send an army out onto the map and not find the other player trying to kill you. AlphaStar can't be reinforced into thinking the best way to win is to avoid the other army entirely because doing that is impossible in the environment of the game.

(Although it would be hilarious if it learned to mimic every salty Terran player and float its buildings to the corner of the map.)

Also, AlphaStar can survive being eaten by the proverbial tiger, since it's only a game. The reward code can look down from a higher level and say "hmm, all the agents that didn't scout ended up dying to cheap tactics like DT rushes. Better try scouting."

Maybe Evolution plays the role of that higher-level evaluator for humans, saying "hmm, 90% of humans who didn't look at the woods got eaten by tigers, but 0% of humans have gotten eaten by IRS letters, so I guess it's safe to ignore it."
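Here's a toy version of that outer loop (everything in it is made up by me, just to illustrate selection acting as the higher-level evaluator): agents that never scout tend to die to attacks they never saw, so the surviving population drifts toward scouting even though no individual agent ever updates on its own death.

```python
import random

random.seed(1)
population = [random.random() for _ in range(200)]   # each value = probability the agent scouts

for generation in range(30):
    survivors = []
    for p_scout in population:
        scouted = random.random() < p_scout
        # an unscouted attack kills 60% of the time; scouting lets you see it coming
        died = (random.random() < 0.6) and not scouted
        if not died:
            survivors.append(p_scout)
    # refill the next generation from survivors, with a little mutation
    population = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
                  for _ in range(200)]

print(round(sum(population) / len(population), 2))   # average scouting propensity ends up high
```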


Can't any reasonably self-aware person detect their own motivated reasoning? If someone is dreading finishing a malarkey report due on Monday, they must know, at least on some level, where the sudden importance of alphabetizing their LPs this Sunday afternoon is coming from.

Note:

According to the vinyl enthusiasts portrayed in “High Fidelity” - a pretty darn good 2000 romcom - putting your albums in alphabetical order is the shallowest way of organizing your music. A person who would do that is just not serious about it.

Feb 14, 2022·edited Feb 14, 2022

When I put off preparing for exams to the last couple of evenings it was never a mystery to me why procrastination activities looked even more attractive in comparison.

However, while I know abstractly that there's still a substantial market for LPs, this matter is totally mysterious to me. Surely a digital collection is a thousand times more convenient to organize and use, and there's no sensible case that audiophiles can make about inferior quality these days - not that I've ever met a serious one. Having physical paraphernalia like booklets to be able to touch and look at is another matter, but even if you value such stuff there's no real reason not to have it digitally too.


Yeah, we have Amazon Prime streaming with 8 million titles on demand. It still amazes me. It’s hard to find a cut that isn’t available there.

We keep our LPs mostly out of nostalgia. I enjoy the art and liner notes. Our CDs are all in storage now, saved against some future catastrophe where the internet is down, even though they've all been ripped and exist on a couple of hard drives as MP3s.

Some purists think LPs have a warmer sound than CDs or streamed audio. I can’t hear it myself.

The comment was meant to be funny and point out that most self-aware people know when they are engaged in motivated reasoning.

Edit

Thinking a bit more about it, all those MP3s are on my phone now too. It’s kind of like engineering Apollo missions. The important things have redundancy built in. ;)


I believe that the majority of our actions and choices are driven by decisions that are not "motivated" in a rational manner, and for those it's not trivial to detect the underlying motivations - it would take an unusual degree of self-awareness. In some cases it can take an excessive amount of guided introspection/therapy to understand them, and in *all* cases it's very easy to slide into rationalization, where people essentially invent a plausible (and socially acceptable) story for their motivations that does not necessarily have anything whatsoever to do with what the actual driving reasons were.

The fact that a part of our brain takes certain information into account when nudging you towards an action would count as "knowing, at least on some level"; however, it is not necessarily the case that this "knowledge" is accessible to our rational thinking - we don't have 'privileged access' to the 'hardware' doing all kinds of sensory and subconscious information processing. For example, you can guess and study why you like the taste of one vegetable and dislike another, and you might have some plausible-sounding theories, but the mere fact that this decision is made within your body doesn't automatically give you privileged insight into the real reasons why you "know" that this is desirable and that is not.


You've possibly thought about this more than I have. I know that there are layers of our minds that aren't directly accessible. Who is the guy scripting my dreams, for example? When someone opens their mouth in a dream, why don't I know what they will say?

In college I had an experience with MDMA that gave me insight into ways I was misinterpreting the motivation of some people close to me. I was thinking about it in a fundamentally incorrect manner. The benefits of that are still playing out over 30 years later.

I've had a daily meditation practice for almost 5 years now and that has produced a lot of useful insights too.

The vegetable example doesn't really work for me though. I like them all. The only flavor I can't handle is peppermint because it always causes heartburn. :)


> how come I don’t start hallucinating that the arrow is green and points up?

It’s called daydreaming and happens to many of us

author

Not in a way that makes us wrong about the ground-level sense data!


For the brownie example, maybe it would be helpful to think in terms of multiple antagonistic reinforcement learning systems?

Say that one learning network (which we can call "YUM") has a hedonic function based on the caloric energy that gets to your cells, with weights based on the experienced taste and smell of food and some genetic information about metabolism.

And then another learning network ("MOM") has a hedonic function based on maximizing the perceived "goodness" of food, based on information you've learned socially about health/fitness and information you've gained from devices like scales or blood pressure monitors, or whatever.

These two networks are connected by a "guilt" pathway that allows "MOM" to apply a hedonic penalty to "YUM" if its output is predicted to be negative by "MOM." The "guilt" becomes part of the input that "YUM" is learning from. So "YUM" will start generating plans that incorporate avoidance of the "guilt" penalty, like looking at brownies without committing to eating them. "MOM," in turn, learns to recognize these tricks as likely to lower its own hedonic output, and starts getting increasingly aggressive about the "guilt" penalty to visual signals. So you'd get "YUM" trying to come as close as possible to the desirable visual signals, without actually generating them explicitly enough to trigger the feedback from "MOM."
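A minimal sketch of that coupled pair of learners (the update rules, numbers, and the 'look but don't eat' option are all my own toy assumptions, just to make the feedback loop concrete):

```python
import random

random.seed(2)

ACTIONS = ["ignore", "look", "eat"]
calories = {"ignore": 0.0, "look": 0.2, "eat": 1.0}   # what YUM natively likes
badness = {"ignore": 0.0, "look": 0.1, "eat": 1.0}    # what MOM disapproves of

yum = {a: 0.0 for a in ACTIONS}   # YUM's learned value for each action
guilt_strength = 0.0              # MOM's learned penalty applied to tempting actions
alpha = 0.05

for step in range(20000):
    a = random.choice(ACTIONS) if random.random() < 0.1 else max(yum, key=yum.get)
    guilt = guilt_strength * badness[a]          # MOM's penalty feeds into YUM's reward
    yum[a] += alpha * ((calories[a] - guilt) - yum[a])
    # MOM ratchets guilt up whenever eating (high badness) happens, and relaxes it otherwise
    guilt_strength = max(0.0, guilt_strength + alpha * (badness[a] - 0.15))

# YUM's learned values and MOM's guilt weight at the end of training
print({a: round(v, 2) for a, v in yum.items()}, round(guilt_strength, 2))
```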

-----

On the IRS letter... I wonder if it matters that the possible consequences of ignoring it increase over time. Like, if the day you get the letter, you're already exhausted and decide to deal with a possibly stressful letter some other time when you have more energy, that's a pretty reasonable plan. But if you're chronically exhausted and make that same decision every day for a year, that's when you start getting hit by late fees or interest or whatever that could increase the cost of ignoring it dramatically, to the point that it's plainly worth the stress of dealing with it. Maybe brains are just particularly bad at processing costs that increase over time for some reason.


I appreciate the recognition of this incognito persona. It is humbling.

> I worried that if I saw the brownies, I would eat them

Here's the mixing of timescales again. The “sugar -> tasty -> eat“ strategy evolved under a very different reward signal. It looks wrong for the current reward signal, but that's not bad RL, just train–test mismatch.

Although, plausibly, our learning rates are tuned for evolutionary timescales, so our RL could be suboptimal for industrial/information age timescales. There's a downside to larger learning rates, but we're probably no longer at the optimum of this meta-learning problem.

> how come I don’t start hallucinating that the arrow is green and points up?

Timescales. The function of the visual cortex changes much more slowly than this, because that's too little data for that module to learn with stability and generalization.

Other brain functions learn and adapt faster - within one's lifetime, or even in minutes - because they solve problems that don't need as much data as perceptual feature extraction does.
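A tiny illustration of the timescale point (the learning rates and the scenario are assumptions of mine): a slowly adapting estimator, like a perceptual module, barely moves on a short burst of wishful "evidence", while a fast-adapting one flips.

```python
def run(learning_rate: float, observations: list) -> float:
    """Track a signal with a simple exponential moving average."""
    estimate = 0.0
    for x in observations:
        estimate += learning_rate * (x - estimate)
    return estimate

# 10,000 honest observations of "the arrow is red" (0.0),
# followed by 50 wishful samples of "the arrow is green" (1.0)
data = [0.0] * 10_000 + [1.0] * 50

print(round(run(0.0005, data), 3))  # slow 'perceptual' module: still close to 0
print(round(run(0.2, data), 3))     # fast module: has almost fully flipped to 1
```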


The recurrent mention of lions struck a chord. The scariest sound I have ever heard was the roar of a lion on a trip to Tanzania. Terrifying. It may be that we humans are hard-wired to fear anything to do with them, more than, say, sharks.


> One of the ideas that’s had the biggest effect on me recently is thinking about how small the genome is and how poorly it connects to the brain. It’s all nice and well to say “high status leaders are powerful, so people should evolve a tendency to suck up to them”. But in order to do that, you need some specific thing that happens in the genome - an adenine switched to a guanine, or something - to give people a desire to suck up to high-status leaders. Some change in the conformation of a protein has to change the wiring of the brain in some way such that people feel like sucking up to high-status leaders is a good idea. This isn’t impossible - evolution has managed weirder things - but it’s so, so hard. Humans have like 20,000 genes. Each one codes for a protein. Most of those proteins do really basic things like determine how flexible the membrane of a kidney cell should be. You can’t just have the “how you behave towards high status leaders” protein shift into the “suck up to them” conformation, that’s not how proteins work!

> You should penalize theories really heavily for every piece of information that has to travel from the genome to the brain. It certainly should be true that people try to spin things in self-serving ways: this is Trivers’ theory of self-deception and consciousness as public relations agent. But that requires communicating an entire new philosophy of information processing from genome to brain. Unless you could do it with reinforcement learning, which you’ve already got.

Is this a general purpose argument against every complex genetic trait, even those that have nothing to do with mental phenomena?

In the genome there is no literal blueprint of a heart. Yet hearts with determinate features such as complex valves and the right number of atria and ventricles are reliably produced from genetic instructions. Those instructions just code for gadgets that convert whateverose into whateverose-6-phosphate at a certain rate.

There are also many animals with modest neural endowments that nevertheless exhibit complex "hard-coded" behavioral repertoires. Insects with thousands of neurons have elaborate innate courtship rituals. A hundred thousand neurons buys you spiderwebs, i.e. the ability to solve 3D geometry problems without any instruction or practice.

Maybe part of the confusion comes from miscalibrated intuition about how difficult it really is to hard-code a behavioral phenotype. Bacterial chemotaxis seems quite intelligent and purposive, but the underlying behavioral circuit is about as complicated as a thermostat. Many dog breeds show unique innate behaviors after mere tens of generations of selection. Complex innate behaviors (involving a lot of info transmission from genome to brain) seem more the rule than the exception.
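
To give a sense of how thermostat-like that circuit is, here's a toy run-and-tumble sketch with a made-up attractant gradient and step sizes: keep moving while things improve, tumble to a random heading when they don't.

```python
import math
import random

# Toy run-and-tumble chemotaxis: keep going if the attractant concentration
# just improved, otherwise pick a random new heading. One comparison, no map,
# essentially no memory -- thermostat-grade logic -- and it still climbs the
# gradient toward the source.
def concentration(x, y):
    return -math.hypot(x - 50, y - 50)       # attractant peaks at (50, 50)

random.seed(1)
x, y, heading = 0.0, 0.0, 0.0
previous = concentration(x, y)
for _ in range(2000):
    current = concentration(x, y)
    if current < previous:                   # things got worse: tumble
        heading = random.uniform(0, 2 * math.pi)
    previous = current
    x += math.cos(heading)                   # run one unit step
    y += math.sin(heading)

print("final distance from the source:", round(math.hypot(x - 50, y - 50), 1))
```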

Expand full comment

What are the actual consequences to not opening an IRS letter?

A lot of people might answer "you'll go to prison," but contrary to popular belief, you can't be imprisoned just for failing to pay taxes, only for deliberately lying to the IRS about your earnings or assets in an attempt to avoid paying them. Someone who simply ignores IRS letters isn't getting arrested.

The IRS can garnish your salary, but they can't just do that automatically. It's a whole process that can take months or even years to be implemented. And it might not happen at all, especially if the amount you owe is fairly trivial and the IRS decides it's not worth the effort. Even if it does happen, there's a limit to how much they can take at once, so having the IRS garnish your salary won't leave you homeless and destitute, it'll just make it harder for you to save up money.

There are other consequences, like being ineligible to receive certain forms of government assistance and having bad credit. But these are even more abstract and even less likely to affect your life in a directly impactful way. You're not going to die if you fail to open IRS letters, and you're not going to wind up in a situation where your odds of survival are significantly lower. It's not going to lower your odds of reproduction or negatively affect your children's chances of survival either.

You could argue that humans should be inclined to care because we're social animals, but the social circles we're actually wired to care about are much smaller in scope than the U.S. federal government. Being in debt to the IRS probably isn't going to make your family, friends, and neighbors think less of you, in part because it's not the sort of information that's likely to become widespread knowledge in the first place; it's pretty easy to simply not bring it up around anyone. And depending on the socioeconomic class and political leanings of the people around you, it may even make them sympathize with you more rather than respect you less.

"Success" from an evolutionary standpoint is defined very differently than the conventional idea of "success" dominant in 21st century Western middle-class society, and probably doesn't come anywhere close to including lofty notions like "being an upstanding citizen who always pays their taxes on time and doesn't have any debt." From that perspective, whatever internal algorithm makes people ignore IRS letters might not be wrong or broken at all; it could very well be a feature rather than a bug.

Expand full comment

"This isn’t impossible - evolution has managed weirder things - but it’s so, so hard."

I've been feeling confused recently about how hard this actually is. The example I had in mind was sheepdogs. We domesticated dogs something like 30,000 years ago, which isn't very long in evolutionary terms, and yet we managed to breed sheep herding into dogs. So much so that those dogs will herd random things on instinct.

If humans can control the evolution of dogs to breed in something as complex as sheep herding, maybe humans could've controlled the evolution of other humans to breed in something complex like "suck up to leaders" (the leaders would probably be in the best position to do something like this...)

I'm not sure how this weighs on motivated reasoning being genetic, since I haven't thought of an artificial selection mechanism that would point to it, but I also haven't thought about that much yet.

Expand full comment
Feb 15, 2022·edited Feb 15, 2022

Scott, as other people have mentioned, you are vastly underestimating how much information can be conveyed through the genome. 20,000 protein-coding genes allow an effectively infinite number of combinations. The obvious problem is that this means a change in one gene affects a lot of different things; the system is inherently jury-rigged. Nonetheless, our physical bodies exist, and with very few exceptions they remain recognizably human and broadly functional even if any given gene fails. If genes can create the human brain or hand, admittedly using semi-random processes for many things (like fingerprints or capillaries), and yet almost without fail produce them, why can't certain social behaviors be coded with significant precision?
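
A quick back-of-the-envelope on "effectively infinite", treating each gene as a crude on/off switch (a big simplification, but it makes the point):

```python
import math

# Even a crude on/off view of ~20,000 genes gives an astronomical number of
# possible expression states, and the pairwise interactions alone number in
# the hundreds of millions.
n_genes = 20_000
print(f"on/off expression states: ~10^{int(n_genes * math.log10(2))}")
print("possible pairwise gene interactions:", math.comb(n_genes, 2))  # ~2e8
```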

We also know this kind of encoding is possible. Think about Monarch butterflies. Each year they return to specific forests in Mexico. That cannot be learned, since several generations elapse in between and they do not raise their young. Yet somehow the genome encodes both the urge to migrate at a certain time and the ability to actually end up at the same dozen or so couple-acre patches of forest year after year. Compared to that, sucking up to high-status leaders is pretty simple.

Expand full comment

It's not a matter of HOW MUCH information can be carried by genes, it's a matter of WHAT KIND of information.

For physical development, we more or less understand that proteins make a kind of state machine, with one protein triggering the synthesis of another nearby or in succession. Also, I remember reading that for complex organisms, especially mammals, the process depends heavily on taking place inside an individual of the same species, which will be a challenge for designing artificial wombs. The short of it is that we have some idea of how it happens.
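
Something like this toy Boolean network gives the flavor of that state machine, as a cartoon rather than real regulatory biology:

```python
# A cartoon of the "state machine" idea: each protein's presence at the next
# step is a simple function of which proteins are present now, and the same
# local rules replay the same developmental sequence every time they run.
RULES = {
    "A": lambda present: True,                               # always-on input
    "B": lambda present: "A" in present,                     # A triggers B
    "C": lambda present: "A" in present and "B" in present,  # A and B trigger C
}

state = set()
for step in range(4):
    state = {protein for protein, rule in RULES.items() if rule(state)}
    print(step, sorted(state))
# 0: ['A']  ->  1: ['A', 'B']  ->  2 onward: ['A', 'B', 'C']
```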

As far as I know, we have no idea whatsoever how proteins could code for complex cognitive and cerebral structures, like the logic of sexual attraction, the regulation of the heart rate (dreams can affect it ⇒ it is controlled by software), or the blueprint for beaver dams.

Yet these features are also too fundamental to be learned by imitation (except perhaps beaver dams, but I have read of case studies that more or less exclude that). This makes me conjecture that we might be missing a completely different mechanism for heredity. It could be that the “non-coding” DNA is actually coding in a different language than DNA→RNA→proteins, or it could be something else entirely.

Expand full comment

> Humans have like 20,000 genes. Each one codes for a protein. Most of those proteins do really basic things like determine how flexible the membrane of a kidney cell should be. You can’t just have the “how you behave towards high status leaders” protein shift into the “suck up to them” conformation, that’s not how proteins work!

Genes are a tiny fraction of DNA; non-coding DNA regulates these genes and has way more space to work with. From https://medlineplus.gov/genetics/understanding/basics/noncodingdna/

> Only about 1 percent of DNA is made up of protein-coding genes; the other 99 percent is noncoding. Noncoding DNA does not provide instructions for making proteins. Scientists once thought noncoding DNA was “junk,” with no known purpose. However, it is becoming clear that at least some of it is integral to the function of cells, particularly the control of gene activity. For example, noncoding DNA contains sequences that act as regulatory elements, determining when and where genes are turned on and off. Such elements provide sites for specialized proteins (called transcription factors) to attach (bind) and either activate or repress the process by which the information from genes is turned into proteins (transcription).
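
Rough arithmetic on how much raw space that is, using the commonly cited ~3.1 billion base-pair genome size as an approximation:

```python
# Genome size here is the commonly cited ~3.1 billion base pairs; treat the
# figure as approximate.
base_pairs = 3_100_000_000
noncoding = 0.99 * base_pairs
bits = noncoding * 2                          # 4 possible bases = 2 bits each
print(round(bits / 8 / 1e6), "MB of raw noncoding sequence")   # ~767 MB
# An upper bound on sequence, not on usable regulatory information, but it
# shows the bottleneck isn't the count of 20,000 protein-coding genes.
```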

Expand full comment