Moral realism is hardly some obscure "only stupid theists believe this one" sort of moral and philosophical belief to hold.
Yes, the moral knowledge argument proceeds on the assumption that it will be convincing to, and only to, people who believe in an objective moral reality.
But there are plenty of pretty robust arguments for objective morality out there; it's not just endless question-begging.
If you're not a moral realist when you encounter the argument from moral realism/moral knowledge, then sure, it's not going to be convincing, but it doesn't follow that moral realism is just a baseless view to hold. It's just a case of differing priors.
I think moral realism probably is quite obscure. I think it's a view endorsed almost exclusively by philosophers and people who've been influenced by philosophers, and only specific philosophical traditions, at that. I don't think most people today are moral realists, nor do I think most people in the past were, either.
As far as arguments for objective morality: there's lots of arguments out there, but I'd hardly call any of them robust. There are no good arguments for moral realism.
I don't find moral realism ridiculous, just counting it as a "proof" of God's existence. It's more "God has to exist, unless this popular philosophical position is true."
> you are clearly suggesting that the inference being made in the moral knowledge argument/EDA is valid or cogent, therefore the only thing that could be "ridiculous" would be the premise that we have moral knowledge, right?
What is ridiculous is assuming that some controversial position is true and then when encountering counter-evidence to it, keeping your prior trapped and treating it as evidence in favor of some other, even more controversial position, instead of reevaluating your initial assumption.
Here is an example. Suppose you don't know whether a person is going to order a cake for dessert or not. You can think of some arguments why they would: cake is on the menu, cakes are tasty, a lot of people order them, the person has enough money to order it, etc. You therefore conclude that the person is going to order one.
But then a new thought crosses your mind. Wouldn't it be a huge coincidence if this person ordered specifically cake? There are other dessert options. And it's not even clear that the person wants dessert that much. Why would they spend their limited money specifically on cake instead of all the other things they like?
Would it therefore be reasonable to conclude not only that this person will indeed order cake, but also that they are cake-obsessed? That they value cake extremely highly, which explains away the coincidence of them ordering cake instead of spending their money any other way?
> Sure if one starts with an extremely low prior on Theism
The point isn't that Theism is so improbable that you should immediately reject anything that implies it. This isn't exactly a bad heuristic, but we are talking about a much weaker claim: whatever your prior on Theism is, it costs your theory *some complexity penalty* if it requires Theism to be true.
> But I did order cake?? and now you have to explain why?
Only *after* we have actually observed that the person ordered the cake can we justifiably increase our confidence in the statement that the person cares about cake a lot. Until then it would be a ridiculous reasoning mistake to simply jump to the conclusion and assume both the future order and the cake obsession.
I'm starting to suspect that you are confusing "I believe in X" with "X is true". To be clear, cake order is a metaphor for the latter, not the former.
> Are your priors such to even be open to the possibility that you might be wrong?
They are open enough that when I hear "Boltzmann brain" I do not immediately assume that I have to be extremely confident just on priors that I'm not one. I think we have pretty good a posteriori evidence against this hypothesis, though.
I don't consider fine-tuning particularly interesting, as we are dealing with a sample of one here, so most of the probability mass goes to out-of-model error. Inside the model, yes, it's unlikely that our universe arose by sheer chance, and there probably is some kind of optimization process involved. But any particular kind of optimization process is very unlikely, especially if you demand that it's also sentient, infinitely good, and powerful.
If I encounter some evidence that evolution and naturalism *do not* solve psychophysical harmony, I'd be happy to incorporate it into my world model. For now I see that all Matthew's arguments about it are rooted in his being a dualist and not understanding evolution, and that the whole concept of "psychophysical harmony" is confused. https://benthams.substack.com/p/arguments-for-god-tier-list/comment/83765728?utm_source=activity_item
I'm pretty confident that torturing small children to death is wrong. Again, I'm ready to entertain an argument to the contrary, appealing to some other human values. But if God Himself were to tell me, appealing to His divine authority, that it's actually good to torture children, because He is the one who decides what's good and what's bad, and He said that it's good, so end of discussion and my values are just wrong, that wouldn't count as one. In such a situation I'd claim that God got it wrong and is evil, and then we would come to some negotiated agreement as reasonable beings, where my BATNA would be to coordinate as many people as possible to go full Elisha ben Abuyah on His divine ass.
I agree it’s silly to call it a proof — but in my experience, the more sophisticated arguments like the argument from moral knowledge are basically never referred to as “proofs” by their defenders. The arguments that get called proofs are invariably the less convincing ones. (The anthropic argument is an exception, since the paper under review Matthew and I wrote on it is called “A Presumptuous Proof of Theism” or something — but that’s just for the alliteration!)
I don't know about "ridiculous" but it is one of the least plausible positions to enjoy so much popularity in all of academic philosophy. There isn't a single good argument for moral realism.
>But also I have never understood why anti-realists aren't all just error theorists
Because something can be true without correspondence to a special-purpose domain of truth-makers; and useful without being true; and meaningful without being useful or true.
I'm not quite following that. I like quasi-realism, and I don't think it's extreme, and I can't tell whether you think extremism is good or bad. Are you influenced by Lance Bush?
What is correspondence and how do you ever verify anything "in the world" corresponds to something "in your mind," which by the presuppositions involved in much of the discourse is inaccessible since you're only ever "in your mind?"
>"I have never understood why anti-realists aren't all just error theorists, like non-cognitivists, subjectivists etc. all just seem to be using language weirdly"
Speaking personally (from a broadly pragmatist perspective): error theory rests on a semantic analysis of moral language that seems to take us further away from the goal of providing an adequate description of moral language, rather than closer to it.
Error theory works fine as an analysis of the language of *realist moral philosophers*, since those philosophers have explicitly specified a semantics for their own uses of moral language. But ordinary moral language (like all ordinary language) doesn't come with its own semantics. A version of the principle of charity suggests that if we want to understand what people are doing with language, semantic analyses that attribute falsehood to large domains of discourse aren't going to be particularly helpful.
Lance Bush's philosophical position (as I understand it) is that moral language doesn't have a determinate semantics. He believes this position receives support from empirical work revealing variations and inconsistencies in lay responses to survey questions on metaethical topics.
I think that's fine, as far as it goes. One problem I have with Lance's writings on this topic is that he tends to skirt around explicit engagement with the problem of whether, under what circumstances, and to what extent, *any* domain of language can be said to have a determinate semantics. He has gestured towards the larger metaphilosophical issues at stake, but hasn't really given any clear indication of how he thinks they should be resolved.
(This problem isn't unique to Lance, of course — much contemporary writing on metaethics suffers because it tries to address metaethics too narrowly, in isolation from broader metaphilosophical issues. Without a more clearly elaborated stance on those broader issues, I'm not sure that the metaethical debates can even be said to be meaningful.)
Lance says some kinds of metaethical views (e.g. "moral language describes objective non-natural facts") are unintelligible. If you want to round that off to "utterly meaningless," fair enough.
He *doesn't* say moral views are themselves unintelligible. As far as I know, he thinks he understands moral claims perfectly well.
To the best of my knowledge, he has never addressed the possibility of tensions between these two contrasting intelligibility claims.
"Lance says some kinds of metaethical views (e.g. "moral language describes objective non-natural facts") are unintelligible" - yes, but that's a different claim from saying that ordinary-language ethical statements have no determinate meaning, since it consists entirely of philosophical terms of art. And there is no evidence of its unintelligibility except his own it-seems-to-me.
This is not true. I do not equate indeterminacy with meaninglessness. On the contrary, I think ordinary moral language is very meaningful. You don't have to have determinate metaethical views when engaging in ordinary moral language to make meaningful claims.
Yea, that's basically my view! Nice to see someone characterizing it; though I don't think ordinary moral language is incoherent. That's a suggestion Don Loeb has argued for, but I don't endorse that view.
The view I defend is "folk metaethical indeterminacy." Essentially, I don't really think nonphilosophers are realists or antirealists at all; I think they have no particular metaethical views. Hence, why I'm not an error theorist. Error theory isn't consistent with indeterminacy.
"I find the moral knowledge argument ridiculous, because it posits that morality must have some objective existence beyond the evolutionary history of why humans believe in it, then acts flabbergasted that the version that evolved in humans so closely matches the objectively-existing one."
And yet most of us *do* act as if morality is objective and eternal. Take the current roiling about Nazis and Fascism: we don't argue that "well their standard of morality was a different version to the one we accept", we condemn them. Slavery is not accepted as "okay back then we believed one thing, now we believe a different thing", it's condemned as always being wrong. Abortion rights, gay rights, trans rights - there's no "we used to think A, now we think B" about that, those in favour argue that such things as rights exist, these cases involve rights, and there is an objectively correct and an objectively incorrect way to think and act about these.
Try arguing that it's just as morally acceptable/correct to be a 'transphobe' as to be an 'ally' because there is no objective set of standards, and see how far that gets you.
That's because most of us are treating morality as intersubjective - we agree to certain ideas or principles and then evaluate everything else against those ideas and principles, creating the illusion of objectivity. But you can't evaluate against an arbitrary standard any differently than you would an objective one, so it looks the same.
So what? Suppose you say it's wrong to be a transphobe because it doesn't cohere with some arbitrary ideal or principle. The transphobe can simply reject the ideal or principle.
There's no objective set of standards that makes salad a better food than gasoline, but if you're going to feed me gasoline for dinner, as a human, I'm going to object. Similarly, if you're going to treat me in a way that might be perfectly appropriate for lizards to treat lizards according to lizards, I'm still going to object as a human and it would be ridiculous to expect me not to.
You are conflating first-order normative ethics with second-order metaethics, likely because you are opining without ever having studied either subject enough to at least minimally understand the difference between the subject areas. I take it this is your disposition towards every study area you comment on?
"I have never understood why anti-realists aren't all just error theorists, like non-cognitivists, subjectivists etc. all just seem to be using language weirdly, sure lay people have confused meta ethical beliefs but like for a more sophisticated anti-realist to avoid orienting their anti-realism towards substantive realism seems weird."
They may seem to be using language weirdly *to you*, but there are no good arguments that ordinary people speak, think, or act like moral realists, and that therefore noncognitivists or subjectivists are speaking any more weirdly than anyone else.
Also, noncognitivism and subjectivism aren't declarations of one's personal commitment to speaking this way; they are typically claims about how others speak, so it would be question begging to think that they have got ordinary language wrong: their positions are positions about the nature of ordinary language! If they're correct, it would be realists and error theorists who are speaking weirdly.
I'm a moral antirealist, but there is absolutely no good reason to be an error theorist. Error theory relies on a semantic thesis about the meaning of ordinary moral claims, and, specifically, the claim that ordinary moral claims purport to describe stance-independent moral facts. But why would I endorse that? I don't think it's true and I don't think there are good arguments that it's true.
> it seems reasonable to also accept that such a universe could spontaneously create stable self-replicating structures that begin to eat the surrounding entropy and create an expanding region of simplicity. In this case, most universes would eventually be dominated by relatively low-entropy environments like the one we find ourselves in.
This is not possible. Or, rather, it's just as improbable as skipping straight to the large low-entropy universe directly. A state that expands without bound and without consuming free energy *just is* a high entropy state.
The thing that makes entropy entropy is that you *can't* eat it. The only way simple things can come to exist from a maximum entropy state is a stochastic fluctuation.
Jumping in before I've read the full post, but Bentham's Bulldog's comments about cardinality are incorrect -- the way you described things is closer to correct (although I do wish you would learn some actual math and stop mangling things :P ).
It is *not*, in fact, the case that in mathematics we measure the size of an infinite set solely by its cardinality. Rather, cardinality is *one* way of measuring the size of a set, to be used as appropriate. For a subset of the plane, one might use area. For a subset of the whole numbers, one might use natural density. For sets equipped with a well-ordering you'd use ordinals, not cardinals. Etc., etc.
Usually cardinality is not a very helpful measure when dealing with infinite sets, in fact, because it's so crude and lossy. (A rectangle of area 2 and one of area 1 have the same cardinality, but they still have different areas!) I'd say one advantage of cardinality is that it can be applied to literally any set, regardless of context, whereas other measures will have a more limited domain of application; but as advantages go that's generally not a very relevant one. Most mathematicians aren't set theorists!
If someone says to you that in math the size of an infinite set is measured solely by cardinality, you can tell they haven't actually done much math involving infinite sets!
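A quick toy sketch of the point, in Python (my own illustration, not from the thread): two sets can have the same cardinality while differing sharply by another measure, here natural density (the limiting fraction of 1..N that lands in the set).

```python
# Natural density up to a cutoff N: the fraction of integers in 1..N
# that satisfy a predicate. Both sets below are countably infinite
# (same cardinality, aleph_0), yet their densities differ.

def density_upto(predicate, N):
    """Fraction of integers in 1..N satisfying predicate."""
    return sum(1 for n in range(1, N + 1) if predicate(n)) / N

N = 10**6
d_even = density_upto(lambda n: n % 2 == 0, N)     # evens
d_mult10 = density_upto(lambda n: n % 10 == 0, N)  # multiples of 10
print(d_even)    # 0.5
print(d_mult10)  # 0.1
```

Same cardinality, different density by a factor of five, which is exactly why cardinality alone is too crude a notion of "size" here.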
So is BB right or not when he claims that if every world has one billion real people and one Boltzmann brain, and there are an infinite number of worlds, the chance of being a Boltzmann brain isn't one-in-a-billion, it's 50-50?
I agree you can take a measure over observers. But my point is that the measure will not actually be a good way of measuring probabilities. If your view says that 99.999999999% of people have some property, but you could arrange the worlds so that they're each filled with 99.999999999999% of people without that property, something has gone wrong.
“Measure” just is another word for “probability” in this context.
I don’t think there’s any good way to make sense of the kinds of objective probability that Scott and many others try to rely on, but whatever sort of probability might be useful here just isn’t going to be invariant under permutations.
I think Kenny just means permuting the worlds will give different answers for the probability...ie he's agreeing with you that depending on how you arrange things, the probability will change, but that's just a necessary feature for a probability to have in this context.
I’m thinking of probability as a function from sets of outcomes to real numbers. Permutation invariance is the idea that if you take any permutation of the outcomes (i.e., a one-to-one, total, and onto function from outcomes to outcomes), then the probability of any set of outcomes is equal to the probability of the set resulting from replacing each outcome by the one the permutation sends it to. “Cardinality” is just Cantor’s term for the thing that is preserved under permutations. Your argument seemed to be that, since one in a billion of infinity has equal cardinality to the rest, it shouldn’t have wildly different probability, so I was reading it as a permutation-invariance argument. But on infinite sets there is no permutation-invariant measure, because every infinite set has equal cardinality to infinitely many disjoint proper subsets of itself.
Oh ha Kenny responded before I could to say basically what I was going to. But to say roughly the same thing real briefly -- what do you mean you "could" arrange the worlds this way? *Are* they arranged this way? What power do you have to alter this arrangement?
Insistence on using cardinality is the same thing as saying that everything has to be invariant under every permutation. But why should it be in this case? I'm not seeing any obvious reason that should be true!
The number of natural numbers that are divisible by one billion is aleph_0 (that's a cardinality). The number of natural numbers that aren't divisible by one billion is also aleph_0. It emphatically does not follow that the probability that a natural number is divisible by a billion is equal to the probability that a natural number is not divisible by a billion!
Comparing cardinalities to get probabilities doesn't make any sense, and isn't something a mathematician would do. (The fact that the cardinalities are equal has other consequences, notably that you can make a 1-1 pairing between the numbers that aren't divisible by a billion and the numbers that are, that doesn't leave any number out. Or equivalently, a 1-1 pairing between the regular brains and the Boltzmann brains. In fact, this 1-1 pairing is pretty much the definition of "cardinality".)
Saying the probability is one-in-a-billion is intuitive, but I don't know of a mathematically rigorous way to obtain it. It's tricky because the "probability" that you get changes if you group the numbers/brains differently.
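A toy version of that 1-1 pairing, in Python (my own sketch, scaled down from "a billion" to 10 so it runs instantly): the pairing never runs out on either side, which is all "same cardinality" means, yet the proportion of multiples among 1..N still tends to 1/10, not 1/2.

```python
from itertools import count, islice

# Infinite streams of multiples and non-multiples of 10.
multiples = (n for n in count(1) if n % 10 == 0)
non_multiples = (n for n in count(1) if n % 10 != 0)

# An explicit 1-1 pairing: k-th multiple with k-th non-multiple.
pairing = list(islice(zip(multiples, non_multiples), 5))
print(pairing)  # [(10, 1), (20, 2), (30, 3), (40, 4), (50, 5)]

# Same cardinality, but the natural density is still 1/10.
N = 100_000
proportion = sum(1 for n in range(1, N + 1) if n % 10 == 0) / N
print(proportion)  # 0.1
```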
I have to nitpick and point out that you can't actually choose a natural number uniformly at random, and what you want is probably natural density rather than probability. :P
Some people use the word “probability” for the kind of non-countably-additive set function that you get from this concept of natural density. I don’t think it’s particularly helpful to be pedantic about which thing does or doesn’t count as “probability” here (even though I strongly disagree with the people who think these non-countably-additive functions are helpful here).
Yeah, the real point is that you need to define your probability measure *before* you talk about probabilities. This isn't an "epicycle" as BB claims; it's a prerequisite.
Some of the people in this debate (notably Scott) are committed to the idea that there an objectively correct probability function, and that closeness in approximating this correct function is key to rationality.
I think they’re wrong, and all that is needed is internal coherence.
I agree with you that there isn't an objectively correct probability measure. But at the same time, I think you can get most of the Tegmark argument, with its concentration on simple possibilities, without having to make ad hoc choices - you only need quite mild assumptions about the probability measure. E.g., *any* probability distribution on the natural numbers has the property of being concentrated on small numbers, in the sense that for any epsilon, there exists N such that (1-eps) of the mass is on numbers < N.
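A small sketch of that concentration property (my own illustration, using a geometric distribution as the arbitrary example): for any distribution on the naturals and any epsilon, some finite initial segment captures at least 1 - epsilon of the mass.

```python
# For ANY probability distribution on the naturals and any eps > 0,
# there is a finite N such that the numbers below N carry >= 1 - eps
# of the mass (the tail of a convergent series goes to zero).
# Demonstrated here with P(n) = 2^-(n+1), an arbitrary choice.

def smallest_N(prob, eps):
    """Smallest N with sum_{n < N} prob(n) >= 1 - eps."""
    total, N = 0.0, 0
    while total < 1 - eps:
        total += prob(N)
        N += 1
    return N

geom = lambda n: 0.5 ** (n + 1)  # sums to 1 over n = 0, 1, 2, ...
print(smallest_N(geom, 0.01))    # 7: the first 7 naturals carry >= 99%
```

Different distributions give different N, but the finiteness of N is what does the work in the "concentration on simple possibilities" argument.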
He's not right and not wrong, because it's nonsense to speak of infinitely many universes.
Apart from the fact that there is, was, and will ever be only exactly one single universe (universe: everything taken as one), "infinitely many universes" is not a number of universes.
Infinity is not a number. You can do fun things with mathematical objects, because they aren't objects, just like fake diamonds aren't diamonds, but you cannot do the same fun things with real objects.
Atoms are called atoms because one had thought they would be indivisible. And one was wrong.
Do you think it is it even remotely, in any sense of the word possible, possible to be wrong about all matter, not all matter here or there, but absolutely all matter being all matter?
Universe is called universe because at one point it was thought that it was everything. Then we realized that if there is no possible way to ever observe or interact with another area, then those are separate universes. You are playing word games and he (and I) are calling you out on it.
> Universe is called universe because at one point it was thought that it was everything.
That's wrong. The universe is called the universe because one wanted to convey that one is talking about everything as a whole. You don't even need to know what all belongs to it to talk about it. That's the beauty of this concept.
> Then we realized that if there is no possible way to ever observe or interact with another area, then those are separate universes.
You use the word "then" but the latter doesn't follow from the former.
I would call this a word game or trick, not what I'm doing. But I guess you're not even interested in the point I made with my original comment, but only in calling me out for something.
The meanings of "universe" are different in different contexts. If you read some stuff from the early 1900's you'll often find "universe" used the way we use "galaxy" today. The very word "multiverse" implies that there are multiple universes. (Which doesn't prove that this is true, but proves that the word can have a different meaning from "everything". I've occasionally encountered the term "cosmic all" to mean everything that exists.)
But the discussion is not about different parts of the universe, or different "universes" in a "multiverse" but different, well, universes.
The participants don't recognize this, because they confuse epistemic possibilities (mathematical descriptions of different universes are not different universes; they are just ideas of different universes) with reality (the one universe, which can have only one exactly right mathematical description, not counting however many equivalent formulations of that description there might be).
Inside your mind, though, there's an epistemic approximation which corresponds to the world you live in... but how do you know that the world you live in is the "real" one, or even bears the vaguest similarity to it, rather than being some simulation based on simple, arbitrary rules with results complex enough to evolve fractal models of itself? As Conan put it,
"Let teachers and priests and philosophers brood over questions of reality and illusion. I know this: if life is an illusion, then I am no less an illusion, and being thus, the illusion is real to me. I live, I burn with life, I love, I slay and I am content."
> But the discussion is not about different parts of the universe, or different "universes" in a "multiverse" but different, well, universes.
I think the discussion is absolutely about different parts of the universe.
The idea was that the universe was everything mathematically possible, and the different parts would be what is implied by different mathematical structures.
BB is wrong. His argument, as I understand it, is as follows. Let R be the number of real people and B be the number of Boltzmann brains. Then the chance of being a Boltzmann brain is B/(B+R). But since there are an infinite number of possible worlds, B and R are both infinite. So, using the symbol ∞ for infinity:
The math is wrong because, as BB realizes when he sets B = R in the first place, infinite cardinal numbers don’t behave like finite numbers. (2*∞) is equal to ∞, and (∞/∞) is undefined. So the math should be:
B/(B+R) = ∞/(∞+∞) = ∞/∞ = undefined
That means that to get a probability, you have to go back to the drawing board and formulate the question differently. I would take the limit as the number of worlds approaches infinity, giving a probability of 1/(1 + 1000000000), but I think you need to use measure theory if you want to be rigorous.
What are you taking the limit of? The worlds aren't spatially arranged. Plus, if they were somehow, you could change that result just by moving the worlds around.
The ability to rearrange to change the answer assumes that you have some case where there are (infinitely) more Boltzmann brains than human brains. But I haven't seen a good argument for that - especially if we're discussing the original claim that we might be Boltzmann brains, so that we need to posit that we care about not just brains, but ones in scenarios where we could not realize we are one of them. (And this is even more true if we're using any kind of reasonable simplicity weighting over mathematical structures, though that gets into much deeper speculation!)
OK, but saying they are the same cardinality assumes that there are infinities in the relevant universe, which is something we've debated elsewhere, but is a critical assumption. (Or it assumes that it would make sense to do ordering of minds across infinite universes - but any specific mind making the calculation presumably shouldn't do that, since whether it is a Boltzmann brain or not, it exists in a specific universe!)
I was responding to Scott’s formulation, but to avoid confusion I’ll switch to discussing what you wrote: “Suppose that there are infinite galaxies. Each galaxy has 10 billion people who are not deceived and just one person who is deceived.”
I assume that “infinite” in this case means countably infinite, so there are one-to-one mappings from the set of natural numbers to the set of all galaxies. Choose any such mapping. It doesn’t matter which mapping we choose, because the result will be the same for all of them. The total population of galaxies 0 through N - 1 will consist of N * 10 billion people who are not deceived, and N people who are deceived. Thus, for any N > 0, the fraction of the total population who are deceived will be 1/(1 + 10000000000). The limit as N goes to infinity will therefore also be 1/(1 + 10000000000).
If different galaxies have different ratios of people who are not deceived to people who are deceived, then taking a limit won’t necessarily work.
The more general issue is that when you write that, “even if deceived people are rarer and non-deceived people are common, the number (measured by cardinality) of deceived people will be the same as the number of non-deceived people,” what that really means is that the claim that “deceived people are rare” is not a statement that can be formalized in terms of cardinality. That doesn’t mean that it cannot be converted to formal mathematics; it just means that you have to formalize it using a different mathematical concept. I used a limit, which is essentially a hack to make the rules of normal arithmetic apply by replacing “infinity” with “arbitrarily large finite numbers.” As I noted, it would probably be cleaner to formulate the statement using measure theory.
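The limit argument in miniature (my own sketch, with the galaxy population scaled from 10 billion down to 10): because every galaxy has the same ratio, the deceived fraction over the first N galaxies is constant in N, so the limit is trivial.

```python
# Each "galaxy" has G = 10 people who are not deceived and 1 who is
# (G stands in for 10 billion). Over the first N galaxies the deceived
# fraction is N / (N * (G + 1)) = 1/(G + 1), independent of N.

G = 10  # scaled-down stand-in for 10 billion
for N in (1, 100, 10_000):
    deceived, total = N, N * (G + 1)
    print(deceived / total)  # 1/11 = 0.0909... for every N
```

With 10 billion per galaxy the same computation gives 1/(1 + 10000000000), matching the limit above; when the ratio varies across galaxies, the running fraction need not be constant and the limit can fail to exist.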
"So long as the number of deceived people is not a function of where the people are located".
He makes the assumption that this is not the case, and then he is right.
But you make this assumption, by assuming that simpler universes get more weight. Then you are right.
(The word "number" in his sentence can be replaced by "weight". But I guess it's awkward to talk about the weight of people. The comment section would flame up into a culture war about fat-shaming.)
Not enough information to answer. There is no "the chance" until you impose a measure. The same is true of finite sets, it just doesn't look that way because people always assume we're talking about the discrete uniform measure if not otherwise specified.
If you randomly choose an integer what's the chance it's a multiple of one billion? Well, it depends on what exactly you mean by "randomly choose an integer", but according to any remotely natural definition the answer is either one-in-a-billion or undefined. Any definition according to which the answer is 50-50 will pretty much only ever come up if you specifically select it for that purpose.
If I'm understanding him right, then he's wrong. His argument seems to be that any infinite set is equal in value to any other infinite set. Even without understanding the special math that dictates the rules of infinities, this breaks down into absurdity pretty quickly. I could just as well say that each world contains 3 types of people - 900 million right-handed people, 100 million left-handed people, and one Boltzmann brain. And therefore, by his same logic, the chance of being a Boltzmann brain is now 1 in 3 instead of 1 in 2, just because we have now separated all beings into 3 groups instead of 2. And of course, we could just continue dividing the millions of real people into more and more groups, each of which would constitute an infinite set.
You're arguing intuition from finite sets to infinite sets.
For example: what percentage of numbers are divisible by 1 billion? You'd argue that it's one-in-a-billion, by taking the limit of the proportion of such numbers as n approaches infinity - but that depends on how we order the numbers; I can put aleph-naught numbers that are divisible by 1 billion in a row to start. (And making this worse, minds or universes don't have any natural order!)
I think the simplest not-wrong answer is that the problem is not well-defined, so trying to use it as an argument in favor of God (or anything else) is not valid.
The fact that there’s so much math Scott hasn’t learned is, from a certain standpoint, *phenomenal*. Since virtually everyone else with Scott’s clarity of thinking *does* know about things like asymptotic density, Scott is as fascinating for a STEM person as a fluent English speaker raised by wolves would be for a linguist. When, as almost always, Scott gets things right anyway, he shows how it’s possible to do that without formal apparatus, like Archimedes calculating crazy surface areas and volumes with no calculus or even algebra in the modern sense. And in the rare cases where Scott gets something wrong, we can take pride that the formal apparatus we learned is apparently good for something after all!
Yeah, reading this I kept wondering whether Scott actually doesn't know measure theory, or whether he just decided to avoid math language in order to make the writing clearer for non-mathematicians.
tbf a lot of things can have physical intuitions that normal people understand, if they get the right analogy, more easily even than a mathematician understands the formal math equivalent.
E.g. my eyes glaze over at the formal version of the squeeze theorem. But if you ask me: "You're a firefly flying through a tunnel. At any given distance can you ever go above the ceiling or below the floor?", I'll know the answer!
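The firefly picture maps onto a standard textbook case (my choice of example, not from the thread): f(x) = x²·sin(1/x) is trapped between the "floor" -x² and the "ceiling" x², both of which shrink to 0 as x approaches 0, so f must go to 0 too.

```python
import math

# Squeeze theorem, firefly-style: the function can wiggle however it
# likes, but it can never go above the ceiling x**2 or below the
# floor -x**2, and both converge to 0 as x -> 0.
for x in [0.1, 0.01, 0.001, 0.0001]:
    f = x**2 * math.sin(1 / x)
    assert -x**2 <= f <= x**2  # the tunnel walls always hold
    print(f"x={x:>7}: floor={-x**2:.2e}  f={f: .2e}  ceiling={x**2:.2e}")
```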
Yeah, I'm going to agree here. Scott says he doesn't actually understand; he just correctly applies, to new domains, the informal understanding he picked up from hearing what others have said - which seems like the same thing as understanding.
Hi Scott, sorry to put you on the spot, but I'm curious if you have any new thoughts on the Tegmarkian view since you last(? that I can find, anyway) wrote on it in 2014?
I feel like some of the ideas you moot in Ghost in the Quantum Turing Machine might be relevant, i.e. on what it means to actually run certain kinds of computation; and I trust you to think about what incompleteness might mean for a Tegmarkian more than I trust myself.
No pressure, and sorry to impose, but I feel like you're the person who could add the most value to this conversation of almost anyone.
It seems odd that you can use a random draw from an uncountably infinite set to simulate a random draw from the countably infinite set of integers. Very interesting, though. Maybe the entire conversation is just suffering from a lack of crazy advanced math courses.
Yeah, I was a math major back in the day but I'm weirded out by the fact that apparently I can make a uniform choice of a real number between 0 and 1 but not a uniform choice of an integer. And I'm thinking that means I can't make a uniform choice of a *rational* number between 0 and 1, since there is an easy mapping between that and integers.
Can you uniformly choose a real number between 0 and 1? How would you do it?
Possibility 1: you reel off a long string of digits. Obvious non-starter: you can spend the rest of the life of the universe reeling off digits and you'll never escape the rationals.
Possibility 2: you attempt to choose via some kind of physical analogue, like pointing at a particular point along a physical line. But you haven't _really_ specified a number then, you've just pointed your big fuzzy physical finger at a general region. Physical reality is made of big dumb fuzzy things like atoms and electrons and I'm not sure how you'd go about finding something that can be unambiguously defined (not even measured mind you, just defined) to arbitrary precision.
Possibility 1 is the correct one. If you're interested in practical applications of a random number between 0 and 1, you can continue this process until you get a number of arbitrary precision. If you are not, then you don't need to actually "spend" any time generating one.
Not sure what you meant with regard to rationals. Those would happen if at some point the string of digits enters a cycle which repeats forever. That has a probability of 0 (also known as "almost never"). Nothing strange here.
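The "generate digits lazily, stop when you have enough precision" idea can be sketched like this (a toy illustration of the process being described, not a claim about real numbers themselves): any finite prefix of k digits only pins the draw down to an interval of width 10^-k, and you never have to materialize the full infinite string.

```python
import random

def uniform_digits(seed=None):
    """Lazily emit decimal digits of a 'uniform' draw from (0, 1).

    Each digit is independent and uniform on 0..9; the full real
    number is never written down, only ever-narrower intervals.
    """
    rng = random.Random(seed)
    while True:
        yield rng.randrange(10)

gen = uniform_digits(seed=42)
prefix = [next(gen) for _ in range(8)]  # ask for 8 digits of precision
lo = sum(d * 10 ** -(i + 1) for i, d in enumerate(prefix))
hi = lo + 10 ** -8
print(f"the draw lies somewhere in [{lo:.8f}, {hi:.8f})")
```

The asymmetry in the thread survives this sketch: the same trick has no analogue for a uniform integer, because there's no way to spread probability 1 evenly over countably many outcomes.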
The rationals include any number that can be expressed with a finite number of digits, not just those that repeat. Any number which you can express by writing out digits is a rational number, but there's vastly more reals than rationals.
If you were to uniformly pick a real number between 0 and 1 then it will almost certainly not be a rational one. But the digit-picking process can only produce rational numbers.
For all practical purposes, that's another question.
If I'm not misinterpreting anything, you can't actually make a uniform choice of a *real* number between 0 and 1. You can make a choice of a *float* - an approximation of a real number to some accuracy. The trick is that for every given degree of accuracy there are only finitely many floats between 0 and 1.
Actually, one can create a mapping, but not one with a reasonable order, and I think it may require continuity. And you've got to be able to pick numbers with an infinitely long (decimal) representation. So it's not a very useful mapping.
FWIW, I normally decide that nothing containing infinity is real. It's only a convenient way to talk about real things. So the first problem is the assumption that there are infinitely many integers. You can argue based on ANY particular number as the maximum, and it often doesn't matter which, as long as it's a large enough number, but you can't argue based on infinity. Not about "reality". I normally decide that 10^66 is large enough, but which number you need to use depends on the problem. Math is a *map* of reality, it isn't reality.
I have never been very good at infinite-adjacent reasoning, but if you can make a uniform draw from the reals in (0, 1), doesn't that mean you can also make a uniform draw from the integers? e.g.:
1. Draw a number from the reals in (0,1)
2. With probability approaching 1, this number will be in the form 0 point [infinite number of random digits]
3. Chop off the "0 point"
4. Et voila?
I know they are different cardinalities so there must be an error somewhere - but what I really mean is, how is the above argument different from the one Scott gives?
That doesn’t work. As you say, what you get is an infinite string of digits, which is not a natural number — every natural number has a finite number of digits. Even if you stripped off any trailing zeros (however that would work), then for instance it would be impossible to pick a power of ten.
3. Round to the nearest prime, or nearest integer and take its largest prime factor
4. Repeat steps one through three, add the two prime numbers together
5. Assuming the strong Goldbach conjecture holds, divide that sum by two, and subtract some constant to line up with your preferred lower bound. Alternatively, if you want to include all negative integers, flip a coin.
Also as long as I'm jumping in with early comments, I was going to link to Sarah's old Twitter thread about how a lot of the claims about pain asymbolia are likely wrong and the whole thing is probably misdescribed, but she appears to have deleted it. Well -- go ask Sarah about pain asymbolia. :P
I'm a little unclear on what you're trying to say here. Are you just asking "How is refuting an argument for X different from making an argument for not-X?", and then also making a note about apostasy, or are you saying that you don't *care* about the difference because both are apostasy?
In either case, the bit about apostasy is... uh... unlikely to be convincing to anyone who doesn't already agree with you. Not a great way to argue your case! And if it's the second you meant, then, well, I have no particular reason to pay attention to your comment, do I? As evidently it's not trying to convince me.
If it's the first, and you are actually confused about this basic distinction... generally, an argument for X attempts to establish the truth of X, starting from a position of ignorance about X. If an argument for X is refuted, however, that doesn't generally provide an argument for not-X; it just means that that *particular* argument doesn't work. Other arguments for X may still work, and even if none do, refuting arguments for X still cannot establish not-X. When one incorrectly concludes that because an argument for X was false, this means X must be false, this is known sometimes as the "fallacy fallacy".
Now granted the above paragraph is basically implicitly treating real-world arguments as being roughly like mathematical proofs, when in reality, one would want to use a more Bayesian perspective, where refuting an argument for X *would* raise the subjective probability of not-X, and refuting all such arguments might "establish" not-X depending on priors, although of course really from a Bayesian perspective you don't truly "establish" anything. See Yudkowsky's sequences, etc. But these distinctions are still useful even outside of mathematics because they do bear on how much evidence a given refutation or counterargument provides.
For a non-mathematician, all the arguing by the mathematicians in the comments is like the arguing of theologians, and does lay itself open to the same jibe about "how many angels can fit on the head of a pin?"
Are you arguing about real things here? No? Yes? If they're all just mental concepts, who cares?
The answer is that Tegmark's mathematics are indeed about real things, but if you're religious, so are angels and therefore if angels exist and pins exist, can they interact is a valid question.
The problem with the question of how many angels can dance on the head of a pin is not just that it's unknowable, but that it's one of the least important questions you could possibly ask about angels, if they were real.
Stuff like: how were they created, what powers do they have, do they intercede in the lives of mortals, do they have any separate free will of their own or are they purely controlled by God's will? And if they're purely controlled by God's will, then what's the need for them, given that God is omnipotent? Et cetera. These are the important questions about angels.
I think that the people in this thread are at least asking the important questions about mathematical universes, rather than focusing on irrelevant minutiae.
"I think that the people in this thread are at least asking the important questions about mathematical universes, rather than focusing on irrelevant minutiae."
For people familiar with maths, yes. For those of us ignorant, it's all meaningless words words words. "Is there such a thing as infinity in reality or is it only a mathematical construct?" Angels. Pins.
And you're right that they're wrong to do that. But it does serve as a good reply to someone who presents the proof as the reason why you should believe this or that.
Attempts to refute attempted proofs of God's existence are not attempts to falsify God's existence. These were in fact one of the core topics of medieval universities when they were run by theologians - they wanted to ensure they only used good proofs of God's existence, not spurious ones, so they always appointed someone as the devil's advocate in any disputation to try to refute the proof, and see if it could be refuted effectively.
And yet even in that scenario, the devil's advocates existed purely to strengthen proofs of God. In other words, strawmen. Anyone seriously implying that the existence of God was anything but an absolute truth is just a heretic.
Upgrading Scott's conversion date to 2030. Probabilities remain 70% Catholic, 20% Orthodoxy, 10% other (most likely very high church Protestant). I would put Orthodoxy higher but I don't think he will want to give up on Scholasticism.
If you had asked me a year or two ago, that’s what I would have said, but with the number of posts in the past few months that engaged with specifically Christian arguments (including some bad ones based on claims of early miracles), I’m now wondering whether he’s already somewhat Christian.
It's probably a lot easier to argue with people on here about Christian arguments, given that the cultural water of Western Europe and the USA is Christian-derived, than to argue with fellow Jews about fine points of Judaism as to how they apply in the everyday life of those who read ACX. If you're anxious about Sabbath-keeping, then defining what counts as the eruv is going to be both fascinating and relevant to you, but not as broadly applicable to the wider audience as "so the Trump clampdown on immigration violates principles of 'love thy neighbour as thyself'".
Scott will never convert to any brand of Christianity, he's going to remain Jewish and if he does come to some version of belief in God, it will be within the bounds of Judaism. C'mon, give the guy a break, don't tease him about this because I'm sure he's fed up of it by now.
(Yeah I'd love if he became Catholic but I'm not dragging him to the baptismal font, and God will judge us all in the end on our own merits so let that be).
You know, a shocking number of my friends, both from LessWrong and elsewhere, have converted to Catholicism.
Scott undoubtedly doesn't remember me from back then (possibly because I'm not using the same username, so how could he–), but we had a few pleasant interactions—e.g. we both generally agreed with Eliezer on things, but the one time I was like "no wait Yud seems wrong about this" (in re either animal consciousness or dust specks, IIRC), Scott (as Yvain) had /also/ disagreed: a welcome reinforcement, because arguing with Eliezer could be somewhat... frustrating—so I'm tentatively including him in the group "/could have been/ a friend (if I weren't so durn lazy & reclusive)"; and I interpret this to mean that a conversion isn't /totally/ out of the question. 💡
I've always personally had some sort of affinity for the Church, meself. I grew up completely non-religious(ly), but my mother—convinced both that a) I would be terribly bullied in a public high-school, and b) that religious people were happier—(probably she was right about the former; not sure about the latter)—placed me in a Christian (Protestant) school.
This ensured that I would never, ever become a Protestant. They liked trashing on Catholics, but my very first real girlfriend was a Catholic girl (from an all-girls Catholic school! the stereotype is gloriously true, boys, I'm here to tell you that!–) and took me to Mass with her once. I did not know that I was supposed to refuse the Host, so when I was in front of the nun handing 'em out, I plucked it from her fingers, nodded my thanks, and ate it right there. Oops. (To her credit, she didn't turn a hair. I think she was about 100 years old, so perhaps she had just seen it all already–) When we got back to our seats, my C.S.G. said "omg Kvel you uh weren't supposed to do it like that", in a fit of giggles... I was miffed that she hadn't warned me!
Anyway, the point is: both in the books I read, which were heavy on the European history and historical fiction sides of things, and in my personal estimation & experience—that of a solemn & dignified service at a glorious cathedral, with ritual (something I've always loved) and the weight of years behind every element, vs. that of snotty, grasping (donate for the LORD! donate! donate!), awkwardly-"modern-&-kewl!", Christian Rock(™!) service at Protestant churches; and, too, even of doctrinal differences: e.g., Purgatory and the "righteous pagan" vs. "even the best & kindest of all humanity will BURN IN HELLFIRE FOREVER if they don't have FAITH!"—I found I much preferred the Catholic stuff.
My mother—such a thoroughgoing atheist that she was unable to even pretend to believe (she tried, when I was small, again due to the "if he grows up religious he'll be happier"; she'd had an unhappy life, as you might be able to tell, and this was just one more way to ensure—in her mind—that I... didn't)—has /also/ converted recently. Keeps bugging me to too.
To be honest, I'm seriously considering it(!); I love everything about the Church, really... the sticking point is that I just have a really difficult time with, uh... a lot of the Bible, you might say. The OT in particular.
It is hard to accept that this fellow going around deeply concerned about propagating the ceaseless slaughter of innocent animals and the same of non-Judaic tribes, and about the sexual practices of His people & whether or not they're mixing types of fiber together on their persons & if they're getting too close in proximity to menstruation & ... etc., etc. ... is /really/ the portrait of the omnipotent & omniscient God of All; /really/ the way He would like to be represented.
It's blasphemous, I know, but---I can't help but feel like "if I were to write a Book for All Mankind Forever, I'd do it a little differently", y'know?
Not to kick the Protestants, but some denominations do seem to put more emphasis on the Epistles of St Paul than the Gospels, to the point that I can see why people go "It was Paul who invented Christianity".
No, he didn't, but the post-Reformation emphasis on using texts from Paul to back up their positions made it seem that way.
That's extremely interesting! I wasn't aware of that; I knew I preferred much Catholic /doctrine/, sure---but most of my reading on the Church has either been from medieval texts or incidental (I really love Papal history, for example, but it doesn't tend to go much into /current/ Catholic stances, heh)... so this is very intriguing---and welcome---news.
--------------------------
I remember, at my (Protestant) Christian high-school, a science class wherein the teacher informed us that "stellar evolution" is just as much a lie as is "Darwinism". Stars do not change; supernovae & black holes are snares laid by the unbeliever---ably assisted by the Enemy, no doubt---and "you will never see these atheist 'scientists' ever come with direct evidence of either one. Why? Well, because... THEY DON'T EXIST!"* [cue derisive laughter from teacher & students]
I argued, naturally. By sheer chance, this was not long after one of the first direct imagings of a black hole, and I had /just read/ some articles about it in a recent issue of Astronomy & of... some other magazine (Popular Science, maybe); I said "I will bring them next class."
I thought that the instructor would be impressed by my knowledge & initiative, and would eagerly read through the magazines---perhaps even right-then-&-there---and we would all have a fun, enlightening conversation about astronomy and stuff. :)
Surprisingly, no. In fact, he did not seem at all pleased, for some reason; he took my magazines, said he'd return them once he read them but not to bring it up in class again before that, reiterated to the class that "black holes are not real no matter how many magazines you bring in [gives me a look]", and... well, I never saw those particular issues again, heh.
This sort of thing happened quite a bit, although I learned not to argue back (...sort of--); memories of biting my tongue repeatedly, as arguments such as "they want you to believe that YOUR GRANDFATHER was a MONKEY" were advanced, are engraved deep in my mind...
...so /that/ was my dominant impression of Christianity, for a long time. Hence: I'm more intrigued than ever by the Church, now, after reading your comment---maybe I ought to take that RCIA class thing after all...
--------------------------
*(I mean, black holes & supernovae, presumably---not atheist scientists. He definitely thought /those/ existed.)
catholicism has issues because of that beautiful, ancient church filled with artistic merit and appealing to fine sensibilities.
the bible is messy. Jesus spits on the ground and makes mudpies to heal someone. The father of the church hacks somebody's ear off with a sword, who Jesus heals out of frustration. John ran away bare ass naked from the guards who came to capture him.
even the jews lol, the ot is hilarious in that much of it is God going wtf why are you jilting me for yet another God? Ezekiel, go lay on your side for a year and bake food over a fire burned with human shit to remind them who is their God.
idk how we got from homeless Jesus getting accused of being a drunk who hangs around prostitutes to this.
That's why the Eucharist, though. It's not a symbol or an ordinance or 'you ingest Jesus in some spiritual fashion via faith', it's material bread and wine and the Body of Christ. God interacts with us through the material, hence sacraments.
the eucharist as i understand it is more a theological explanation of how a believer is justified or in right standing with God through partaking in a ritual. the experience is the ritual which believers do "in remembrance of him."
but the "christ is physically present" is there to say "God forgives and changes you when you enact it lawfully" and thats another issue. it leads to a whole scaffolding of explanations that try and tell us how God does what he does as they reason out his mysteries.
Okay, this is where the rubber hits the road. The Eucharist is not just a ritual or commemoration. Let me make it clear from the outset that I'm one of the lunatics who believes in the Real Presence and in Transubstantiation (which admittedly is our 'best guess' theory and not at all the full explanation).
But when Jesus said This is My Body, this is My Blood - the bread becomes flesh, the wine becomes blood. It's not just "do this ritual in remembrance of me", it's the whole "this is a hard saying" thing where many of the disciples up and left after He said all this:
John 6:
"51 I am the living bread that came down from heaven. If anyone eats of this bread, he will live forever. And the bread that I will give for the life of the world is my flesh.”
52 The Jews then disputed among themselves, saying, “How can this man give us his flesh to eat?” 53 So Jesus said to them, “Truly, truly, I say to you, unless you eat the flesh of the Son of Man and drink his blood, you have no life in you. 54 Whoever feeds on my flesh and drinks my blood has eternal life, and I will raise him up on the last day. 55 For my flesh is true food, and my blood is true drink. 56 Whoever feeds on my flesh and drinks my blood abides in me, and I in him. 57 As the living Father sent me, and I live because of the Father, so whoever feeds on me, he also will live because of me. 58 This is the bread that came down from heaven, not like the bread the fathers ate, and died. Whoever feeds on this bread will live forever.” 59 Jesus said these things in the synagogue, as he taught at Capernaum.
60 When many of his disciples heard it, they said, “This is a hard saying; who can listen to it?” 61 But Jesus, knowing in himself that his disciples were grumbling about this, said to them, “Do you take offense at this?"
So to say "Christ is physically present" is much, *much* more than to say "God forgives and changes you when you enact it lawfully".
It is a Mystery, who can understand these things? 😁
I think part of the appeal in Catholicism may be all that Scholasticism, which enables a lot of word-wrangling. You do get equivalent denominations which like to do the same, such as Calvinism, but Calvinism can be a bit grim unless you scrap a lot of the emphasis on the negatives of double predestination.
So if you're smart and like arguing about concepts, there's a smorgasbord laid out for you in Catholicism.
the problem is when you go to a "high" church, you quickly find its two extremes:
1. A heavily intellectualized magisterium class that may actually subvert the fundamentals of the faith if not watched.
2. a laity that just does it because their parents did. They'd be Elks or Freemasons if their parents had been, too.
you have a small "convert" class that does experience something and is devout, but i think a lot of the 60s fundamentalist revival in the states happened precisely because high churches ossified into this. it's so bad in New England that all those elegant churches rent themselves out to fundamentalists because they can't actually maintain congregations
Absolutely---I find that, at least here in the American South, there is a strong current of anti-intellectualism in the Protestant churches; as someone who /loves/ word-wrangling (nice term, there), this is very unappealing to me.
One thing I always liked about Islam is that... well, I love language, and I love history; if I'd chosen better, as a young man, I think I could have easily acquired a PhD in historical linguistics or the like, and spent all my time reading, writing, & thinking about that sort of thing---like I do now... except I'd get paid for it, and published, and invited to conferences to speak upon it...
...whereas now---being outside the gleaming Red... er, ivory! I mean ivory!... halls of academia---no one really cares about that sort of stuff; the co-workers & friends I speak to don't have any interest in it, the ladies I eye are not impressed by it, the employers I meet won't pay for it, the ... etc. Interest in philosophy, language, history: a thankless card to draw, of no benefit even socially.
But in the Muslim community, ah... with the Muslims, if you're real good at fine distinctions of philosophy, if you've a facility with Arabic, if you possess talent for argument & exegesis, you're /hot stuff./ (The price, of course, is that the philosophy, exegesis, and language you must study are all of the Islamic variety; they do have some inherent interest, but to be the requisite degree of monomaniacal, I think one might have to really believe in the religion...)
Similarly, there's /one community/ in the world---outside of certain ever-shrinking, ever-more-irrelevant uni departments---that will be impressed, excited, interested, and perhaps even remunerative, if you spend all your time studying Latin & thinking about medieval theories of semantics: the Church, of course. THIS is a big draw for me, heh. If I could ever sincerely convert, I might try for seminary. If you're gonna go, go hard, I say!
--------------------------
Somewhat related, somewhat apropos-of-nothing: Protestants, and atheists, like to bag on Catholics for the doctrine of transubstantiation---but I remember being quite young (late teens?) & reading some philosophical explication of the concept; to my utter amazement, /it made total sense./
It just requires one to approach it upon its own terms, rather than from the angle that modern thought might suggest; if you're not familiar with ontology & metaphysics---particularly the Aristotelian/Thomistic varieties---then you're probably going to conceive of the doctrine incorrectly. It's not even really obscure or difficult (like some Trinitarian theology is, heh): understand "accident" vs. "substance", and it all falls into place!
I dunno---no real point here; I just thought it was interesting. Surprised the heck outta me, at the time!* (With the modern bias against historical thought/thinkers, I've found that most also don't realize that any "obvious" quick objection one might think up has /definitely/ already been considered by multiple Catholic philosophers, heh.)
--------------------------
*(unfortunately, the heck came back with a vengeance)
> It is hard to accept that this fellow going around deeply concerned about propagating the ceaseless slaughter of innocent animals and the same of non-Judaic tribes, and about the sexual practices of His people, and whether or not they're mixing types of fiber together on their persons or being in too close proximity to menstruation, or... etc., is /really/ the portrait of the omnipotent & omniscient God of All; /really/ the way He would like to be represented. It's blasphemous, I know, but I can't help but feel like "if I were to write a Book for All Mankind Forever, I'd do it a little differently", y'know?
If you really like Christ but not the OT, you could always fall back on the (non-standard, non-Catholic) view that the weird fellow of the OT was not much of a God at all, more like a reification of the Jewish tribes' own ideas of an ideal supernatural leader. That doesn't preclude Christ from being divine and the NT from being a real revelation if that's your thing. All the bits where they try hard to tie Jesus to the old Jewish prophetic tradition can be explained away as cultural emergency patching at the time.
From there on you could also give up on the unicity of the incarnation, and end up with a kind of universalist view where you get to choose your preferred divine symbol out of potentially many (or maybe it chooses you).
This seems sort of like the Gnostic view, to me---and, I admit, it does have appeal. The NT reads, to me, as if it does have /something/ of the numinous in it; that is, out of all the holy books I've read, I have felt that that which had the greatest proportion of something-which-could-be alien & divine is the NT (in particular, the Gospels).
That said, though: what motivates either the "Gnostic" view, /or/ the "universalist" one? I.e., why are we going for /either one/, instead of just throwing up our hands & saying "material monism & human authors, that's all I see 'round here"? I have difficulty with this.
Well Gnosticism is a whole historical movement with a complicated history, but these days it mostly stands for a form of spiritual conspiratorial thinking: there's something wrong with the world, and it's the fault of a kind of trickster lesser God. I don't find this very appealing; evolution and game theory are plenty enough to explain most of the ways in which we commonly find the world persistently unpleasant - what the Buddhists call 'dukkha'.
As for why not just materialism, it's really a question of sensibility and preference. No view is free from difficulties, materialism included; it rather abruptly denies any depth or power to subjective experience, awkwardly attempting to subsume it under measurable, objective facts. Yet our actual experience is the other way round, if we pay attention: apparently objective things only ever appear within a field of subjective experience. There's a whole strand of thinkers that take this kind of thing seriously, check out for example Bernardo Kastrup or Arnold Zuboff, they have whole lectures on youtube.
Excellent point. Any further readings, figures, search terms etc. you'd like to recommend re: idealism (or "metaphysical realism"? sometimes it seems t'me that the terms "idealist" & "realist" are used in several—& almost opposite—senses, in philosophy) shall be gratefully received!
My prediction does not preclude a pit-stop in Judaism. But if Scott concludes that God exists and decides to practice a religion (Deism, or some other system where you acknowledge the Creator but don't change your behavior in any way, is a possibility), he's going to want to pick the correct one. He'll need to decide that Judaism's specific claims about God, both positive (He wants us to follow the Law, and a Messiah is coming) and negative (Jesus of Nazareth isn't that guy), are correct. Having accepted that the Old Testament is true, he'll need to come up with a specific reason to reject the New. You can't just say "miracles are impossible" or "the dead can't come back to life." So I just don't think he gets there, not with the way that he engages with Christian arguments. It's going to be all or nothing.
Sure, lapsed or never-practicing Jews sometimes rediscover their childhood or ancestral faith. But I imagine that's more out of a sense of spiritual longing. They connect with something familiar because it feels right. I don't think people reason themselves into Judaism.
Simplicity is often used to assert a higher probability of being "real". In fact, Occam's razor is only a guide for humans on how to pick a model worth testing, mostly because a simpler model is easier to work with and test, not because it is likely to be more "real".
There have been some successes with simpler models, but mostly as stepping stones to something much more complicated. There have been plenty of failures. In fundamental physics nearly all "simple" extensions or replacements of the Standard model of particle physics have experimental consequences that contradict observations. Same happened with all known extensions of General Relativity.
If you are a researcher and look critically over your own area, you will notice that "simple" is not a good proxy for "accurate".
I think you're talking about some sort of vague philosophy-of-science political debate. On a mathematical/technical level, which I think is what we're doing here, simpler simply *is* more probable, that's how math works.
Very simple example is that "the first object I will pick out of this hat is blue" is more probable than "the first object I will pick out of this hat is a blue sphere between 1 and 2 inches in size".
Slightly more complicated example: suppose that I am rolling ten d20s, and I will declare success if EITHER the red dice comes up 20, OR the blue dice comes up 5 and the yellow dice comes up 3 and it's raining outside. I declare success. Which is more likely - that the red dice came up 20, or that the conjunctive thing was true?
I realize this seems cheating because I'm using easily-quantified things like dice rolls, but I think the same principle extends to everything else. The reason "OJ killed his wife" is more likely than "An international spy killed OJ's wife, then used supertechnology to fake the DNA" is because it's the conjunction of p(international spy) * p(has supertechnology) * p(wanted to do this), and we can expand each of those into more complex propositions in turn.
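The ten-d20s comparison can be checked exactly; a quick sketch (the 50% chance of rain is a placeholder assumption, not a figure from the comment):

```python
from fractions import Fraction

# Placeholder assumption: a 50% chance of rain.
p_rain = Fraction(1, 2)

# Simple hypothesis: the red d20 alone came up 20.
p_red_20 = Fraction(1, 20)

# Conjunctive hypothesis: blue came up 5 AND yellow came up 3 AND it rained.
p_conjunction = Fraction(1, 20) * Fraction(1, 20) * p_rain

print(p_red_20, p_conjunction)  # 1/20 vs 1/800
```

Even if rain were certain, the conjunction tops out at 1/400, still well below 1/20, so the single-die disjunct carries almost all the probability.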
Right, the simple examples in well-researched areas work as you describe. Anything out there, like the MUH or, well, actual science, does not.
For example, the simplest extension of the Standard Model is the grand unified theory SU(5) -- it is wrong. SO(10) is also wrong. String theory is very simple... sort of, if you know the right math. It has failed every experimental test so far.
In your professional area, does Occam's razor work well? I guess, maybe, in most cases? If someone comes in talking about vegan trans alien-looking people trying to murder them with a katana, they are probably in psychosis and hallucinating, and need some antipsychotics stat. Unless you know a few more details about their lives.
The topic you picked for your original post is way out there, your normal intuition is pretty useless there. You simply do not have enough information to tell whether there are metaphorical Zizians out to get you. Well, there always are, but you do not know where they are hiding and how they look.
AIXI was a great start for AGI research, but it ended up being completely useless; real progress was messy and complicated, though it might look "simple" with the benefit of hindsight.
More examples: tiny animals causing disease. BS until you accidentally stumble on them by looking through an unrelated instrument.
Rationality, Transhumanism, frontier research is not something where a dice model is very useful. "the same principle extends to everything else" seems to be empirically wrong when that something else is a poorly understood area.
My understanding is that string theory has received zero empirical tests, and in fact has produced zero concrete predictions that even could be tested.
This is a bit of a contentious area. Supersymmetric partners were one famous prediction that those doing high-energy physics research expected to be confirmed at the LHC with high probability. There were theory/phenomenology papers with confidence intervals and everything. The utter desert of what was found (Higgs and nothing else) was a surprise to most area experts.
Other, less talked about predictions were refuted right away but quickly patched up, such as the predicted 10/11 dimensions (oh but we can fix this with Calabi-Yau manifolds, nothing to see here!).
The main reason String theory is still around is because there is nothing better, and it inspired holography, which rings true to most experts, though has not been experimentally tested, either.
"If someone comes in talking about vegan trans alien-looking people trying to murder them with a katana, they are probably in psychosis and hallucinating, and need some antipsychotics stat. Unless you know a few more details about their lives."
Ah - you wouldn't be referring to recent example of some fluttering in the dovecotes, would you? 😁
EDIT: Never mind, I see that you are. Well, in the Tegmark universe, every possible thing that can happen will happen!
I don't think the contention is that "the simplest alternative is always right", but rather "the simplest alternative /that fits certain criteria/—e.g., like 'doesn't contradict something else we have observed to be true'—is more likely to be right"; hence, your examples re: extensions of the SM don't offer a refutation of Scott's claim.
“Rationality, Transhumanism, frontier research is not something where a dice model is very useful.”
This is correct, and probably why Scott and BB are arguing about Boltzmann brains. The reality is that most people here are clearly minds that operate on the frontier of knowledge. There are not enough frontier minds to get the data needed to even think a frontier mind follows any normal distribution!
For all we know, frontier minds are the only true agents, and while they're probably highly correlated with a type of brain structure found in the tail end of things like IQ and/or level of personal financial discipline, we don't actually know that.
Say we are in a simulation, and the simulator(s) obtained the technology to make it naturalistically. It makes sense to me that, in order to save resources or to prevent observers from emerging in bodies that are more likely to suffer, agency would be granted only to the minds most likely to become a superintelligence, or to minds with increased odds of making more minds that help build it.
I don't understand at all why we think we all won the lottery.
Your examples all involve a conjunction where one part of the conjunction is isomorphic to the entire alternative hypothesis. Your conclusions don't hold for examples without this special property.
We can make options complex in ways other than conjunctions. Which is more likely: The die will roll a 20, or the die will roll EITHER a prime number OR a perfect number OR an even number?
We can also make conjunctions that are still more likely than some non-conjoined alternatives. Which is more likely: I will win the lottery tomorrow, or the sun will rise AND the grass will grow AND the birds will sing AND I will blink at least 7 times during the day?
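The d20 disjunction counterexample can be worked out exactly; a small brute-force sketch:

```python
from fractions import Fraction

faces = range(1, 21)  # a standard d20

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def is_perfect(n):
    # equal to the sum of its proper divisors (6 is the only one on a d20)
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

p_simple = Fraction(sum(1 for f in faces if f == 20), 20)
p_complex = Fraction(sum(1 for f in faces
                         if is_prime(f) or is_perfect(f) or f % 2 == 0), 20)

print(p_simple, p_complex)  # 1/20 vs 17/20
```

The only faces the "complex" disjunction misses are 1, 9 and 15, so it wins 17/20 to 1/20.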
There's some point in The Sequences where Eliezer has a hypothetical involving an alien mind that thinks the universe is like drawing papers out of a hat (without replacement), such that each outcome becomes less likely the more times it's already been drawn, and when you ask them why they'd expect this theory to suddenly start making good predictions NOW, they answer "because it never has before!" (I think this was somewhere in the section about No Universally-Compelling Arguments.)
I don't think those are even the sorts of alternatives that Occam's Razor is meant to apply to. There's another point in The Sequences where Eliezer gives an example of a photon that escapes the observable universe in a way where no detector could ever possibly catch it, and asks whether we should believe that the photon disappears, or whether it continues in the way that photons usually behave. He says some might argue that it would be simpler if it disappeared, because then the universe would contain less stuff. Eliezer argues that the kind of simplicity that has proven to be an effective heuristic in scientific history is simplicity of rules, not simplicity of the universe's contents, and so we should believe that the photon continues normally. Here, you seem to be trying to apply it to "will A or B happen next?" examples.
I think Occam's Razor has a decent historical track record, but I don't think I've ever heard a strong theoretical justification for it. My first hit on Google says it's because the more assumptions you put in your theory, the more chances it has to be wrong about something, but I think that sounds more like an intuition pump than a serious argument. (This justifies why you shouldn't just add random extra stuff onto your theory for no reason, but doesn't justify preferring the simpler theory where one of them isn't JUST "the same thing but with extra stuff tacked on.")
Though, I did come up with this parable that makes it seem more intuitive to me that simpler explanations would have more weight. Perhaps you'd find it interesting? (Warning: I haven't run this past a mathematician.)
Suppose your teacher announces that she's going to hold a random drawing to decide what rules to use for the next class simulation, and sets out a stack of blank forms for kids to write down the rules they want. Further, she says you can make up more than one option if you want, but she's going to remove all duplicates before the drawing, so don't bother submitting the same option more than once.
Alice gets a form, sees that there's 500 character-sized boxes, so apparently your system needs to fit within that. She thinks really hard and comes up with a system she likes, and carefully writes it out, finishing with the "end program" instruction exactly at the end of the 500th box, filling the form entirely.
Bob gets a form, and thinks really hard, and carefully writes out a system that happens to take 499 boxes. After he hands it in, he gets a thoughtful look, then grabs a big stack of blank forms and brings them back to his desk. He writes out his entire system again, but this time, in the final, extra box, he writes "A" so that it's not a duplicate of his previous submission. Then on the next form, and writes out his system again, and adds "B". And so on.
Carla sneaks a peek at what Bob is doing, then she picks up a stack of ~8 billion blank forms so that she can write out her 493-character system with every possible 7-letter postscript.
To summarize: If you have a "fixed budget", then there are many equivalent ways to describe a simple system using that budget, and not as many equivalent ways to describe a more complex system. Thus, weighting things inversely to their length is sort-of equivalent to a _uniform_ weight across all fixed-sized inputs where the inputs are allowed to have non-functional padding to reach the correct length. Take the limit as the allowed input size goes to infinity and you get the Solomonoff Prior.
(This still doesn't solve the issue that Kolmogorov Complexity is language-dependent, of course.)
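The counting in the parable can be sketched numerically (the alphabet size and box count here are arbitrary toy values, not the ones in the story):

```python
# Toy version of the padding count: with an alphabet of k symbols and a
# form of B boxes, a program of length L can be padded out in k**(B - L)
# distinct ways, so a uniform draw over all filled-in forms weights each
# program in proportion to k**(-L).
k = 26  # alphabet size (toy choice)
B = 10  # total boxes on the form (toy choice)

def tickets(program_length):
    """Number of distinct padded submissions one program gets."""
    return k ** (B - program_length)

# A 4-symbol program gets 26**3 times as many "tickets" as a 7-symbol one:
print(tickets(4) // tickets(7))  # 17576
```

Shorter programs get exponentially more "tickets", which is the inverse-length weighting described above.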
I don't think your examples of "more complex but more likely alternatives" hold here. For using OR: we're not really testing a more complex alternative, just seeing which one of a group of alternatives obtains. I.e., what's the equivalent here in terms of "hypotheses about the world"? It's that the elements you've composed with "OR" are separate and independent hypotheses, surely—in a way, reminiscent of p-hacking: "either I will observe that women respond better to this treatment OR that men will OR that Hispanics will OR that Hispanic men will OR that elderly men will..." We don't get to call this a /single/ hypothesis, I'd think, not for these purposes.
For the blinking and birdies example: well, yeah, if we purposefully take things for which the probability is known and compose likely ones here and unlikely ones there, we'll get this result. The contention isn't that the simpler alternative is /always/ to be preferred; it's that /all else being equal/, it is.
Edit: Although it wasn't a direct response to your comment here but rather to Scott, I believe hsid's response below is actually saying the same thing I am here—except less messily, heh. That is, surely Scott's comment is to be read "lacking better information / all else being equal, the simpler option is preferred", not "the simpler option is /always/ the best".
The typical informal statement of Occam's Razor usually includes a caveat like "all else being equal", but in the comment that I replied to, Scott claimed "simpler simply *is* more probable, that's how math works." I was attempting to refute that, not the common informal claim. (Though if you attempt to formalize what you mean by "all else being equal", I think you'll run into trouble.)
Saying that chaining things with "and" is a legitimate form of increased complexity, but that chaining things with "or" is cheating, sounds to me like special pleading. Conjunctions and disjunctions both have higher Kolmogorov Complexity (in typical computational languages). If you want to convince me this isn't just special pleading, come back to me with a principled and general way to measure complexity, and show how this result emerges from your method.
>I was attempting to refute that, not the common informal claim.
Ah, I see---pardon; my mistake!
>Saying that chaining things with "and" is a legitimate form of increased complexity, but that chaining things with "or" is cheating, sounds to me like special pleading.
I have difficulty putting my intuition here into words---most especially, the proper words, since I don't read much about this sort of thing---but I don't think it's "special pleading", insofar as this seems like the usual way to view it in terms of testing hypotheses against the world. That is, the argument isn't about the definition of "complex" (as I intended it, anyway), but rather about what counts as "one hypothesis/option/alternative/prediction", maybe.
I.e., "either this pen is red OR it is blue" is testing two separate, independent hypotheses, wherein if either one is true the other one doesn't matter at all---so, while the entire statement is indeed more complex than either one alone, in some sense we're just making two predictions. One statement, but two entirely independent & separate predictions.
Or, say: "either A will happen, or not-A will happen"---is this a prediction/hypothesis /at all/? Certainly not, in the sense we're meaning?
(If that's not convincing, I'll have to just throw my hands up & shrug---I doubt I'd be able to articulate it any better than that... but I bet you this has been looked at /somewhere/ in the literature before.)
>If you want to convince me this isn't just special pleading, come back to me with a principled and general way to measure complexity, and show how this result emerges from your method.
I have a truly marvelous demonstration of this, but---unfortunately---this comment-box is too small to contain it.
Notice that statements like "this pen is red" and "this pen is blue" do not refer to a single possible outcome, but each refer to a large class of outcomes, and those classes are not even equal in size. A color table I quickly looked up online says that "red" light is light with a wavelength anywhere in the range 620-750nm (a range spanning 130nm), while "blue" light is only 450-475nm (or 450-495 if you include cyan, spanning 45nm of difference; still less than half the numerical range of the "red" category).
Why are all possible shades of red allowed to be combined into "a single hypothesis", but shades of red and blue are not? Seems pretty arbitrary.
The hypothesis "this pen is either orange or cyan" probably covers fewer distinct possibilities than the hypothesis "this pen is red". Are you going to count the former as 2 distinct hypotheses just because of the way those wavelengths were summarized in English?
(This is before considering that the pen's color might not be uniform, or that one "color" can be a mixture of multiple wavelengths, or that the same color could be produced by more than one physical arrangement of matter. The total number of theoretically-distinguishable physical states covered by "this pen is red" is vast, and you'd need to know a lot of esoteric physics to even say whether or not it's finite.)
Rolling a twenty is a strict subset of rolling an even number. A principled way to describe hypotheses like that would be to list every distinguishable resting position the die could land in, and a probability for each.
If the sun fails to rise tomorrow, that would imply some calamity, most likely one severe enough that nonessential functions such as lottery drawings will be canceled.
What's your point? None of that changes that the more complicated option in those examples is more likely, so they still work as counter-examples to Scott's claim that "simpler simply *is* more probable, that's how math works."
And if you still object to those specific details for some reason, it's trivial to modify the examples to avoid the specific details you pointed out. For instance, you could change the first example to be odd numbers instead of evens. So I don't see how this could possibly change my conclusions.
(Also, your "principled way" of dealing with the die isn't actually possible; the number of in-principle distinguishable ways the die could land is far too vast for a human being to ever list them all in real life, and might be literally infinite depending on esoteric details of how physics is implemented. And even if you could do it, I suspect you only believe this would help because you're assuming that the die is fair, which isn't guaranteed; in fact ~100% of real-life dice are at least slightly unfair. I'm not aware of any principled reason to expect that reality is fundamentally divided into a discrete finite set of equally-likely options.)
An object of arbitrary complexity can - and often will - have a small, discrete set of ways it can come to rest on a flat surface. Consider the https://en.wikipedia.org/wiki/G%C3%B6mb%C3%B6c
If I think a given d6 is approximately fair, I might list the six symbols on its flat surfaces and "0.166" for each, with that last 0.004 as rounding error, plus a hedge against various unlikely scenarios like it landing on edge, being stolen and destroyed mid-throw, or simply bouncing off the table into some crevice where the result can't be feasibly determined.
For a weighted die, maybe one favored side is 0.375, the four around it are 0.125, and 0.120 for the side directly opposite the favored one, remainder increased to 0.005 since mid-throw interruptions are notably more likely, in part due to risk of someone noticing it's weighted and getting angry about being deceived.
Those predictions are equally complex, or at least close enough that Occam's Razor cannot slide easily between them.
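Written out as explicit probability tables, the two predictions above are easy to check for completeness and to see that they are the same size; a minimal sketch using the numbers from the comment:

```python
import math

# The two predictions as explicit probability tables ("other" covers edge
# landings, mid-throw interruptions, lost dice, etc.).
fair = {1: 0.166, 2: 0.166, 3: 0.166, 4: 0.166, 5: 0.166, 6: 0.166,
        "other": 0.004}
weighted = {1: 0.375, 2: 0.125, 3: 0.125, 4: 0.125, 5: 0.125, 6: 0.120,
            "other": 0.005}

# Both tables are complete distributions with the same number of entries.
for table in (fair, weighted):
    assert math.isclose(sum(table.values()), 1.0)
print(len(fair) == len(weighted))  # True
```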
"Or a perfect number" is irrelevant padding if you include even numbers, since all known perfect numbers are even, and there aren't very many in the range small enough for feasible polyhedral dice. https://qntm.org/trick
Assuming we're talking about a d20, "will be even or prime with certainty" reduces to "will not be 1, 9, or 15," and "will be 20" is similarly equivalent to "will not be 1 through 19." A principled way to encode such absolute predictions when selecting capriciously from a short list of discrete options would be to assign a single bit each, in which case there are roughly 2^20 = 1,048,576 equivalently complex theories. Those aimed at narrower subsets are more impressive if validated, but that's Popper or Bayes, not Occam.
Even contrived dice examples point the opposite way, I think. Imagine I roll a trillion-sided die, and I declare success if the result is exactly one trillion, OR if the result is divisible by 3, 7 and 11 and that number appears as a group of digits of pi in some place, or some similar contrived rule. Saying “we got one trillion” seems simpler from the epistemological/information-theoretic point of view, but it's probably still less likely. Since we have a model of the system we're completely certain of, we go straight to counting probabilities, without considering simplicity at all.
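The comparison can be made exact; a sketch that drops the pi-digits clause (a harmless simplification if, as widely conjectured, every finite digit string appears somewhere in pi):

```python
from fractions import Fraction

N = 10**12  # faces on the hypothetical trillion-sided die

# "Simple" outcome: the result is exactly one trillion.
p_exact = Fraction(1, N)

# "Contrived" outcome, ignoring the pi clause: divisible by 3, 7 and 11,
# i.e. by 231.
p_contrived = Fraction(N // (3 * 7 * 11), N)

print(p_contrived / p_exact)  # the contrived outcome is ~4.3 billion times likelier
```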
Simplicity is only useful when we have so little information about the underlying model that for any extra assumption or epicycle in our explanation, it becomes very likely that it will clash with some unknown part of the model.
I wrote this in another comment before reading this one, but dinosaur bones are another good example, actually. The simplest explanation is probably something like “they were already there from the start”, but since we already know how the world actually works, the real most likely explanation is a way more complicated one that involves real living creatures with skeletons, the theory of evolution, special kinds of bogs to preserve the remains, etc.
I don't think Scott would disagree with this, nor that he meant to—surely his comment is to be read "all else being equal / no other information being available, the simpler alternative is to be preferred."
>simpler simply *is* more probable, that's how math works.
No it's not. Mathematicians like studying simple objects because mathematicians (like myself) are simple. Mathematicians also like structure, and simple objects tend to have structure.
But: consider e.g. the set of functions from [0,1] to [0,1] (i.e. the unit interval). Such a function takes a number from [0,1] and spits out a number from [0,1]. Let's limit ourselves to the functions that are equal to f(x) = x on the interval [0,0.5]. How does the function continue? What's the most "common" way for it to continue? There's no good answer, and nothing interesting to say in this case, so mathematicians will study more complicated things (like continuations with certain properties, or impose additional structure on the problem) - but the simplest case, a function on which we don't impose any structure, actually has the *most* possibilities.
This doesn't seem very relevant to the question at hand, to me; i.e., this isn't to say anything about probability, except about what mathematicians are more likely to study.
Actually, this might support Scott's point instead: the simpler function having more "possibilities" means that any given instance is more likely to belong to the simpler class, since possibilities that fit the-function-plus-additional-structure are fewer.
I don't understand what you're trying to say. "Probability" *is* a mathematical abstraction. If you're talking about probability, you need to define what it is you're talking about, and the way to do that is mathematics.
>Actually, this might support Scott's point instead: the simpler function having more "possibilities" means that any given instance of the latter is more likely to belong to the simpler function, since possibilities that fit the-function-plus-additional-structure are fewer.
My point is exactly that the "simpler" functions (in the day-to-day sense) are not more common. I am *refuting* the point you are trying to make. The additional structure is what makes the function *interesting* and *useful*.
"More common" in what sense? As far as I can tell, you mean "more commonly studied"---which isn't relevant at all, so I must be misinterpreting.
The only other interpretation I can come up with (of your initial comment) is the one I explained above: a function "having [more] possibilities" than another means it is more likely any one random draw from the combined pool of possibilities will belong to the former. That is, in the context of making predictions / forming hypotheses about the world: a hypothesis that more possible evidence fits is more likely to be confirmed by evidence; imposing additional structure makes it /less/ likely, all else being equal.
>I am *refuting* the point you are trying to make. The additional structure is what makes the function *interesting* and *useful*.
I know what you're /trying/ to do; the confusion---whether on my part or yours---lies elsewhere (i.e., in whether or not you're actually doing it). I make no comment on "interesting" or "useful".
Yes. What I mean is exactly the opposite of what you understood.
> I make no comment on "interesting" or "useful".
I'm sorry, let me rephrase - mathematically, the things you and I (or mathematicians) call "simple" are usually the "interesting" or "useful" or "intuitive" objects. If you just pick a random object, the overwhelmingly most common thing is just noise (and I *don't* mean noise in a technical sense).
If you define “probable” as a synonym for “simple in the language I use to describe it” (as the solomonoff people do) then yes this is trivial. But if you think that every measure satisfying the kolmogorov axioms is an equally good candidate for probability (as is Bayesian orthodoxy in many fields) then this just isn’t true.
There are multiple justifications of Occam's razor, some more theoretical, some more practical.
The tautologous one is that if you have a theory with N premises whose individual plausibilities are unknown, their conjunction is going to be more plausible than that of a theory with N+1. That's not just methodological.
It's also very impractical, because you usually do have some information about how likely your premises are. But the problem with the entirely methodological approach is that you haven't solved the basic question at all: you are considering the simpler hypotheses first because you must, not because they are more plausible.
I wish you'd been able to work in a discussion of the Ontological Proof (that the definition of God necessitates his existence) somewhere, since it seems to have a lot of similarities to the Tegmark theory (or at least my vague understanding of it from your post) that mathematical truths must necessarily have existence. You have to carefully steelman the Ontological Proof and appreciate its nuances to get any enjoyment out of it, otherwise it just sounds silly; there are some non-obvious arguments for it that evade the obvious arguments against it. [I write not as someone who believes that "Proof" to be valid, but as one who was favorably impressed by reading nuanced versions of it deployed by smart people who knew what they were doing and weren't being ridiculous.]
The ontological argument is similar because it confuses the mathematical or logical space with reality. The proof is really saying - imagine in your mind (or in some logical space) that there is a perfect being. Does it have to exist?
Well non existence is imperfection so of course the imagined perfect being exists. But only in the mind that imagined the perfect being. Not in reality.
Some mathematicians argue that we discover mathematical rules - that the Pythagorean theorem is discovered, that the Mandelbrot set exists independently of us. This might be true, but it's a world away from every mathematical object having to create a physical reality; there seems to be no compulsion there. And this belief, together with the belief that we can treat infinity as a number (which Scott got wrong here anyway), leads to philosophical and mathematical absurdity.
If Tegmark's theory includes universes where for example c (speed of light) takes every value in some real interval like [2*10^5, 5*10^5] km/s then that means that the set of existing universes has a cardinality of at least continuum. In this case you can't order the universes in a sequence with first, second, third element, etc.. (this is proven by Cantor's famous diagonalisation argument). However, I don't think this hurts your case at all as you can still have a non-uniform measure, and this is just a small technicality as far as I can see.
This isn't really right since it equivocates between two technical senses of "sequence" (a function with domain the natural numbers or an initial segment thereof, which you definitely can't do; and a function with domain an ordinal, which you can if your mathematical foundations are ZFC, but not necessarily if they are ZF or something weaker).
Is the second interpretation possible in the context I'm replying to? If we take Scott to be defining f(lambda) = (1/2)^lambda for lambda an ordinal, then we need a definition of exponentiation with a rational base and non-finite ordinal exponents, which I don't know of. Also, because of the definition of a measure, we need the assigned function values to be extended reals. Honestly, even if it works somehow I don't really like this approach: you will need to assume that the universes are well-ordered by simplicity, i.e. there is a simplest universe in every subset of universes, which, to me, seems like a pretty big metaphysical assumption.
Ah, I see what you were getting at, it was a different point than I thought. The standard answer is that the reals involved in parameterizing the universe need to be computable, or definable without parameters, or otherwise somehow contain finite information, but yeah there probably is some fancier answer that, for example, bases the probability measure on an oracle Turing machine with random oracle, or definitions-with-parameter with a random parameter, or something.
I think this is a lot worse than it looks. What's the complexity of a universe with an irrational natural constant? It doesn't fit easily into the computation-based models of complexity.
This is sort of fine for Solomonoff induction, where you just get a sequence of theories with increasingly precise approximations, but you can't do that for reality.
Our universe's natural constants are not rationals with small numerators/denominators. We'd have no way of knowing if they are rationals with large numerator/denominator, but we have no reason to think that they are.
Are you by any chance mixing up irrationals with uncomputable numbers?
It seems like a decent amount of trouble might be sidestepped by narrowing the Tegmark Universe from "all mathematical objects" to "all mathematical structures that are defined over a finite universe[1]." You'd still have infinitely many such structures, but I *think* it might at least be restricted to a countable infinity. And it doesn't seem like letting physical constants take on any real number (or even any rational number) is especially necessary: changing the speed of light by one part in 3^^^^3, for example, doesn't seem like it would have any observable consequences on any sort of physically reasonable timescale.
On the other hand, it's entirely possible that defining physics over some set that was really large but finite would have weird consequences that aren't immediately obvious to me. Certainly a lot of the formalism of modern mathematics is based in very fundamental ways on infinite sets, and I'm not actually sure how possible it would be to formulate finite equivalents.
[1] This being "universe" in the formal logic sense of the word. A terminology choice which is going to get real confusing, real fast in this particular context.
> Boltzmann brains are a problem for even a single universe
They *can* be. My response is that clearly our understanding of the universe is wrong in some subtle way and there are no Boltzmann brains. Maybe something about the expansion of the universe causes their probability to keep decreasing, so there's only a finite (and very small) probability of being a Boltzmann brain. If they do exist, any basis for understanding our current universe is wrong, so either they don't exist and we have a model that's very accurate except for the far future, or they do exist and we have no idea what universe we're in.
Okay, here's the deal with the Boltzmann brain stuff.
Firstly, a typo note: the estimate from Wikipedia is 10^(10^50) years, not 10^500. The first number is vastly larger than the second.
Boltzmann brains are not only a problem even for just one infinitely long-lasting universe, but even for just one universe that lasts a finite amount of time (before repeating itself). So long as the amount of time it lasts for is exponentially large, then we might run into problems. But we could assume that for one reason or another, our universe is one where Boltzmann brains are not possible, maybe due to undiscovered physics.
Okay, what about a Tegmark multiverse of finite universes? Say that to try and get around some of the paradoxes of having an infinite number of observers, we make a rule that universes in Tegmark's mathematical multiverse can only do a finite amount of computation. If we represent them as Turing machines, each machine must halt. Due to the absurdly fast-growing busy-beaver function, the maximum number of observers in a universe of a given complexity grows way faster than the complexity penalty. So we can't just sample from all observers in the multiverse. It's not that most observers are Boltzmann brains, it's that such a sampling process is mathematically undefined.
But, if we first sample a universe at random, then sample an observer from that universe, we can see that Nevin's objection fails to correctly count information. Under any reasonable encoding scheme, all laws of physics of our universe along with all the "fine tuned" physical constants easily fit in a megabyte. (Each constant is only a few tens of bits.) This is a complexity penalty of 2^(1000000). But this means we only need to make a megabyte of orderly observations before we've got enough evidence to prove that we're not Boltzmann brains.
It bemuses me how horrified people are by the thought of Boltzmann brains. If I'm a BB and I am hallucinating such an effective and convincing hallucination, then this hallucination is just as "real" as the "reality" of me floating in space. I'm fine with it.
I think the main thing that "horrifies" people about Boltzmann brains is that we clearly are not Boltzmann brains (whose conscious experience would be fairly random, confusing, and short), yet our theory about the universe or multiverse implies that most observers should be Boltzmann brains. So why aren't we? Seems improbable.
Who's to say our experiences aren't random and confusing and short compared to what they ought to be, but since we have nothing to compare them to…?
I still don't get how that's a problem. It just sounds like saying "our model of the world implies a random person is more likely to be Chinese than any other nationality, so that means I should expect that I personally am most likely Chinese". It doesn't mean that, because "I" isn't a term I use for randomly-selected people, it's a term I use specifically to identify myself, and I already know that I'm neither Chinese nor a Boltzmann Brain.
There's also another fundamental problem here. The calculation that says most observers will probably be Boltzmann Brains is itself conditioned on our not being Boltzmann Brains; otherwise, all the starting assumptions of the calculation would just be random hallucinations, and we'd have no reason to think they hold in reality. So what you're actually calculating isn't the absolute probability that you're a Boltzmann brain, it's the probability that you're a Boltzmann brain *conditional on your not being a Boltzmann brain*. Since this is a logical contradiction, its probability will be 0% no matter what, so the number of Boltzmann Brains existing in the universe or multiverse doesn't affect anything here at all.
How can you tell that you're not a Boltzmann Brain?
I know how to tell (assuming I inhabit the conventional sort of existence) that I'm not Chinese; I observe myself being really bad at making my syllables have tones. But I have no argument to prove that I'm *not* a temporary mind created for a split second with a memory of a lifetime that will vanish in the next second; I just have an argument for why it's not very useful to consider what to do in case I am.
>How can you tell that you're not a Boltzmann Brain?
Oh, it's very simple. A Boltzmann brain has only false memories, whereas I have true memories, so I can't be a Boltzmann brain. (Refer also to Samuel Johnson's proof of material objects, GE Moore's proof of an external world, and Diogenes' rebuttal to Zeno.)
And no, "but a Boltzmann brain would also say it had true memories, so you can't know you're not a Boltzmann brain" is not a valid rebuttal. The premise of the argument is itself conditional on the assumption that I do know I'm not a Boltzmann brain (otherwise it's just some random words I hallucinated and there's no reason to think it's true), so to infer the conclusion is incoherent.
It also suggests that it's more likely for us to be Boltzmann brains conditional on our experience even though us not being Boltzmann brains better explains our observations.
There is no practical difference if we were vs if we weren't, so this is a silly thing to investigate or adopt a positive attitude towards. By your own stipulations of the thought experiment - there is no way to tell if you're a Boltzmann Brain or not - you trip yourself into arguing over something undecidable, as is the typical milieu in analytic philosophy.
Generally, it is because the thought of minds appearing at random "many times" and plausibly suffering or feeling completely insane as they get sensory noise is quite unpleasant. Something we'd prefer not to happen.
Of course, if you, at this current moment are a Boltzmann brain, then you should believe that your hallucination is a lot more in-depth. It has a lot of structure to it with what you've experienced so far, and thus is effectively real in many ways. That's just the question of whether a simulation is real, and isn't too much of a problem... beyond that people do care about whether things are real. (ex: Boltzmann brain with low-quality intuition-based sim might be simpler than Boltzmann brain with high-quality sim of reality such that all the people you interact with are actually people)
A Boltzmann Brain that only lasts for a few seconds might have something like a near-death experience, and as far as I know, NDEs are very rarely negative or painful.
We can sample from observers (or anything else) rather than universes if we use a universal distribution using machines that simulate a universe and then find/decode the particular thing of interest within that simulation. I'm not quite sure whether this cuts in favor of "we are not Boltzmann brains" or not. Practically speaking the simplicity penalty of finding Boltzmann brains is bounded by the difficulty of describing how to find the simplest-to-recognize form of conscious observer that can exist in the universe they inhabit. This predicts at the very least that if we are not Boltzmann brains we should be about as simple as possible to externally recognize as conscious ...
You might think that you could treat universal Turing machines simulating each other as a kind of Markov process and find a stationary distribution, which would give some notion of a "natural" universal Turing machine, but Mueller showed there isn't one.
Wow, I had no idea there could be multiple arXiv papers with the same number, in this case 0608095. https://arxiv.org/search/?query=0608095&searchtype=all pulls up no fewer than nine, made unique only by the combination of number and category. (I think with the new numbering system adopted a few years ago the number alone is now unique per paper, but I wouldn't swear to it.)
Your argument about falsifiability seems over-simplified. The issue with arguments for the existence of god is not that it isn't falsifiable, it's that there isn't any evidence either way. The OJ Simpson example that you bring up just drives this point home. Occam's razor is useful there because there was tons of evidence and we need to find a way to differentiate between possible theories.
What exactly is the evidence for or against the existence of god: that we exist and that the universe is complicated? Seems like pretty weak sauce to me. If you count that as evidence then you can claim that just about any ridiculous claim has an empirical basis. For example, I could argue that when I was a baby, I was briefly teleported to an alien spaceship. How do I know? Because I have hair on my legs and this is obviously evidence that aliens planted it there.
>What exactly is the evidence for or against the existence of god: that we exist and that the universe is complicated?
We have a very large amount of eyewitness testimony to the existence of God: certainly more than we had for the OJ Simpson trial. Leaving out the rest of the world, in the US 45% of adults report having a spiritual experience where they felt a connection with something beyond our world, and 30% of Americans say they have personally encountered a spirit. (https://www.pewresearch.org/religion/2023/12/07/spiritual-experiences/)
Although I agree with your general point, these seem like bad examples. Neither "something beyond our world" nor "a spirit" necessarily imply a deity, let alone a capital-G God.
> We have a very large amount of eyewitness testimony to the existence of God
As opposed to, what, eyewitness testimony to the non-existence of God? There logically can be none. This is a classic Devil's Proof. This statistic alone cannot be grounds for favoring existence of God over non-existence.
What about all the "if God exists, I challenge Him to strike me with lightning now?" things. I agree those are more stunts than anything, but some people do seem to be convinced by them.
I'll happily provide eyewitness testimony to the non-existence of God. In several decades of life I have never once seen God. That's not proof, of course. But it is evidence, in much the same way that opening your fridge and not seeing mayonnaise in it is evidence that there's no mayonnaise in your fridge.
Of course your not having seen Australia is evidence that Australia doesn't exist. For it to be otherwise, you would need to somehow be MORE likely to not see Australia in worlds where Australia did exist than in worlds where it didn't. That is, the existence of Australia would need to make observing Australia *less* likely. Which would be a weird state of affairs.
It is, however, not strong evidence, nor is it the only evidence on-hand. It's not strong evidence because you've never created a situation where Australia-including models *strongly* predict you ought to see Australia (such as flying to Sydney, or sailing northwest from New Zealand). And it's not the only evidence because we have maps and pictures and books and videos and news articles and first-hand accounts that all maintain a reasonably-consistent, reasonably-independent consensus about the existence and properties of Australia. So the preponderance of evidence quite strongly favors the existence of Australia, despite your real-but-small mass of evidence for the non-Australic model.
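The "weak evidence" point can be made quantitative with a one-line Bayes update. The numbers below are made up purely for illustration; the only structural claim is that not-observing something is evidence against it whenever observing it is more likely if it exists.

```python
# Toy Bayes update: absence of an observation is (weak) evidence of absence.
def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# E = "I have never personally seen X". Illustrative likelihoods:
# slightly less probable if X exists (0.90) than if it doesn't (1.00).
prior = 0.5
posterior = update(prior, p_e_given_h=0.90, p_e_given_not_h=1.00)
assert posterior < prior  # evidence against existence, but only mildly
```

The update is small because the likelihoods are close; it becomes decisive only when you engineer a situation (flying to Sydney) where the existence hypothesis strongly predicts the observation.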
Alas, some other facets of existence are not nearly so well-attested as Australia is, and so weaker pieces of evidence must weigh much more heavily in our current models.
Personal testimony doesn't mean anything - you could at most be confident they weren't trying to deceive you, not that they were accurately reporting what they supposedly experienced.
What if an observer has their own personal testimony? I think the historical evidence argument isn't a bad one, it's just not the strongest. I don't believe the type of entity BB is arguing for has much weight, but I don't think the simplest universe that causes the dovetail is an atheist one. Seems like humans don't reproduce intelligent minds fast enough in those worlds.
The Simpson trial is a good example. I watched it with my late father and we went in thinking "yeah he did it" and came out "well there is reasonable doubt". The defence team did a fantastic job, no two ways about that, but the police just made such an utter mess of the case (descriptions of chains of custody being non-existent, the cavalier way the labs treated samples, how they took and preserved samples, questioning, etc.) that you couldn't convict on what was presented to you; a conviction would have had to come from "I think he did it and nothing is going to change my mind". And a trial has to start with the presumption of innocence.
I continue to think that too little attention is being given to the super shady idea of a measure on all these necessary mathematical objects. It's a huge bait and switch. "Oh, look, we know all these mathematical objects necessarily exist!"
[later]
"Oh, and there's an extremely non-necessary, arbitrary measure on them -- ignore that that doesn't make any mathematical sense -- that's rigged to give us the right universe since the other idea definitely wouldn't, and that we, as mere portions of *one* of the objects, somehow have access to."
It's highly disreputable just for that reason, in my opinion. Say nothing about its other problems (such as conflating abstract and concrete).
Yeah. It's exactly the same problem that M theory (commonly called string theory) has. It doesn't uniquely identify any particular theory until you constrain it by a cherry picked set of criteria post hoc to match our observations. And that's just bad pool scientifically.
My view is that MUH is more of a theory of "how you should interpret reality". You run into forced simplicity measures often in probability theory.
That is, all of those mathematical objects don't necessarily exist, but that this is a good baseline model for overarching reality that doesn't assume much. It doesn't assume only our local universe exists, which is actually quite a big assumption!
I think you run into MUH-like issues when interpreting reality even in a single universe, as "what logic is this matter implementing" is *relative to some language*. We just have such a natural language/interpretation of reality that it is less of an issue. Partially what I think Scott is advocating for is that a "natural" language probably isn't super far from the ones we might use.
I don't think this is ignored - Scott is hardly a researcher in the area - but it is a hard philosophical question to see if there's a way to 'naturally' resolve it.
> Imagine trying to pick a random number between one and infinity. If you pick any particular number - let’s say 408,170,037,993,105,667,148,717 - then it will be shockingly low - approximately 100% of all possible numbers are higher than it. It would be much crazier than someone trying to pick a number from one to one billion and choosing “one”. Since this will happen no matter what number you pick, the concept itself must be ill-defined. Reddit commenter elliotglazer has an even cuter version of this paradox:
> » “The contradiction can be made more apparent with the "two draws" paradox. Suppose one could draw a positive integer uniformly at random, and did so twice. What's the probability the second is greater? No matter what the first draw is, you will then have 100% confidence the second is greater, so by conservation of expected evidence, you should already believe with 100% confidence the second is greater. Of course, I could tell you the second draw first to argue that with 100% probability, the first is greater, contradiction.”
--
I think this could be extended to conjecture that it is impossible to *observe* *anything* infinite without minimally collapsing something about that infinite thing. Which would be a useful way to wall off a universe from the rest of a cosmos
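The "two draws" paradox can be checked concretely for the well-defined finite case: uniform on 1..N. Conditioning on any fixed first draw k makes "second is greater" arbitrarily likely as N grows, while the unconditional probability stays below 1/2, which is exactly the tension that has no consistent limit at N = ∞.

```python
# Finite version of the two-draws paradox, using exact rational arithmetic.
from fractions import Fraction

def p_second_greater_given_first(k, n):
    # P(second > k) for a uniform draw on 1..n, given the first draw was k.
    return Fraction(n - k, n)

def p_second_greater(n):
    # Unconditional P(second > first) for two independent uniform draws on
    # 1..n: by symmetry it's (1 - P(tie)) / 2 = (n - 1) / (2n).
    return Fraction(n - 1, 2 * n)

k = 5
assert p_second_greater_given_first(k, 10) == Fraction(1, 2)
assert p_second_greater_given_first(k, 10**6) > Fraction(999, 1000)  # -> 1
assert p_second_greater(10**6) < Fraction(1, 2)                      # stays < 1/2
```

For every finite N the conditional and unconditional answers coexist consistently; only the fictional "uniform on all positive integers" forces the conditional answer to be exactly 1 for every k, contradicting conservation of expected evidence.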
I think you miss the point of the Plato recapitulation and Popperian falsifiability arguments.
Tegmark's multiverse is mostly silent on what constitutes a suitable mathematical object capable of existing. Is specifying it in English enough? ZFC? Do the objects need to be computable? Does ultrafinitism have a say? Does the set containing all sets exist? What about other logical impossibilities?
The most detailed specs we have in math these days reduce to formal logic, but even that relies on some unspecifiable meta-logic. I.e. no formal proof verification system can verify the hardware it's running on.
Even worse, if we assume materialism, then all the math we know is implemented by physical processes in this universe. Do we also accept the existence of objects from completely alien maths impossible in this universe due to physical constraints?
A maximally permissive answer to my questions above forces us to accept that any "system" exists, without the possibility of determining what is or isn't a system, and without even being picky about impossible things. It's not just a fuzzy border issue. Every object is illogical relative to some logic, especially the simplest one! I.e. the logic that permits nothing is arguably the simplest, at least relative to standard set theories like ZF.
Hand waving Kolmogorov or whatever measures onto these things just begs the question by fine tuning the definitions to get whatever result desired. Or said another way, it's a model with tons of tunable parameters and answers only things that are encodable in that free parameter set. I.e. we just created a lookup table.
This is the basic mechanical problem with Platonist metaphysics, IMO. It is simply incoherent, despite first appearances. Popperian falsifiability does point at a utilitarian resolution however. What does Tegmark give us when we play taboo on the concept of real? I.e. can we operationalize what Tegmark's existence claims even mean? Are we dealing with replicable realist things? Non-replicable but consensus reality-like things? Or is it more like how dreams and false memories operate? Etc.
That said, as an intuition and discussion pump, the Tegmark idea is fun, so hedonistically, I'm all for it.
The metalogic is usually just ZFC, or occasionally ZFC+some large cardinal axiom. You can absolutely use ZFC to study itself, as long as you're okay with all your theorems taking the form "if ZFC is consistent then _" or "if ZFC has a transitive model then _" etc.
You can certainly encode ZFC within some models of ZFC, but you're still running on some unformalized hardware. The punchline being that this lets you smuggle in as much complexity as needed to encode the lookup table of answers you want.
We could try sidestepping these issues by declaring that "mathematical object" means anything definable in ZFC or whatever, but then we must contend with Godelian monsters like BB(745), which are definable but provably have no value in ZFC. We can even construct objects one more level removed, which are only expressible in ZFC by defining an object that's capable of defining the object. You get the idea.
On the one hand, permitting these objects to exist means that "mathematical object" now means ZFC, plus all the infinite other things we arbitrarily added in, sneaking in infinite parameters for our lookup table. On the other hand, if we don't do that, then we have admitted that there are "mathematical objects" which don't exist.
When reading about metalogic it's important to keep the distinction between definability, provability, and truth in mind. A number can be definable without being provably equal to anything. If a sentence isn't provable or refutable in some logic system, that doesn't mean it's neither true nor false.
ZFC considers BB(745), and all busy beaver numbers, to be a definable object, and it will tell you it's a natural number, it just doesn't know *which* natural number it is. Further, all models of ZFC that agree on what the natural numbers are, will agree on the value of BB(745) and indeed on the truth value of all first-order arithmetic statements. I'm sympathetic to saying that the higher reaches of set theory are not very meaningful, but in order to hold that first-order arithmetic statements don't have well defined truth values you have to deny that there's a privileged standard model of the natural numbers - and in that case it's *you*, not me, who has introduced an infinite array of new objects by fiat.
When you say "the value of BB(745)" you're sneaking in non-ZFC arguments. ZFC, i.e. the stand-in for what we agree to be all of existence, cannot demonstrate that any (arithmetically defined) value equals BB(745) without also demonstrating that ZFC is inconsistent.
That's the state of affairs I gestured at when saying "doesn't have a value". I'm afraid the level of technicality here detracts from the main point.
Which is that Tegmark's multiverse permits this uncomfortable situation whereby almost all objects are neither equal nor disequal to infinite others. Resolving this conflict is where all the fine tuning I previously mentioned comes in.
We could attempt to be laissez faire about allowing any definition independent existence regardless of equality with others, but then we also allow things like Russell's set to exist, and the whole philosophy reduces to permitting anything at all to exist, no matter how self-contradictory or not.
The mathematical platonist view is that ZFC, which is ultimately just a formal system for deriving finite sentences from other finite sentences, is a limited descriptor of a true mathematical reality. As finite beings, neither we nor our logic systems can know every true fact, which should be of no surprise since there are infinitely many platonic objects and infinitely many things to say about them, but this is no problem for the objects themselves, which exist independently of our attempts to understand them.
I think this view encounters some genuine issues with things like the Axiom of Choice or the Continuum Hypothesis, but it seems much more grounded when it comes to the natural numbers and first-order arithmetic.
You seem to be rejecting this in favour of some kind of hyper-formalism where you can only talk about what is provable or refutable, and not what is true or false, and that if ZFC fails to answer a question then there is no meaningful answer. Going this far seems epistemologically unstable, since ZFC doesn't resolve the question of its own consistency, so you have to hold that the question is meaningless or arbitrary. But if it's meaningless to say that ZFC is consistent, why care what it has to say at all?
Regardless, asserting a position isn't making an argument for it, you've really just observed that mathematical platonism requires believing in more than just formal logic systems, which is the basic point of it.
I feel like we're not really connecting here. Though, I'm finding this back and forth really helpful, so thanks for continuing to respond.
My original post outlines the contradictions that arise in Tegmark's Multiverse if we just use an intuitive "know it when you see it" notion of "mathematical object". Really, these just recapitulate the problems with intuitive set theory.
One could object that we can resolve the contradictions by pinning down sufficiently what we mean by "mathematical object", but then you get the fatal Gödelian issues I mention.
Whether or not we're hardline formalists, in practice we study mathematical objects via some semirigorous formalism. Maybe that formalism is governed by Platonic ideals or not, but if we take Tegmark seriously and try to use it as Scott does here, we have to contend with the fact that we can tune the formalism to give whatever complexity answer we want, rendering it useless IMHO.
I think actually the Mathematical Universe Hypothesis is exactly what you get when you take normal scientific reasoning and then taboo the concept "real".
There is only one universe. That's what the first three letters of "universe" mean.
The universe includes all of space and all of time. It is meaningless to consider whether something "came before", or "will come after", or "exists outside of" the universe, and hence also whether something "caused the universe to exist" or to have certain properties.
I don't think it's even useful to say that the universe has properties. Properties are useful to distinguish one item of a class from another, but there can never be any sense in "distinguishing one universe from another" because there is, has been, will be, and could be only one universe.
We need better language to distinguish 'the totality of everything that exists' from 'the thing that started at the Big Bang' in case they are different.
The 'observable universe' is what I hear physicists use for the latter when they want to be precise.
I agree that if language were to be logical, 'universe' should mean 'everything'. Alas and unfortunately for everyone who likes the etymology of words to be relevant, 'multiverse' seems to be the most accepted way to explicitly refer to the former.
A finite universe doesn't suffer from the "non-Boltzmann brain observers have measure 0" problem, but it can still have an extremely low ratio if it's sufficiently large.
If you expand a small universe by adding matter that's capable of evolving life, the ratio stays the same. You can only reduce it by adding matter that's favourable to BBs, e.g. a long period of cold high entropy.
>I find the moral knowledge argument ridiculous, because it posits that morality must have some objective existence beyond the evolutionary history of why humans believe in it, then acts flabbergasted that the version that evolved in humans so closely matches the objectively-existing one.
I'm not sure you're grokking the moral argument, at least the moral argument BB is defending. He's arguing that evolution cannot in principle explain why we could have true moral knowledge (or, as he includes, true mathematical or rational knowledge) while theism can. If our moral sense is purely the result of evolutionary processes then it could only be correct by accident: so it is either not correct (just a nice hallucination that improves our reproductive fitness) or something other than evolutionary processes has intervened to provide us with the ability to ascertain true moral (or mathematical, or logical, etc) knowledge.
BB links to a paper in his subsection on moral knowledge and here is a relevant quote from it:
"Our moral beliefs ultimately depend, in some way, on what philosophers call “moral intuitions.” When we consider certain moral claims, we can just “see” whether they’re true: we can see that (at least absent extenuating circumstances) hatred is bad, virtue is good, killing innocent people is wrong, etc. With these intuitions in place, we can reason and make our moral judgments more accurate. For instance, initially we might not think that factory farming is wrong. However, once we realize that causing suffering without a good enough reason is wrong, that producing cheaper meat is not a good enough reason for producing tremendous suffering, and that factory farming causes tremendous suffering in order to produce cheaper meat, we might change our minds. But this requires that we trust our intuitions: we wouldn’t be able to reason our way to the wrongness of factory farming without intuitively grasping the wrongness of causing suffering and the insufficiency of producing cheap meat as a justification for it. The problem for the naturalist here is that, if naturalism is true, it seems that the faculties responsible for our intuitions were formed through purely natural processes that didn’t aim at producing true beliefs. For instance, it seems plausible that our intuition that you shouldn’t cause pain without a good reason was instilled in us by evolution, since communities of our ancestors who flippantly inflicted harm on each other wouldn’t have lasted. But this might unnerve the naturalist who believes in moral knowledge. After all, it seems that we might have easily had very different moral intuitions. For instance, Charles Darwin suggested that:
"'If . . . men were reared under precisely the same conditions as hivebees . . . our unmarried females would, like the worker-bees, think it a sacred duty to kill their brothers, and mothers would strive to kill their fertile daughters, and no one would think of interfering. ( 1902 , 137)'
"Similarly, the philosopher Mark Linville (2009 , 397) suggests that
"'Wolves in a pack know their place in the social hierarchy. A lower ranked wolf feels compelled to give way to the alpha male. Were he endowed with [moral thoughts], then, presumably, his “moral sense” would tell him that obeisance is his moral duty. He would regard it as a moral fact that alpha interests trump beta or omega interests.'
"In light of this, the philosopher Sharon Street, who has done as much as anyone to draw attention to this issue, suggests that trusting moral faculties that have been formed by such natural processes might seem to be analogous to
"'setting out for Bermuda and letting the course of your boat be determined by the wind and tides: just as the push of the wind and tides on your boat has nothing to do with where you want to go, so the historical push of natural selection on the content of our evaluative judgements has nothing to do with evaluative truth. While this historical push might sometimes give us correct beliefs, this would be “purely a matter of chance” '
"Further, with naturalism, how the moral domain is doesn’t seem to explain why we hold the beliefs we do; even if these beliefs are true, we don’t hold them because of how anything in the moral domain is, but instead because beliefs like that helped our ancestors survive. As we discuss in Section 4.3, even if we couldn’t have easily had very different, and therefore largely incorrect, moral intuitions, this fact might be enough to make it the case that, if our beliefs are true, they are true only by accident in a way that rules out knowledge."
Scott believes that there are no objective, human-independent moral facts, there is merely the evolved stuff, so no explanation is needed for a correspondence.
The presence of "moral facts" that don't cash out in any kind of facts about material reality doesn't provide any more reason for moral action than their absence. What you're saying above is just the contrapositive of the Naturalistic Fallacy.
First off, making the only alternatives "objective facts" and "social construct" is a false dilemma. "Ice cream tastes better than gasoline" is neither.
If there are no moral facts then there's no objective sense in which one "should" follow one's desires either; there is no stance-independent "should", even that one. We follow our desires (to the extent that we do) because we're built that way. Similarly, we follow our consciences (to the extent that we do) because we're built that way. Recognizing that "rocks should not fall on people's heads" is a fact about ourselves rather than a fact about rocks doesn't change anything about what we will do. It also neatly solves the mystery of why rocks make no effort to avoid falling on people's heads.
He believes that people have values. That these values are not objective moral facts about all creatures, or even necessarily all of humanity.
> For example, giving your own money for malaria nets in Africa as a clear recognition of the fact that there is some moral good in doing so instead of spending that money on hookers and blow for yourself.
(from your other comment)
Yes, that he has empathy which makes him value what other people are experiencing. That he wants to improve their situation!
But, he wouldn't believe there's some moral rule encompassing both humans and an alien artificial intelligence. That's what moral realism would usually postulate. The human and AI could still trade with each other and apply game theory for some degree of cooperation, but it wouldn't be as empathetic as two humans interacting.
Now, you could say "The AI just destroyed a nice earth-like planet for resources, that is bad *relative* to my moral values", just like the AI could say "I think it is bad that the human is not making paperclips out of the metal."
But they both could also say "I understand that by their values, that is the better action, even though it is bad according to mine."
> I am transported back to 1939. Would I fight for the Nazis or the Allies? I “know” I should fight for the Allies. But for a person who thinks morality is entirely subjective, he should be able to decide “I’ve been socialized to think the Allies are good, but nothing really matters, so if it’s more fun to fight for the Nazis, I’m going to do that.”
They would be able to decide that. It just would clash with a lot of their values! You seem to be going "oh, there's no absolute morality, thus anything is permissible and I don't have any reason to pay attention to morals", when... no?
I dislike people being killed. The vast majority of humanity is like that. I like people living healthy happy lives.
> imagine you can steal something from the store with a 0% chance of getting caught. Why not steal it then? Of course you’ve been taught not to, but as a thinking animal who knows morality isn’t a real thing you should steal the thing, because it’s all upside.
Yeah, you keep saying morality isn't a real thing, but you're misunderstanding the core idea. People still have morals/values about situations! Most people know it is bad to steal, because they've been taught that, and they also can form the implication of "person got this item by their own work -> I steal it -> I get a benefit, but they are harmed", and thus fallback to intuitions about cooperating and not betraying others.
----
The morality that Scott espouses does not behave that exotically compared to what many moral realist positions espouse. It is more person-centric, but as a view into their own values.
Another important element is that humans share a lot of values! Most people have altruism! Most people don't like backstabbing someone who helps them! People have fairness intuitions! And that's even before we get into notable socialization-induced values.
Law does change in how it's viewed, becoming less about instituting morality and more about ensuring that a society of people with varying views can cooperate peacefully... but that's not that different from how law has been viewed for centuries in America. (That's why Freedom of Religion exists to varying degrees, so that people with different ideas about morality can cooperate!)
You could then ask the same question of the person who's considering hookers and blow vs donating to people in Africa. Why doesn't he just opt out of enjoying hookers and blow *and* donating to people in Africa? Then he doesn't have to care about anything, seems much easier!
Well, the reason is that we have fundamental values for that. Hardwired into the brain, via our tastebuds and instincts.
Similarly, altruism to some degree is hardwired into the brain. The evolutionary explanation would be that it is useful, especially in small tribal groups, and encourages cooperation to avoid zero-sum or negative-sum problems.
We have preferences for being consistent and fair, varyingly influenced by socialization. Now, there's arguments to be made about how "valid" social influenced morality is. My view, and I believe Scott's too, is that it is more than 0% but less than 100% in terms of how you should adopt it.
----
Relatedly, I often make a distinction between felt-emotions and ~decision-motivations. Felt-emotions would be "that food was good" or "I'm glad my friend is happy with the present I got them". Felt-emotions often include altruism, just like any other pleasures.
Decision-motivations I think are often morality-centered: extrapolations from our felt-emotions, because we're thinking creatures. This is where you get more abstract theories like "I should donate to people far away; they're people just like anyone else", or more intricate moral theories like utilitarianism. These may not have strong emotional components, but they still affect how you make decisions. Think of this like 'duty': there's not a strong felt emotion attached to it (though I think there's some), but it still drives decisions to some degree.
You can't just "opt out" of your own moral preferences any more than you can "opt out" of your other preferences, you are not some kind of universal mind-instantiating machine. Furthermore, "moral preferences that benefit only you" is borderline incoherent, because "benefit" means "bring good to" and thus only has meaning under a specific set of moral preferences.
> people in Africa whose lives have nothing whatsoever to do with mine
That's a hell of an assumption, in a world with densely interdependent supply chains and fragile shipping bottlenecks. You ever use consumer electronics with components that require cobalt, or that passed through the Suez Canal? Been scammed by a Nigerian prince? Eat any chocolate?
That's fine (I mean, it's not completely fine, but that's a different subject*), but as BB points out, the same argument applies to mathematical and modal knowledge, which I don't think Scott believes is unreal: for one thing, if mathematical knowledge were not human-independent, then Tegmark's theory doesn't work at all.
*As an addendum explaining why moral anti-realism is not completely fine, I would like to quote C. S. Lewis from his book "Miracles":
"They say to themselves, ‘Ah, yes. Morality’—or ‘bourgeois morality’ or ‘conventional morality’ or ‘traditional morality’ or some such addition—‘Morality is an illusion. But we have found out what modes of behaviour will in fact preserve the human race alive. That is the behaviour we are pressing you to adopt. Pray don’t mistake us for moralists. We are under an entirely new management’…just as if this would help. It would help only if we grant, firstly, that life is better than death and, secondly, that we ought to care for the lives of our descendants as much as, or more than, for our own. And both these are moral judgements which have, like all others, been explained away by Naturalism. Of course, having been conditioned by Nature in a certain way, we do feel thus about life and about posterity. But the Naturalists have cured us of mistaking these feelings for insights into what we once called ‘real value’. Now that I know that my impulse to serve posterity is just the same kind of thing as my fondness for cheese—now that its transcendental pretensions have been exposed for a sham—do you think I shall pay much attention to it? When it happens to be strong (and it has grown considerably weaker since you explained to me its real nature) I suppose I shall obey it. When it is weak, I shall put my money into cheese. There can be no reason for trying to whip up and encourage the one impulse rather than the other. Not now that I know what they both are. The Naturalists must not destroy all my reverence for conscience on Monday and expect to find me still venerating it on Tuesday.”
Given all that, if Scott is a moral anti-realist then why does he keep trying to convince people to give 10% of their income to charity and donate their kidney to a stranger?
I might be misunderstanding him as it's late here and I'm dumb, but I don't understand what his problem is. He has moral want X, learns that moral want X is explainable without (effectively) the supernatural, and for some reason that lessens his moral want X compared to non-moral want Y. How about not doing that?
I want to cheat on my wife and I'm pretty sure I can get away with it: but my conscience tells me that would be a betrayal and would hurt her and I shouldn't do that. But now I know that my conscience has no claim of superiority to my desire to bed beauties out of wedlock, which is a real relief. I can safely ignore it. Why shouldn't I?
You can't safely ignore it, because you have a conscience; that is exactly the premise being accepted! This is like assuming something and then deriving (with some highly questionable derivation) the opposite of the statement assumed.
Well... yeah, why shouldn't you? Plenty of people cheat on their wives, and some of them don't even regret it. There's nothing actually stopping you other than your own empathy and psychological disgust of the act, the latter of which is a result of that idea being drilled into you since you were a child.
The first is that strong, deep emotions don't just vanish the instant you realize there's no logical basis for them. If they did, a lot of mental disorders would be a lot easier to treat.
The second is that your "desire to bed beauties out of wedlock" is a result of the exact same evolutionary and social processes that your conscience is. If one is somehow invalidated by its context, so is the other.
It proves that you can't have your cake and eat it too: you can't act like your moral sense means anything more than your other desires while believing it doesn't point to anything beyond survival instinct. Which I think you would agree with, but most people are not nearly as nihilistic as you are and might recoil at the idea that our moral sense can be ignored.
>if Scott is a moral anti-realist then why does he keep trying to convince people to give 10% of their income to charity and donate their kidney to a stranger
Because he thinks that people are similar enough, which includes the mechanism producing moral intuitions. Since his intuitions point towards EA, he thinks that he does the rest of humanity a favor by pointing them in the same direction.
>Since his intuitions point towards EA, he thinks that he does the rest of humanity a favor pointing them in the same direction.
Under moral anti-realism, how is that doing anyone a favor? Before he attempts to "point" them they are content. Afterwards they feel guilty: and there is no reason for them to feel guilty, it's just a built in social survival instinct that Scott went ahead and activated when it wasn't previously. Why do that? It would be like going around giving people a sharp whack to their funny bone. Yes, people are similar enough that they almost all have the mechanism necessary to feel nerve pain and lingering discomfort when struck on the ulnar nerve, so what?
You're taking this from a point of view where moral anti-realism is inherently very selfish and concerned only with currently felt values.
People have values about the world, and many people have altruism to varying degrees. Some people avoid it because they don't trust charities, some because of usual procrastination, and some because they're afraid of feeling pushed to 'give it all up'.
The usual view is then that they do have these values already. They may not be feeling aware of their altruistic values at the moment, but it is still a part of them. They do want others to be better off.
He does believe that their values will be satisfied better by helping others, by knowing that they've improved the world.
But I also think there's an important distinction to be made between felt-emotions and values. Humans have values that they don't feel particularly strongly about either way. They also have values whose pursuit makes them feel bad, yet they act on them anyway. Why? Is that not just hurting yourself? Why put in that effort?
Because our decision-making is not wholly built upon felt emotions! I value altruism and generalize that to helping people across the globe even though I'll never see them personally.
This is why people say they wouldn't want a dead relative replaced by a facsimile so that they never realize the person is dead. Because they value more than just felt emotions: they have values, and also some degree of care for the reality of the situation and for counterfactuals.
(We could quibble about felt-emotions, you could say that sadness/happiness/guilt are strongly-felt emotions while acting-by-values/duty are more decision-orienting emotions that are less visible, but whatever.)
I think your argumentation style is common, but is a misunderstanding of how people think of morality. You've attached the "cares fundamentally" to moral realism, but other moral theories can have that.
Is that supposed to be a good counterargument against moral realism by Lewis? It basically reduces to just 'any preferences are acceptable and subject to the whim of the individual'. Which, okay, I can see why you would dislike it, but it doesn't feel like a strong rebuttal. It ignores that besides preferences, we live in groups that can push us one way or the other. And also, that you can still have *really* strong preferences even if you know they are just that. Humans who have been indoctrinated and have internalized some preferences won't be able to get rid of them just because they can rationalize 'well, this is just a preference' (we could call this the Raskolnikov fallacy). And there's always the external, amoral 'society will punish you for doing x' reinforcing those socially valued preferences.
I don't know about Scott, but I'm happily anti-realist about maths. There's just no evidence that it's true beyond "derivable from axioms" and/or useful.
It would feel problematic to me to have to chop derivability out of your criteria and be left with pragmatic utility alone, because it feels to me like there's something more objectively right and wrong about statements of mathematics beyond usefulness, which can change over time, with circumstances, and with the person asserting the mathematical claim.
According to Tegmark's theory math *is* reality. There are no two different magisteriums which just so happen to coincide. Everything is just math. Therefore there is no coincidence to explain.
This, of course, is completely unsatisfying, and immediately such questions as: "okay... but how exactly does it work?" come to mind. Though, I suppose if one's curiosity can be satisfied by "God did it", for the sake of consistency, it should also be satisfied by "It's just math".
Not for my brain, I don't have the maths brain. So I'm unable to go past the twelve times table, don't even talk to me about calculus and algebra.
I'm in the position of the illiterate peasant having to take on trust that the village priest is telling me the truth when it comes to "but the theory is so simple, here let me explain it, xyz&%*thj, so of course it follows that fifty angles of ghjjj means the apple is green not red!"
Mathematical intuition helps us navigate and succeed in the physical world (as for why, from the point of view of the MUH this is obvious). Therefore it can contribute to fitness and be selected for by evolution. And it isn't an absolute thing either, as witness nonhuman sentient animals, which have insufficient mathematical intuitions to do axiomatic reasoning.
I can testify that it's possible to reject strong realism about morality, maths, and modality (and money, marriage, and mortgages). Mathematical multiverse theories aren't the only multiverse theories available. If non-mathematical existence is coherent, then the mathematical ones are an arbitrarily small slice of the options.
Evolutionary processes are quite capable of non-accidentally generating usefulness, so the question is whether anything more than usefulness is needed.
Not sure I understand, are you suggesting that morality= usefulness and usefulness is created by evolution and is objective, therefore objective moral facts exist?
If so, then I disagree, and I believe you are committing some version of the naturalistic fallacy, but I don't want to elaborate before you confirm the above is your position.
> Our moral beliefs ultimately depend, in some way, on what philosophers call “moral intuitions.”

Most people's beliefs depend on what society tells them. Few people try to derive morality from first principles.
When you were a kid, and some other kid got a thick slice of cake but you got a thin slice of cake, did you get upset because society had taught you to expect fair treatment from others? Or did you intuitively feel it was unfair?
Longer-term cake maximization strategy in the environment of ancestral adaptation might reasonably involve distributing enough benefits to fellow tribe members that they'll willingly contribute to the prep work next time. Modern version of that basic plan is the joint stock corporation, and while such superorganisms have been widely and thoroughly criticized, to call them a failure by Darwinian standards would be nigh-impossible to justify.
Asking people to introspect is very bad methodology to answer psychological questions, especially about distant time periods like someone being a child, and about situations that didn't merit reflection in the moment like an underspecified emotionally charged reaction to some putative injustice. It's also pretty obvious that children are not immune from socialization, so I don't know on what grounds your (mostly rhetorical?) question could succeed in vindicating your position.
"evolution cannot in principle explain why we could have true moral knowledge"
I'm not sure why we should think we have true moral knowledge. The vast amount of literature trying to explain morals would suggest we have only a vague approximation of moral knowledge.
> If our moral sense is purely the result of evolutionary processes then it could only be correct by accident:
Yes, by definition that's how evolutionary processes work: get it right by accident once, notice it's working better than before, do more and more of that right thing thereafter.
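That loop (accidental variation plus retention of whatever happens to work) can be sketched in a few lines. This is a Dawkins-style "weasel" toy; the target string, mutation rate, and population size are arbitrary illustrative choices, not anything from this thread:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Fitness = number of positions that happen to be "right".
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Random, undirected copying errors: the "accidents".
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def evolve(pop_size=100, seed=0):
    random.seed(seed)
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while current != TARGET:
        # Selection: keep whichever accidental variant works best.
        offspring = [mutate(current) for _ in range(pop_size)]
        current = max(offspring + [current], key=score)
        generations += 1
    return generations

print(evolve())  # number of generations until the target is reached
```

No single mutation is aimed at the target; the "rightness" accumulates purely because better-scoring accidents are retained each round.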
Most humans believe it is wrong to murder children; yet gorillas, lions, dolphins, baboons, prairie dogs, mice, and many other species regularly kill the children of their rivals. There is a clear evolutionary advantage to doing so: in lions, for instance, the lionesses will not be fertile while they are nursing their young, so if a rival lion takes over a pride and kills all the cubs the lionesses will become sexually available, so they can bear his children instead. Would you say that evolutionary processes got it right with us, and wrong with them, or vice versa?
It is clear to me that evolution does not hinge on whether something is morally correct, only whether it increases the odds of successful reproduction and survival. I see no reason to believe that what is moral is also what is best for reproduction.
> Most humans believe it is wrong to murder children
Most humans who've ever lived? Categorically, no circumstantial exceptions?
https://en.wikipedia.org/wiki/Exposure_(infant) There's a huge amount of stuff in the Old Testament and other contemporaneous sources about ritual murder of children for religion-mediated strategic advantage, or killing all the males above a certain (very young) age in some conquered group to ensure cultural assimilation. There are even present-day moral conflicts over "honor killings" and "officer-involved shootings" and such, where one side argues that some child's behavior was such that to let them remain alive would have been more wrong.
> Would you say that evolutionary processes got it right with us, and wrong with them, or vice versa?
Evolutionary processes neither plan ahead nor strive for conceptual consistency across different categories; they only care about fitness under circumstances actually encountered. Human children are a lot more labor-intensive to raise, and many important hereditary traits are more behavioral than genetic, which means the optimal repair / replace threshold is different. Industrialization shifted it further, and morality has been scrambling to recalibrate accordingly.
>evolution cannot in principle explain why we could have true moral knowledge
This isn't a problem because most moral behavior is a result of socialization. Evolutionary debunking arguments concede too much ground - evolution is only responsible for the barest of moral behavior. It's like saying we need God to explain why we can play baseball if evolution can't do it. Baseball is primarily a result of socialization, not evolution, so the entire discourse about debunking vs defending evolutionary arguments about baseball is defunct.
>Our moral beliefs ultimately depend, in some way, on what philosophers call “moral intuitions.” When we consider certain moral claims, we can just “see” whether they’re true
This is an empirical claim about moral actors' psychology with no empirical evidence offered in support. It's also using a metaphorical version of "see" that likely postulates some sort of moral sense; philosophy has a long history of doing this with divine senses and rational faculties, but has never had an inch of empirical evidence in favor of it.
>true only by accident in a way that rules out knowledge.
Only if you buy into a parochial conception of knowledge that in the quoted section so far has not been motivated for anyone to accept.
>What we really do when debating hypotheses isn’t wait to see which ones will be falsified, it’s comparing simplicity - Occam’s Razor. Which is more likely - that OJ killed his wife? Or that some other killer developed a deep hatred for OJ’s wife, […]
I believe the first option because I have a model of how the world generally works, in which the first option is more *likely*. The epistemological concept of simplicity doesn’t really come into play there.
I guess that the cloudier your model is, the more this rule starts to correlate well with simplicity: if you have a lot of assumptions and epicycles, but also a lot of uncertainty in your model, it's likely that some unknown part of the model will clash with some of your assumptions. But when the uncertainty in the model is relatively low, the opposite is true: the simpler explanation for dinosaur fossils is probably something like "they were already there since the beginning", but knowing how the world actually works, an extremely complicated explanation involving living creatures with skeletons, which were similar to modern animals but a little different because of evolution, special kinds of bogs to preserve their remains, and so on, is actually more likely, and it's the one we choose to believe.
For this reason, I would be very wary of assigning any kind of intrinsic value to simplicity. To me, it mostly looks like a way of saying “the less words in your answer, the less likely that you’re wrong”.
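For what it's worth, the "most of the probability mass is in the simplest hypotheses" claim from the quoted passage can be made concrete with a toy length-weighted prior. The 4^-n weighting below is an arbitrary illustrative choice (any discount steep enough to keep the total finite works): each binary-string hypothesis of length n gets weight 4^-n, there are 2^n of them, so each length-class carries total mass 2^-n, and nearly everything sits on the shortest few:

```python
from fractions import Fraction

def mass_at_length(n):
    # 2**n hypotheses of length n, each weighted 4**-n,
    # so the total mass at length n is 2**-n.
    return Fraction(2**n, 4**n)

# Sum over lengths 1..59 stands in for the infinite sum (which equals 1).
total = sum(mass_at_length(n) for n in range(1, 60))
short = sum(mass_at_length(n) for n in range(1, 11))
print(float(short / total))  # > 0.999: almost all mass on lengths 1-10
```

Whether simplicity deserves this privileged weighting is exactly what's disputed above; the sketch only shows that *if* you adopt such a prior, infinitely many hypotheses pose no problem for assigning probabilities.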
Right, but many gods probably aren't possible mathematical constructs. Zeus is basically no different from an advanced alien, and I see no particular reason why Zeus cannot exist as a mathematical construct. The god of Spinoza/the Stoics is at least close to, if not identical with, the Universe (or Tegmark's mathematical multiverse), which does exist, so perhaps this similar-and-plausibly-the-same object exists too. But going by the Epicurean argument, an omnibenevolent, omniscient, and omnipotent god is self-contradictory, and so these gods (probably?) aren't possible mathematical constructs. If Tegmark's mathematical universe is true, then all gods with properties such as "is all that exists at the most fundamental level of existence" and "some property which isn't a property of the set of all mathematical objects" would also result in contradictions.
The classic Christian view of God as omnipotent/omniscient is inconsistent in general, because he can't affect all the other universes. Now, he could have a local copy of the MUH, because you can nest it arbitrarily, and control that.
> there is no neutral programming language, no neutral compiler, and no neutral architecture.
But Forth comes close.
In general you can have different languages that capture some simplicity (Forth, a simple Lisp, Turing Machines, say even C), and they will each be able to model the other languages tersely.
Does the universe even need to be consistent to exist?
Ask yourself the question: what would happen if a million dollars appeared in your room as you are reading this comment? Well, you would be surprised, then you would reread this comment, then you would... figure out that you don't actually exist, but are just part of the answer to this hypothetical, right? Then you would probably spiral into an existential crisis and think about what this means for the nature of reality, and whether or not you are even conscious or a philosophical zombie...
And it's not like 'realize you're in a story' fourth wall break either, because there doesn't need to be a story written anywhere or simulated on any hardware. The answer to the question "what would you do if a million dollars appeared in your room right now" exists as a hypothetical object, even though the physics of the underlying hypothetical universe are not consistent.
What, Scott, would you do if a million dollars appeared in your room as you are reading this comment? What would you think about the nature of reality? Would you reply to this comment asking me how I did it? Would you tell people that this is evidence that the MUH is true? What would you do?
I would infer I'm in a simulation, not necessarily that I'm in an inconsistent hypothetical.
My view is that a universe doesn't necessarily need to be "consistent" in some models, because those models alter the rules.
My go-to example is pointfree topology. You can have a locale of "all functions that map the natural numbers bijectively onto the real numbers" (back and forth, uniquely). No actual function fits that description. However, in pointfree topology, you can still get all the arbitrary 'approximations' that get close to it, satisfying rules about how they locally interact (join/intersect each other).
Now this doesn't answer whether it needs to be consistent, but I think it does provide a view that you can have inconsistent-looking setups that have consistent paths-to-the-end.
>Where did all of that come from? It was . . . inherent in the concept of z^2 + c, I guess. Somehow lurking latent in the void. Does the Mandelbrot set “exist” in a Platonic way? Did Iluvatar give it the Secret Fire? Can you run into it on your way to the grocery store? None of these seem like very meaningful questions to me, I don’t know.
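For concreteness, the object being marveled at really does fall out of a couple of lines of iteration; a minimal sketch (the grid bounds, step size, and iteration cap are arbitrary choices):

```python
def in_mandelbrot(c, max_iter=50):
    # c is "in" the set if z -> z**2 + c stays bounded starting from z = 0.
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| > 2 the orbit provably escapes to infinity
            return False
    return True

# Crude ASCII rendering of the region [-2, 1] x [-1, 1].
for row in range(21):
    y = 1 - row * 0.1
    print("".join("#" if in_mandelbrot(complex(-2 + col * 0.05, y)) else " "
                  for col in range(61)))
```

All the filigree in the output is "lurking latent" in that one update rule; nothing else is specified anywhere in the program.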
You can talk about them, so they are meaningful. They are probably difficult to prove or falsify. But simplicity is a thing as you said. Having said that, the ontological razor, assuming the minimal amount of stuff, and the epistemological razor, making the fewest arbitrary assumptions, pull apart.
Is the puzzle of the Secret Fire (or Reality Juice) epistemic or ontological? Is the problem "why only those things?" or "how does it work?" MUH and materialism are different theories, and a potential answer to "what is the reality juice that makes only some possible mathematical entities real?" is "matter".
To me any theory that has "reality juice" is a form of dualism. To be a real monist, you either have to have material facts *just* be facts about math, or mathematical facts *just* be facts about matter. Now as for which, and exactly how, that's the hard part.
Well, there is an interaction problem with math and physical reality. It's usually brought up under the heading "the unreasonable effectiveness of mathematics". It's puzzling that mathematical facts can constrain our expectations of physical reality, and that physically-implemented minds can derive (at least some) mathematical facts, if they are really in completely separate ontological magisteria.
Well, the "matter is reality juice" theory isn't that kind of dualism. It's more like hylomorphism: everything real is a fusion of matter and form, and you can't have the one without the other.
But that's just epiphenomenalism--if the Form determines all the behavior of the Matter, and the Matter just adds "reality", how is that different from Matter determining the behavior of the underlying p-zombie and Mind just adding "phenomenal reality"?
Tegmark is a technically credentialed thinker, so I assume he has considered with some care the following obvious objection, but there is more wrong with assigning some simplicity distribution to "all mathematical objects" than just the easy problem of putting a finite measure on an infinite set. Whatever this phrase is meant to mean, it must certainly be unsetly large, since there are plenty of unsetly large mathematical objects, for example the category of sets. To my knowledge, there is no standard theory of integration over proper classes, so just saying it's like how an infinite series can take a finite value falls far short of the issue, even as a toy example.
Does Tegmark mean all computable objects? Finite objects? Objects in some specified logic or universe of sets? If so, why these and not others?
Edited to add: it occurs to me that it was a great missed opportunity in mathematical nomenclature to refer to proper classes as unsetly large rather than unsettingly large.
I guess I'll need to read the book or some of the discourse around it - that section and the one following it make it sound like he is picking, in a pretty ad hoc way, very restrictive definitions of "all mathematical objects" to suit particular objections at particular moments. I think this undercuts a lot of the immediate appeal of the theory, which comes precisely from the "if you can think it, it exists" flavor of the unasterisked phrase.
As a sidenote, I'm reading Douthat's book, and the amount of research he does on the people who disagree with him is… extremely bad. He may be smug toward New Atheists or whatever because it's popular, but it's clear he never really understood many of their arguments/counterpoints. To name just one example: he constantly frames our understanding of the universe (i.e. the human ability to detect patterns and organize our understanding of those patterns into theories) as discovering the order of the universe, sort of presupposing an orderer. I'm not sure how to explain this without cutting/pasting examples, but he literally says at one point that the multiverse explanation demands a "by whom" answer. This is basic creationist argumentation: begging the question (sneaking your conclusion into your premise).
Anyway, there is a pattern of behavior by Douthat of not actually keeping up with the discourse. In his first few chapters he tries to delve into the multiverse and debunk it, and the most recent article he cites is 20 years old! In a revealing off-handed comment he refers to Lawrence Krauss's "A Universe from Nothing" as "recent" when it was published 13 years ago. His work is super shoddy, and borrows from the Christian apologist playbook of quoting scholars who disagree with him as if they agree, not actually portraying the arguments of the people he disagrees with (citing Nagel to debunk Dennett instead of quoting Dennett), and citing scholars pontificating on subjects irrelevant to their expertise (a classics scholar talking about neuroscience/consciousness).
None of this is inherently disqualifying, but anyone who's a veteran of the Internet Religious Wars of the Bush/Obama years will be frustrated. His book tries to make a neutral case for religious belief, but it's dripping with Catholic assumptions about a mind-universe, and it quite flippantly dismisses arguments that consciousness/the self is illusory (a view that's pretty common in Buddhism), so it's odd to call his the neutral perspective.
I'm not as high-IQ as Scott A or BB, but I try to keep an open mind, and I'm only 1/3 of the way through the Douthat book and it's so frustrating.
At the end of the day, all cosmological worldviews rest on faith in things unobservable*.
Philosophical debates can be fun, all the more so when it's philosophical world-building with math and physics. Falsifiability *may* indeed be inadequate as a vetting mechanism for multiversal cosmological questions, but the boundary where observability ends** and entirely independent, by-definition-unobservable cosmological spaces begin is the boundary between reason and faith.
* More accurately but messes up the shorthand aesthetic: things that can not be forced to become observable.
** Observability within our light cone, plus secondary, tertiary… effects from outside our light cone that have physical effects within ours.
>I think of MUH as claiming that there is no Secret Fire, no difference between possibility and existence.
I mean, sure, but you mentioned the logical necessity of mathematical objects as the answer to the question of "why is there something rather than nothing", and as far as I can tell, MUH doesn't have much to say about that.
> [...] but shouldn’t be necessary for it to be obvious). Once you put it like this, I think it makes sense [...]
Sorry, you want me to believe that I'm talking to an equation, and then appeal to "obviousness"? It's *obvious* that Tegmark is a crackpot, but if you want me to suspend my belief in the obvious, you can't continue to rely on it when it suits you.
> Luckily, this infinite number of hypotheses is manageable because most of the probability mass is naturally in the simplest ones (Occam’s Razor).
Sorry, you want me to believe that *everything* exists, even stuff that isn't real, and then appeal to Occam's Razor? (I mean, I'm willing to accede to Occam if you're willing to accept that simplicity is relevant because God values elegant universes…!)
And how did we go from "draw from an infinite set" to "draw from a set of infinite measure"? I was ready to accept the first statement, but the second one is not obvious at all, especially talking about drawing from a set of possible conscious beings.
Another point is that in a standard metric space (like R^n with Lebesgue measure), a set with infinite measure cannot be compact, because compact sets are bounded and bounded sets have finite measure. However, in certain non-standard topologies, exceptions might exist.
> “Consciousness” is a useful shorthand ..., but I don’t think anything in this discussion hinges on it.
You can't just take consciousness out of a discussion of God, since it is implied that he is a being possessing this property of superconsciousness.
Here's a possible argument for giving simpler universes more probability measure than complex ones: Simpler universes appear often as subuniverses of more complex universes. Simply put, if you throw a (conscious, observing) dart at the multiverse, what will it likely see? It would see the smallest self-contained universe in which it landed. Even if the dart actually hit an incredibly complex universe, it probably landed in a much smaller subuniverse inside that, and wouldn't be able to see past the confines of that subuniverse (which is what makes it a subuniverse) to the bigger universe containing that subuniverse.
I really must read Tegmark's book, I wonder if he mentions this argument.
Doesn't the definition of "universe" (whatever that may be - something like, "everywhere where these particular laws of physics apply", maybe?) preclude subuniverses?
If you're just saying "the dart will likely land in the most common type of sub-universal structure, which must tend towards the simple because complex structures by definition can contain simple structures but not vice-versa" then I'm absolutely with you, of course!
Simple structures can contain complex structures, given the technical sense of "simple" that is at play here. The inherent complexity becomes subsumed in indexical knowledge, that is, in specifying where *within* the simple structure the complex structure is located.
I don't fully understand your explanation here (sorry!) but if you're saying that simple structures can contain complex substructures whilst retaining less complexity in the overall [simple structure + complex substructure] system than the complex structure would retain alone" I'm afraid I don't agree with you at all.
It's hard to give a specific point of disagreement though, since I don't understand your point. If you're saying it's because the simple structure just contains a pointer to the complex substructure, then sure: a simple structure that contains a pointer to a complex structure is simple, true - but it does not in any sense contain the complex structure itself. If I write "The Second Oxford Edition of the Complete Works of Shakespeare" on a slip of paper, my slip of paper remains simple - but it doesn't actually contain the complete works of Shakespeare*. And if I write out the Complete Works of Shakespeare in full on my slip of paper then, yes, now my paper does contain a complex substructure - but the inscribed paper as a whole is no longer a simple structure.
No, it's quite the opposite. The fact that there's *no* pointer is what makes the larger structure simple. Let's use your example of "The Second Oxford Edition of the Complete Works of Shakespeare" as an example of a complex structure. The library that contains books containing all possible sequences of letters is much larger, but simpler to describe, and contains "The Second Oxford Edition of the Complete Works of Shakespeare".
Okay, thanks, I (think!) I understand your point now, and it's very clever and very elegant... and I really don't like it!
1) [Mostly to check that I understand your point] The Library of the Infinite String contains the Complete Works of Shakespeare as a substring, and you could generate the Library String with a super-simple algorithm like "Map every three digits of Pi to the ASCII character set, skipping over digit-triplets 256-999 (which fall outside the ASCII range)".
2) I can't help but think of the old joke: "Michelangelo had it easy: all he had to do was chip away the bits of marble that didn't look like David". I don't think it's accurate to say that a 20-foot block of marble contains all possible sub-20' marble statues. In order to get a marble statue out of the block we would need A) the block, and B) instructions for which bits to chip away.
3) So I think I would say that the block, by itself, contains the *raw materials* for a statue without actually containing the statue: the rules for which bits to chip away are in some sense encoded within the statue, or something, and the marble block alone doesn't have that encoding.
4) If we take "The Library String algorithm plus the appropriate start-digit and end-digit", this *does* seem to contain the Complete Works of Shakespeare: this is your "indexical knowledge", equivalent to my "marble-chip encoding"
5) Fiiiiinne, this must be the Kolmogorov Complexity of the Complete Works of Shakespeare then, looks like we have a really memory-efficient but processing-inefficient compression algorithm for generating the CWoS
5.1) I really don't like Step 5 at all because it means for any nontrivial string, even a perfectly random one, its complexity is entirely unrelated to its length (ugh!) - but I can't see any argument against it?
6) The Library String thus doesn't "contain" the CWoS until the index is added, and the Library String + Index is exactly equal in complexity to the (compressed) Complete Works of Shakespeare. If we want a structure which contains every possible string, that would be the Library String plus the index of every possible substring - far more complex than any individual substring. Thus, the principle "simple structures cannot contain structures more complex than themselves" is preserved (albeit at the cost of disassociating string-length from string-complexity)
7) I don't have much confidence in the above and I would greatly welcome a correction if I've gone wrong somewhere! Absent such a correction, though, I find that even though I really dislike it, I dislike it less than I dislike "simple structures can contain complex ones"...
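For concreteness, the generating algorithm sketched in 1) can be written in a few lines of Python. The short digit string and the exact skipping rule are my reading of the description, not a canonical construction:

```python
# First digits of pi after the decimal point -- a short stand-in for the
# full (conjecturally normal) expansion a real Library String would use.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

def library_string(digits):
    """Map each non-overlapping triple of digits to an ASCII character,
    skipping triples in the 256-999 range (one reading of the rule)."""
    out = []
    for i in range(0, len(digits) - 2, 3):
        code = int(digits[i:i + 3])
        if code <= 255:
            out.append(chr(code))
    return "".join(out)

# A deterministic check of the skipping rule:
# 065 -> 'A', 066 -> 'B', 999 skipped, 067 -> 'C'
assert library_string("065066999067") == "ABC"
```

The program is tiny while its (infinite) output would contain every finite string somewhere - which is exactly the "simple structure containing complex substructures" move under discussion.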
5.1 - the string length is still related to the string complexity *on average*, because there's a simple way of generating the library that makes sure to hit all the short strings first but there isn't one that hits all the long strings before the short strings, and the index itself has significant length that depends on how big it is. It's only on average because some long strings are easy to describe directly.
6 - the Library String plus the index of every possible substring is simple again because a very simple algorithm could list off all the possible substring indices.
One thing about very simple structures that contain complex structures is they have to be very large. There has to be enough "space" inside them that finding something complex is almost as hard as creating it from scratch. You could imagine a Museum where every possible sequence of chisel strokes has been applied down to atomic resolution to a nearly-endless number of identical 20-foot marble blocks. Probably the ideal interface to find anything in there would be a virtual chisel and hammer you could use to teleport to the next statue, with backtracking. It would still be as hard to find the David in there as to recreate it, modulo the ability to undo.
P.S. Does this get us down to P vs. NP? It would be really hard to figure out the start-index for the Complete Works of Shakespeare within the Infinite Library String - but pretty easy to check the index was correct once we had it...
Actually you need a lot of knowledge about Shakespeare to check the index is correct--as much knowledge as you needed to find it in the first place! So although the effort of the task is much less as you note, the complexity of it is the same.
Similarly, shorter programs appear as prefixes of many longer programs. For every program P that is exactly 100 instructions long, there are many programs that are exactly 200 instructions long, where the first 100 instructions are exactly the same as in P, and the instruction #101 is "HALT".
Or, for any set S of laws of physics, there are many larger sets which include all the laws of S, and describe some new particles that do not interact in any way with the particles from S.
(It is assumed that our universe contains "dark matter" which does not interact with the ordinary matter in any way, except for adding extra mass to galaxies. But this would instead be a "transparent matter" that does not interact with the ordinary matter in any way, not even adding mass. If "dark matter" exists, which I guess is still an open question, I see no reason to assume that "transparent matter" could not. This would drive Popper crazy.)
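The prefix point can be made concrete with a toy interpreter (the instruction set here is invented purely for illustration): any program P computes the same thing as every longer program that extends it with HALT followed by arbitrary junk.

```python
def run(program):
    """A toy interpreter: execute instructions in order, stopping at HALT."""
    acc = 0
    for instr in program:
        if instr == "HALT":
            break
        op, n = instr
        if op == "ADD":
            acc += n
        elif op == "MUL":
            acc *= n
    return acc

p = [("ADD", 2), ("MUL", 3)]                 # a short program P
padded = p + ["HALT"] + [("ADD", 99)] * 97   # a longer program with P as prefix
assert run(p) == run(padded) == 6            # the junk after HALT never runs
```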
"I owe an explanation of how morality can be interesting/compelling/real-enough-to-keep-practicing without being objective; I might write this eventually but it will basically be a riff on the one in the Less Wrong sequences."
Please do this. I've never understood why an atheist would care about doing "moral" things if objective morality isn't real.
Personally, I don't care at all about doing "moral" things. Unsurprisingly, most of what feels right to me aligns with common morality. However, I plan to end my life before my cancer starts to inconvenience me too much. For many people that would be against their morality, but for me it is completely fine.
I care about doing moral things the same way a dog cares about eating: It is a useful (necessary, even) strategy for my continued existence and it provides me with reward.
There's no need for 'hunger realism' to be an objective property of the universe for eating to make sense to the eater.
What exactly is at stake in the question of moral "objectivism"? (I am asking because I don't know: some people seem to think it is very important, and others not.)
People are interested in how they are treated and how they treat each other. It isn't clear to me what atheism or theism has to do with that.
If there's nothing objective, it seems like you'd just do what's best for yourself, which wouldn't generally involve donating money to faraway charities afaict
Well, it depends what function this word ”objective” has in your way of thinking. (Compare this quotation from I think some Buddhist monk ”you are real, but not really real.”) Also on what you mean by ”best for yourself”.
Well, that's what I'm curious about, because I wouldn't think it's "donate large portions of their money to charity", and yet that's what many EA atheists do.
Well, trying to think like an atheist, first you can't take something away that doesn't exist, but to try to answer the question, I think you'd be left with a bag of coordinated chemical reactions
Why do you want to? How do you construct a worldview that involves doing "good" things for little or no personal benefit without believing in an objective concept of good?
Why stop at imagining Boltzmann brains? Why not imagine Boltzmann Brahmas or Boltzmann Yahwehs?
If you believe that Boltzmann brains can (miraculously) form from the chance accretion of matter and then would be able to maintain their existence by (miraculously) parasitically gathering the free energy of the universe, well, it's a slippery slope to then imagining Boltzmann Brahmas — who assemble from the random accretion of matter — and who are able to not only parasitically gather the free energy of the universe but also manipulate the free energy of the universe to the extent that they could create their own universes.
Heck, while we're at it — if we can imagine that *anything* can happen in infinite space and time, no matter how low the probabilities, why couldn't the biological systems of a dead Yeshua ben Yosef spontaneously repair themselves for him to rise from the dead and scare the bejesus out of Mary Magdalene?
Of course, rationalists aren't supposed to believe in miracles — unless they're Boltzmann brains.
Boltzmann brains *can't* maintain their existence, or at least, they can with only vanishingly small probability. The whole thing that makes the idea of Boltzmann brains philosophically interesting is that they don't have any special "powers" and only involve things that are known to be physically possible for agglomerations of ordinary matter.
Boltzmann Brains are one of those things I call magical scientism — along with the MWI, the Singularity (aka "the Rapture of the Nerds"), AGIs that will suddenly become malevolent and bring the end times, and the belief that there are no limits to human knowledge or technology.
If you're a physicalist — and most of the people who inhabit ACX Space seem to be physicalists — you believe that the phenomenon of consciousness is tied to a biological substrate.* The biological substrate that supports consciousness is the result of a path-dependent evolution over billions of years. Boltzmann Brains are impossible, unless you believe that consciousness is not tied to specific physical systems. Boltzmann Brains would be possible from a panpsychist point of view, but panpsychists don't need them because the universe is already a conscious entity.
*Yes, possibly consciousness could exist on an electronic substrate, but that's mostly magical thinking on our part because we don't really understand consciousness or what its requirements are or how to initiate it.
A Boltzmann brain could be a literal *human* brain, or whatever the minimum necessary amount of a human body is to be conscious if you believe in some of the "embodied consciousness" stuff. It could even be a single planet with a moon and sun lasting (just) long enough for an intelligent race to evolve, most of the arguments go through even if you believe consciousness requires *that much* context to exist.
Also I don't know why anyone should be interested that you call things "magical scientism". You calling them that doesn't in any way provide evidence that they are false or uninteresting.
Despite Scott's snark about Popper, science requires falsifiable propositions that we can put to experimental tests.
For instance, we can't devise an experiment to prove or disprove the existence of God. So, we generally don't consider God to be a legitimate object of scientific speculation. Boltzmann Brains were originally used as a philosophical thought experiment to highlight the absurdity that mere random fluctuations in entropy could create the universe we see today. But then some cosmologists, like Andrei Linde, latched on to them as being possible given hypothetical multiverses and infinite time, yadda yadda yadda. And people started talking about them as if they were no longer a thought experiment.
> You calling them that doesn't in any way provide evidence that they are false or uninteresting.
No, I can't prove that BBs are false. But that's why they're not science — they're unfalsifiable. And, yes, they may be interesting in the way that Anselm's Ontological Argument or Aquinas' Five Ways were for Medieval scholastics arguing for the necessary existence of God.
Plenty of people seem to think BBs can and *have been* falsified, myself included! The fact that our best cosmological theories seem to predict something that seems to be false mean either they are wrong or there is something important that we are failing to understand about them or there is something important that we are failing to understand about how we make observations. This is how science is done. Ignoring predictions that "don't add up to normal" because "thinking about them is weird" is how science stagnates.
"For instance, we can't devise an experiment to prove or disprove the existence of God."
Yes we can. We can, and we have. We have disproven the existence of *many* gods. We can go up to Mount Olympus and look around, after all. The trouble is that "the existence of God" is a phrase that ends up standing in for an *alarmingly* large number of quite distinct hypotheses, so whenever one is disproven, somebody is always ready to try to sleight-of-hand another one in its place. After a few millennia of that sort of thing, the Top Tier Apologists have worked their way down to hypotheses that are *quite* hard to test. But less sophisticated believers are frequently calling on Gods whose existence *can* be falsified[1] and very often make wrong predictions as a result. Not that that stops them.
Now. "Boltzmann Brains can exist" is also a very, very difficult hypothesis to test. To do it with an actual experiment, one would need to wait a rather impractically long time or observe a rather impractically large amount of space. Otherwise the closest you can come is working very carefully through the implications of physical laws, checking to see if anything *precludes* their existence. But science makes all sorts of predictions that are hard to test. That doesn't mean they're *nonsense*. That just means they're...untested predictions. It happens. We also don't know (for example) that nuclear pulse propulsion or antimatter rocketry would work the way we expect. We have pretty decent reasons to *expect* they would, but if the ultimate arbiter of reality is to be experiment, we can't claim to have done the experiments yet[2]. That doesn't mean they're nonsense. It just means they're well-supported, but untested. Boltzmann Brains are rather less well-supported, but they're not a nonsense idea. They're just a weird, maybe-wrong-but-also-maybe-not consequence of our current understanding of physics.
By contrast, a Boltzmann Yahweh is a different creature altogether. We *know* brains exist. We *know* roughly what they're made of and how they fit together. We have no clue whatsoever what goes into the making of a Yahweh, or if it's the type of thing that even *could* exist. If there's some combination of atoms that fits together into a Yahweh, then Boltzmann Yahwehs are every bit as reasonable as Boltzmann Brains (though unless a Yahweh is very small, they will be much less probable).
"why couldn't the biological systems of a dead Yeshua ben Yosef spontaneously repair themselves for him to rise from the dead and scare the bejesus out of Mary Magdalene?"
They could, of course. That's basic thermodynamics. The more standard example is the un-breaking of an egg. It violates no laws of physics for an egg lying broken on the kitchen floor to spontaneously re-assemble itself and jump back onto the counter. We know this because the laws of physics are time-reversible (at least to a good enough approximation for that). But a broken egg is a much higher-entropy state than an unbroken egg, so the reassembly is very, very, VERY improbable. You would have to drop an unimaginably large number of eggs and wait an unimaginably large number of lifetimes to see it happen. And you'd have to re-load the world from the day of the Crucifixion an even larger number of times before you got a spontaneously-reassembling Jesus.
[1] Like the sort that helps your team win a football game.
[2] In fact, we could get really nitpicky and play this game with existing tech. Do we know for sure that the next iPhone to roll off the assembly line won't hit some hitherto unseen physical constraint and totally stop working? Not until we do the experiment!
What is your proof for the non-existence of gods or God? Climbing to the top of Mount Olympus only proves that Zeus and his kin don't live there — not that they don't exist — nor that they may have resided there in the past.
I think you think that the Duhem-Quine thesis says something that it doesn't. D-Q basically claims that scientific hypotheses cannot be tested in isolation because they are always part of a larger web of assumptions and theories. Big deal.
To paraphrase Thomas Kuhn: If Quine’s extreme holism were true, it would be impossible to determine whether one theory is better than another since all theories are interconnected.
And Karl Popper argued that while hypotheses are tested within a web of assumptions, scientists can still subject them to falsification by designing critical experiments that make specific predictions. Although Popper acknowledged that auxiliary hypotheses exist, he insisted that science progresses by deliberately attempting to refute theories and refining them accordingly. For instance, Einstein's Theory of General Relativity presented astronomers with a falsifiable hypothesis. It was tested in 1915 when it successfully explained the discrepancy seen in Mercury's orbit with what Newtonian gravity predicted. This was the first of many confirmations of General Relativity, and GR stands alone as a solid theory, although Quantum Gravity may modify its underpinnings.
For starters, rationalists are supposed to understand how probability works. The more miracles something requires, the *less* likely (exponentially) it is.
Universes with a Boltzmann Brahma that exists for a nanosecond? Sure, they are extremely rare, but why not. Universes where the Boltzmann Brahma exists for *two* nanoseconds? They are extremely rare even among the former, but why not. Etc. The problem with Boltzmann things is that every additional nanosecond of their existence seems to require an additional miracle.
As an alternative, consider a rule-based universe. Yes, its existence is itself a miracle, but... that might be the last miracle you need, if afterwards all it does is follow the rules. One miracle is more likely than zillions of miracles happening in the right sequence.
I agree rationalists are supposed to know how probability works. For instance, the odds that a ball will fall on a particular number on a roulette wheel is 1/38. But the idea that a bunch of atoms will suddenly come together and create a roulette wheel is not within the realm of probability.
First, you've got to deal with the Second Law of Thermodynamics, which states that entropy (disorder) always increases in an isolated system. Life exists on Earth because we're not in an isolated system (we're getting continual energy input from the Sun). Imagining that Boltzmann Brains or functioning roulette wheels will appear out in space from interstellar gas would violate the Second Law.
Second, everything we've been able to observe in our Universe, at least above the quantum-mechanical scale, can be explained by Leibniz's Principle of Sufficient Reason (PSR), which states that nothing happens without a reason or sufficient explanation. Lee Smolin discussed this in his book _The Singular Universe and the Reality of Time_. Basically, you can't get something from nothing — at least as the Universe exists now.
Third: Our universe — at least as it exists now — functions under the Principle of Least Action (PLA), which plays a fundamental role in classical mechanics, Quantum Mechanics, and Relativity. For instance, light follows the path that minimizes the travel time between two points. In Relativity, the motion of objects in curved spacetime follows geodesics that extremize the action (from which Einstein derived his field equations). And in Feynman's interpretation of QM, a particle technically explores all possible paths, but the dominant contribution comes from paths that minimize (actually "extremize") the action. So water isn't going to run uphill. And atoms are not going to diverge from their courses through interstellar space to form Boltzmann Brains nor roulette wheels (of either the US or European variety).
The Second Law of Thermodynamics is only true statistically; Boltzmann Brains live entirely within the gaps this creates, which is also why they overwhelmingly tend to disappear right away and always disappear eventually. The Principle of Sufficient Reason is not a widely accepted scientific law; Lee Smolin is a bit of an iconoclast. The Principle of Least Action is not violated: the atoms are following their courses, not diverging from them.
If you want to believe that Boltzmann Brains are popping in and out of existence, go right ahead. If you want to believe in a creator-deity that launched the Singularity that caused the Big Bang, that's your prerogative. If you want to believe that a creator-deity created the planet Earth in seven days, layered the geological strata with fossils, and created the cosmic microwave background to make the universe seem older than it is and fool unbelievers, you're free to do so.
Just don't call any of those ideas scientific.
Science requires hypotheses that are testable. If you can't gather the data because it's out of the range of your instruments or it's too rare to be observed, then the untested hypothesis is just a hypothesis — i.e., a belief without supporting data. Some beliefs may sound more scientific than others because they invoke mathematics or logic, but it's all speculation and belief until someone can prove them or disprove them with data.
Only the Second Law of Thermodynamics is a law of physics; the other two are loose heuristics at best. And the Second Law does NOT forbid such things. It just points out that they're very, very improbable. A clump of atoms of the proper number and kind to form a roulette wheel can be arranged in an utterly staggering number of ways, only a tiny, tiny fraction of which resemble a roulette wheel. That right there is basically the *entirety* of the Second Law: disordered states are more numerous than ordered states, and thus more probable.
This is a quite standard part of a decent physics education: as I said elsewhere, the broken egg reassembling itself is a common illustrative example. If you think the second law *forbids* low entropy states, you have not learned it sufficiently well.
Did some creator-deity inscribe the Laws of Thermodynamics on a stone and declare "These are my Laws; everything else is Heuristics"?
Laws of physics and heuristics are similar in that both serve as guiding principles for understanding and predicting outcomes: the laws of physics predict physical phenomena with high precision, while heuristics offer rules of thumb that guide us to probable outcomes in a framework of less precision. Both have an empirical basis and are derived from observation and experience. Physics laws come from experimentation and rigorous testing, while heuristics develop from practical experience and patterns of what we observe.
And BTW, if you're going to discuss Lagrangians in Physics, the Principle of Least Action and the concept of Lagrangians are fundamentally interconnected in classical mechanics. The Principle of Least Action is the guiding rule that determines the true path of a system by extremizing this action. Seems kinda like a law when you look at it that way. Just sayin... ;-)
The MUH doesn't "defeat" any actual proofs of God, just like a sloppy sketch of a perpetual motion machine doesn't "defeat" the first law of thermodynamics. To achieve that, you'd have to show that the sketch translates into a working machine, i.e., the hypothesis is true.
Until you do that, at best the MUH provides an alternative to the "proofs of God" that suffer from the same conceptual gaps. Which is better than nothing, because all of the supposed proofs have some metaphysical assumptions, and if you showed that even if those held true it still wouldn't constitute a proof of God, but something else as well (which would then mean there has to be more than one flaw in one of the approaches).
The glaring fundamental flaw in the MUH that Scott acknowledges above, but glosses over, is that you can't define something into being. The Mandelbrot set is actually a nice example. Sure, it exists in the realm of mathematical possibilities, but in a sense, only when people had access to computers to approximate the math numerically did it take steps from the potential to be instantiated as a fancy poster in a shop for psychedelia. Are there fractals out there in mathland that no living creature has seen? Probably.
Soooo... the "proofs of god" are metaphysical BS, and the MUH is metaphysical BS until someone patches that gap convincingly. And I got to stop now, the guinea pigs are hungry.
· "[...] my impression was that this is a useful hack, but not perfectly objectively true, because there is no neutral programming language, no neutral compiler, and no neutral architecture":
Is this not what a Turing machine is? The simplest theoretically-possible, entirely language-agnostic, entirely universal (if the Church-Turing thesis holds) computing system? I would have said that measuring Kolmogorov complexity using a real-world programming language would indeed be a useful hack, but measuring the Kolmogorov complexity of X on a Turing machine means something akin to "What is the simplest possible rules-based process that can reproduce X in perfect fidelity?"
I think that what real-world programming languages are doing is making the program simpler by moving complexity from the program into the machine. This is frightfully useful for real-world programming where we have to scratch-write each new program but we can copy-paste machines (which are themselves mostly made of existing programs)*, but since transferring complexity in this way absolutely must incur an overhead** it's not useful/relevant for talking about the simplest possible rules-based process for producing any given output, which has to be a Turing machine: anything simpler wouldn't be universal and anything more complex would be reducible to (Turing machine + overhead).
* Which is why real-world programming is done on like a dozen layers of machines built upon machines built upon machines…
**I was far too hungover in my lectures on the subject to be certain, but I'm somewhat confident that this has in fact been proven. I'm afraid all I can offer now is an intuitive not-quite-proof, though: the same program (where "same" means something like "produces an isomorphic output") will always run faster and in less memory in a dedicated hardware arrangement of transistor logic gates than it would in the Assembly language; faster and in less memory in Assembly-on-hardware than in C-on-Assembly-on-hardware; better in C-on-Assembly-on-hardware than in C++-on-C-on-Assembly-on-hardware, and better in C++-on-C-on-Assembly-on-hardware than in Java-on-C++-on-C-on-Assembly-on-hardware...
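For what it's worth, the standard result backing this intuition is the invariance theorem for Kolmogorov complexity: for any two universal machines U and V, the complexities they assign to a string differ by at most an additive constant, namely the length of a program for U that simulates V:

```latex
K_U(x) \;\le\; K_V(x) + c_{U,V} \quad \text{for every string } x
```

So the choice of reference machine matters only up to a constant, which is the precise sense in which Kolmogorov complexity is "language-agnostic" despite there being no single neutral machine.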
· "Since the universe actually has to implement the solution we come up with, there can’t be any ambiguity. But how can the cosmos make an objective cosmic choice among programming languages":
The "the universe has to.." phrasing is confusing me a bit, here! I thought that these mathematical objects were according to the theory never produced and exist-without-beginning outside of the universe, but that we were interested in hypothetical instructions for producing them nevertheless (on an equally-hypothetical Turing machine that has, amongst other things, infinite memory..) because we're curious about the objects' relative complexities and Kolmogorov showed that "length of simplest-possible instructions for producing an object isomorphic to [mathematical object]" is another way of saying "complexity of [mathematical object]"?
· "You can design a compression scheme where the entirety of the Harry Potter universe is represented by the bit ‘1’ ":
I think the bit-length of PotterZip is actually longer than the bit-length of the uncompressed PotterVerse! In order to know _exactly what_ it is supposed to represent by '1'*, PotterZip must contain within itself a structure isomorphic to the PotterVerse**, and it also.. er, "must" (once again I didn't pay enough attention in university to remember the actual proof of this; sorry!) contain a structural overhead. (And even if we did assume no overhead whatsoever, the minimum bit-length for the PotterZip _system_ (machine + program, compressed data + decompression algorithm, however one wants to think of it) would still be the bit-length of PotterVerse plus the '1'.)
This problem doesn't exist with actual (lossless) compression schemes: they genuinely do contain everything you need within them to losslessly reproduce the full version when given the compressed version as input, and for sufficiently-large uncompressed data the bit-length of 'compression machine + compressed data' genuinely is shorter than the bit-length of the uncompressed data.
.....this doesn't mean that PotterZip is useless, though! If you had _multiple_ PotterVerses to compress, the bit-length of the PotterZip system would be as short as "bit-length of one PotterVerse + ('1' x 'number of uncompressed PotterVerses in the input')", and PotterZip would output for you a handy count of how many PotterVerses you had.
* Or to think of it another way: "In order to know what it is supposed to output when given an input of '1' and set to "Decompress"..."
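The accounting here can be made concrete. A toy sketch (the stand-in text and sizes are of course hypothetical; the real point is just that the decompressor must embed what '1' stands for):

```python
# Hypothetical stand-in for the full uncompressed PotterVerse text.
POTTERVERSE = "Mr. and Mrs. Dursley, of number four, Privet Drive... " * 1000

def potterzip_decompress(compressed: str) -> str:
    """PotterZip: every '1' in the input decodes to one full PotterVerse."""
    return POTTERVERSE * compressed.count("1")

# The decompressor must contain a copy of the PotterVerse to know what '1'
# means, so the system (machine + compressed data) only pays off on repeats:
def system_size(n_copies: int) -> int:
    return len(POTTERVERSE) + n_copies      # embedded copy + n '1' symbols

def raw_size(n_copies: int) -> int:
    return n_copies * len(POTTERVERSE)      # n uncompressed copies

assert system_size(1) > raw_size(1)        # loses on a single PotterVerse...
assert system_size(100) < raw_size(100)    # ...but wins once copies repeat
```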
Ooh, interesting question! I think it'd be Turing machines with an infinite tape in both directions, because there would be no overhead in simulating a one-way-infinite tape whereas a one-way-infinite machine would have a slight overhead in simulating a two-way-infinite tape.
(I also think probably you asked the question Socratically, to show me that it's not as straightforward as "Turing machines = simplest for Kolmogorov complexity"? If so - very clever; nice! I do still think that, even if *I* can't define the simplest-possible programmable system, there must nevertheless be some theoretical system that is simpler for K-complexity purposes, and I suspect that asking questions about how much overhead there is in having systems emulate each other might teach us useful things about that theoretical system...)
Your program is some finite sequence of words only containing 0 and 1, for example: 001, 111, 00
The initial state is a word containing only 1s and 0s and a pointer pointing to the first word of the program.
One step of computation is the following: if the leftmost symbol of the current data word is 1, then the program word under the pointer is appended to the right end of the data word; if it's 0, we don't append anything. In both cases we delete the leftmost symbol of the data word and move the pointer to the next word of the program (going back to the first word if we were at the last).
These systems are called cyclic tag systems and can compute exactly as many things as Turing machines can. Would you say this is more complex than a Turing machine?
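The rules above are short enough to run directly; a minimal sketch using the example program:

```python
def cyclic_tag_step(word, program, i):
    """One step: append program[i] iff the leftmost symbol is '1', then drop it."""
    appendant = program[i] if word[0] == "1" else ""
    return word[1:] + appendant, (i + 1) % len(program)

def run(word, program, steps):
    history = [word]
    i = 0
    for _ in range(steps):
        if not word:          # halt if the data word becomes empty
            break
        word, i = cyclic_tag_step(word, program, i)
        history.append(word)
    return history

# With program 001, 111, 00 and initial word "1", the word cycles:
print(run("1", ["001", "111", "00"], 4))  # ['1', '001', '01', '1', '001']
```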
This is really interesting and taught me lots; thanks!
I think I would say (just guessing!) that Turing Machines might still be simpler because A) there would be lots of overhead in having a Cyclic Tag system simulate/implement a Turing Machine but not much overhead in having a Turing Machine simulate/implement a Cyclic Tag system, and B) even were the Cyclic Tag system itself simpler to describe than a Turing Machine (which it might-or-might-not be but let's examine the least-favourable case) it probably isn't *orders-of-magnitude* simpler to describe and so the overall complexity of [system + input] is probably still in the Turing Machine's favour when it comes to simulating/implementing the other system.
I think both A and B are required, here; A alone would mean that the Turing Machine is just more powerful, not necessarily simpler, and B alone would mean that the Machine itself was simpler but that this simplicity resulted in far more complex output.
I'm just guessing here, though! I accept your general point that there actually isn't one obvious system we should consider "the simplest" and use for purposes of K-complexity - but I do still think that probably there is some system that is in some sense consistently in some "sweet spot" when it comes to assessing [system + input] length (even if it's not the simplest-possible system by every metric and even if I personally have no idea what actual system it is)
Turing machines are not the simplest possible nor language-agnostic. The Church-Turing thesis also applies to JavaScript, C, Brainfuck, Haskell etc. A better candidate would be lambda calculus or SKI combinator calculus.
Okay, fair enough, I'm sure you know this stuff better than I do. It does seem as though we both broadly agree that there are candidate systems that meet the necessary criteria for fair determination of K-complexity (eg. "shortest possible complete description of [language/system + program]" as opposed to simply "shortest possible description of [program only] irrespective of language/system"), though!
I have no detailed comments except to say that philosophically, I Like the mathematical universe idea. The neatest answer to "why is there something rather than nothing" is that there is not and could not be any difference.
And e.g., running Conway's Game of Life from the same initial conditions twice doesn't feel like making two things happen, it feels like making the same thing happen twice, which I guess means it's just one thing that's already there whether or not I run a program that lets me look at it
(Existentially, were I to think about this too hard it may or may not irk me, since in that case everything that can happen is already happening, good or bad, whatever I do. Though idk maybe there's an anthropic or TDT thing where making good choices means I'm in an omniverse where more probability mass is in universes where good things happen to mes)
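The "same thing happen twice" intuition is easy to check mechanically; a small sketch of a Life step (set-of-live-cells representation):

```python
from collections import Counter

def life_step(cells):
    """One Life generation; cells is a set of (x, y) live coordinates."""
    neighbours = Counter((x + dx, y + dy)
                         for (x, y) in cells
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    # A cell is alive next step if it has 3 neighbours, or 2 and was alive.
    return {c for c, n in neighbours.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
run_a, run_b = set(glider), set(glider)
for _ in range(4):
    run_a = life_step(run_a)
for _ in range(4):
    run_b = life_step(run_b)
assert run_a == run_b  # same initial conditions, same history, every time
```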
> how can the cosmos make an objective cosmic choice among programming languages?
I think there is more hope than you have. The candidate perfect languages we have (e.g. lambda calculus) seem to have better properties than necessary from the theory. For example, I think when implementing something as lambdas or as a logical formula, you would rarely want to first create an interpreter. So we can probably exclude harry-potter-lang.
This may be a yet unknown but learnable thing. For example, we don't even know the precise computational model for our universe. But as we learned physics we learned that the model uses quantum computing, and retroactively we can see that it was mathematically natural all along. So I'd start with solving fundamental physics and maybe at some point we'll get to being sure of what the perfect language is.
I just want to mention that Nevin's a very good philosopher with lots of expertise in both arguments about God and the more technical points of probability theory, so if he has a reply it's probably worth listening to - not many academics have both.
Maybe I'm the "abstract concept of the number 5", care to refute that idea? You can't, because no part of that sentence makes sense - great, it's a "counterexample" to philosophical theory XYZ, let me write a blog post.
I'm a bit surprised "Tegmark's mathematical universe" is actually something taken seriously. I was bothered by the idea that maybe existence was just "mathematical possibility" about 25 years ago as a teenager but even then I relatively quickly realized that actually there are a lot of pretty strong notions that go into what it means to be a "possibility" and stopped taking the thought seriously as any kind of foundation. After that I went on to learn more about mathematics, now I take it even less seriously - mathematically, there are just so many ways to define things and group things as being "equal" or different or define structure etc that the idea seems even more nonsensical now.
Also: "By existing, you are a random draw from the set of possible conscious beings." is so nonsensical (your intuition for randomness comes from how "random" things behave in this universe, wtf even *IS* a "random draw"?). Sure, you can define a measure on infinite spaces but it's far from simple - just defining the usual uniform measure on a line segment is an undergrad lecture in mathematics in itself. Speaking as someone who actually understands mathematics - if you don't actually understand it, stop trying to use it in your philosophical theories. You can't just gesture at mathematics and use it to fill logical holes.
I really like Tegmark's hypothesis, but then again, I am a radical mathematical realist with a really strong intuition towards independence of mathematical truths which probably puts me at odds with most readers of this blog.
> I owe an explanation of how morality can be interesting/compelling/real-enough-to-keep-practicing without being objective; I might write this eventually but it will basically be a riff on the one in the Less Wrong sequences.
Re: the points about Plato etc., I think you gave a hostage to fortune by writing that Tegmark's thinking was novel. This of course left the gate wide open for the bull to get out of the field and for all us nitpicky contrarians to go "well ackshully" 😀 The theory may well be novel, but it's not like nobody ever before this guy ever thought anything similar.
" But “dinosaurs really existed, it wasn’t just the Devil planting fake fossils” is exactly equally unfalsifiable. It’s a double-edged sword!"
There's going to be a lot of angry atheists angry about this. I get what you're saying but it's not the kind of thing that they like having pointed out.
Thanks for the Mandelbrot set, that brought me back to the 90s.
I used to think the idea of Boltzmann Brains was too weird and, indeed, nuts to even contemplate, but I'm beginning to change my mind. If I *were* something that just popped into existence randomly a few moments before, am currently AI-hallucinating the entire history of the universe, and will pop back out in another few moments - I'm telling you, that would make an *awful* lot of sense regarding my life.
"From the physicist's perspective, both "God did it" and "anthropics did it" communicate the same thing: that investigating why the universe is the way it is, is a waste of time."
Finally, I admit an aesthetic revulsion to the particular way Bentham is using “God” - which is something like “let’s imagine a guy with magic that can do anything, and who really hates loose ends in philosophy, so if we encounter a loose end, we can just assume He solved it, so now there are no loose ends, yay!” It’s bad enough when every open problem goes from an opportunity to match wits against the complexity of the universe, to just another proof of this guy’s existence and greatness."
I agree with both a'yiz on this, but I don't agree that "God created the universe and everything in it" does stifle investigating why the universe is the way it is, and please don't drag Galileo into this. There was room before and after him to work on "why does this thing do this in this way and not that way" without stepping on the toes of the Church, and indeed "God did it, no reason to ask more" was something not well-regarded by thinkers of the time. See, courtesy of the late Michael Flynn who brought this quote to my notice, William of Conches in the 12th century:
"Or more directly, William of Conches in (iirc) the Dragmatikon:
"[They say] 'We do not know how this is, but we know that God can do it.' You poor fools! God can make a cow out of a tree, but has He ever done so? Therefore show some reason why a thing is so, or cease to hold that it is so."
A lot of the disagreement is over those who think of God as coming at the end versus those of us who think He came at the beginning, or indeed is outside of the entire frame of 'beginning' and 'end'. Of course God is perceived as necessarily complex if viewed as arising out of a series of increasingly ordered and complex entities, and then that complexity becomes an argument for 'so God could not have existed at the beginning, because the beginning was the ultimate simplicity' (I think Dawkins used that argument).
Metaphysics is hard, indeed. And when we get out of the empirical, concrete, observable 'kick a stone and stub your toe' universe, into the realm of Forms and philosophy, it gets wild and trackless.
Current status: still believing in God, still happy to read these speculations, still resolutely un-insulted because Scott never does write with intent to insult anybody and does argue all this in good faith.
Speaking as an angry atheist, I actually agree with you -- answering any question with "God did it by magic" is, philosophically speaking, equally as valid as saying "This happened as the result of natural processes which I can attempt to understand via further investigation". Given that the two answers are equally valid, I choose the one that ultimately enables me to have houses made of stone and warmed by fire, plentiful food, cars, computers, spaceflight, eradication of diseases, and all that other good stuff. I will cheerfully admit that doing so is just a whim of mine.
And yes, you can reframe the later stance as "This happened as the result of divine processes that follow complex yet non-arbitrary rules which God chooses to strictly enforce at all times, and thus I can attempt to understand via further investigation"; but this is just a word-game that you can play, with no significant impact on anything. For the purposes of building cars and computers and all that other good stuff, it doesn't matter whether the Universe operates by some set of invariant rules that are external to ourselves because that's just how universes are; or whether the Universe operates by some set of invariant rules that are external to ourselves because that's how the eternal unchanging God likes to run things. The answer is the same either way.
>People bring up the fine-tuning argument as a proof of God. If I show that other things can create fine-tuning, then God is no longer proven. This doesn’t mean God definitely doesn’t exist. It just means that we’re still uncertain.
This just seems like playing word games around what counts as "proof". If you want to insist that "proof" refers only to arguments that grant absolute logical certainty, then fine, you can call fine-tuning "compelling evidence" for God instead if you want. But "Tegmark's Mathematical Universe Hypothesis shows God's existence is only supported by compelling evidence, not proven" is not a particularly compelling argument for atheism.
If people are happier believing the answer to "why is there something rather than nothing?" is 'because maths' instead of 'because God', let them. We're all going to die one day and we'll know then which is true.
Here's another reason I think Tegmark's Multiverse is mostly untenable.
Ostensibly we're letting our mathematical objects be defined in symbols and calculating K-complexity over such descriptions. However, who decides how these strings of symbols map to the things that "exist" according to Tegmark?
It's trivially easy to just label everything with a unique symbol in a mathematically consistent way, since Tegmark permits infinite sets in physical reality. So are all objects equally simple now? All the information content is in the rules that govern symbol manipulation, but that information is now shared in the interaction between an object and all others. Mathematically, think parent category vs its Yoneda embedding.
In practice the map between symbols and meaning grows naturally to be useful and even changes over time, so it's not at all helpful for thinking about supposed
Given how you seem to be getting a bit annoyed toward the end of the post, let me write a bit from the silent minority saying how much I enjoyed hearing you defend these arguments on your blog, and give such a cool hypothesis wide coverage.
I remember vividly first reading about the MUH (at first, via fiction in Egan's Permutation City, and shortly after in Tegmark's books and papers). It was like a lightning bolt, supercharging my existing intuitive sense of mathematical Platonism into a more concrete sense of *this is the answer*. "The unreasonable effectiveness of mathematics in the natural sciences" had gnawed at me for so long, not because it seemed puzzling, but because I just couldn't imagine any other way for the world to be. And the MUH spells out all the consequences and solves so many problems at the same time.
I also really appreciate your accessible treatment of, and defense of, the subtleties around measures. Both that having one is natural, and that it's weird and unsatisfying to pick a particular one.
I remember reading some SSC post way back where you referred to Tegmark as "the man who figured out why there's something instead of nothing". Ever since then I've had a small secret joy in my heart knowing that you also enjoyed his theory. It's so great to see it making its way to the front page of the blog.
The section on Popper/falsifiability could do with a bit more charity, IMO.
Let me see if I can rephrase: What we really do when debating hypotheses is we *test them.*
Knowledge is meaningful to the extent that it is (or has the potential to be) *prophecy*, that is, to the extent that it can contribute to actual predictions with actionable confidence. Without that, it's "mere explanation."
If Tegmark doesn't actually give us the ability to act on the world any differently than we would without it, even to the degree of developing future models or opening new lines of inquiry, then it's a belief that pays rent by providing emotional satisfaction, no different in principle from believing that the world was created when Odin slayed a big giant.
It's *nice* to have a satisfying explanation for the nature of the world, sure. But it's a bit weird to see ACX of all places discard the prediction criterion in favor of the satisfaction of having a 'right' answer. It's called 'Less Wrong' for a reason!
Knowledge is true to the extent that it tells you what reality is, beyond making predictions. LW/Yudkowsky style rationality isn't instrumentalism, the prediction-only approach, because they care about things like Many Worlds versus Copenhagen, and atheism versus deism. No Logical Positivist, He.
If there's a fundamental axiom of rationalism, it's "The map is not the territory."
TMU is a description- a map, in other words. And a map that can't be used to navigate is just a pretty picture.
LW rationalism absolutely has a strong instrumentalist bent- for example, one of the reasons for the early focus on atheism was to get people to internalize the fact that there will be no providential guard rails preventing them from wiping themselves out with AI. Big Yud had a specific (and explicit) reason for advancing that discussion.
"LW rationalism absolutely has a strong instrumentalist bent " That's an argument against "LW is not instrumentalist at all", which is not a claim I was making.
"My mental models of phenomena are lossy representations of a more-complex reality, and I'm going to assess them by their utility rather than declaring them to be sacred and unassailable Truth" is not a rejection of scientific realism. It's a practical, measured, and sane response to the uncertainties of inductive learning.
You seem interested in having satisfying explanations for things, whether or not those beliefs pay rent. I don't mean to tell you not to do that; it seems important to you. But let's not kid ourselves. The reality that hosts your brain is necessarily more complex than your brain, and any explanations you provide are nothing more than words, words, words. If those words are neither functional nor useful, then the explanations they provide are merely decorative.
I was hoping to see a response to my comment or a similar one on the original post. Quote is Scott's, response is mine:
> By existing, you are a random draw from the set of possible conscious beings. You can’t make a random draw from an infinite set, but the accepted solution is some kind of measure weighted by simplicity. So even though every possible mathematical object exists, simpler ones exist more. Most conscious beings exist in very simple universes, ones that (like Life) are just a few short rules which produce surprisingly complex behavior.
This feels like a band-aid solution to questions like, "If every possible universe exists, why wouldn’t we expect something bizarre, like gravity suddenly changing in 30 seconds?" A universe where gravity depends on time doesn’t seem infinitely more complex than one where it remains constant.
Yet, for every universe with constant gravity, we can describe infinitely many that, up until this point, had constant gravity, but will shift to conditions unsuitable for life in the next second. So why should we assume we exist in one of the rare, stable ones?
-----
Just to reiterate, all universes described in this comment are the same at the current moment, and will only diverge in the next second.
> A universe where gravity depends on time doesn’t seem infinitely more complex than one where it remains constant.
It's not *infinitely* more complex, just a lot more. The laws of gravity observed thus far can (at least aspirationally) be embodied in a few short equations, but in the alternative hypothesis you'd have to append:
> ...until time T=264862...[a number with ~60 digits; see "age of the universe in Planck times"]...505292, at which point the laws become the following: ...
This hypothesis is heavily penalized for its complexity, so although we can't rule it out with 100% certainty, we can basically ignore it.
At any rate, this isn't a problem with Tegmark's theory specifically. Even if you only think there's one universe, i.e. the one we live in, we are still in a similar state of uncertainty as to whether the laws governing it include that addendum or not.
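As a rough order of magnitude for that penalty (assuming ~log2(10) bits per decimal digit of the switch time):

```python
import math

digits = 60                             # a Planck-time-scale switch time T
extra_bits = digits * math.log2(10)     # ~199 bits just to write T down
penalty = 2.0 ** -extra_bits            # prior odds against the amended law

# The amended law ("...until T=..., at which point the laws become...")
# starts out roughly 10^60 times less probable than the plain law,
# before any evidence arrives.
print(f"{extra_bits:.0f} extra bits, penalty ~ {penalty:.1e}")
```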
> This hypothesis is heavily penalized for its complexity, so although we can't rule it out with 100% certainty, we can basically ignore it.
The problem is that, for every single universe with constant gravity, there are infinitely many universes that don't have constant gravity. Just for our universe, there's a universe where at t+1 gravity will double, one where it will triple, one where it will quadruple, etc etc. Not to mention t+1.0000000001, t+1.0000000002, t+1.0000000003.
With so many infinities to choose from, I just can't see there being a higher number of simple universes. The explanation is more mystic than most religions.
Complexity: For the change in gravity to happen to us it would have to also pin down the time we're on Earth for us to observe it, which adds a *lot* of extra complexity.
----
Anthropic: If gravity swung around a bunch since the beginning of time, that makes it notably harder for life to develop, and so we're less likely to exist in that universe.
----
Low Complexity: Another explanation is that gravity is not an isolated thing, all the forces are pinning each other down to a substantial degree. This makes it less of an independent set of a hundred numbers and closer to a dozen numbers. (or something, I don't know how I'd calculate that precisely without a lot of effort)
----
Multiverse: One consequence of a multiverse, which we plausibly have even with just QM, is that there could be a decent chunk of universes collapsing. I look around the world, but this doesn't isolate any single mathematical object, but rather a whole bunch of them. Instead of assuming we exist in one of the rare stable universes, we don't. Rather, we lose some amount of measure every unit of time due to reasons as you describe, the universe changes suddenly and we often simply die out.
This then makes it so that we're on the tracks where it is stabler, such as constant, or at least approximately constant for now.
I don't think any of these completely defuse the argument you give, but I do think they make it less odd.
I don't think it's true that you can't "make a uniform random draw from a set of infinite measure". Here's a way:
1. Make a uniform random draw from the interval [0,1).
2. If the number you draw has infinite digits, reject it and go back to step 1. (Caveat: the probability of getting to step 3 at any single iteration is zero, but it will happen eventually if you have infinite time.)
3. Read off the number to the right of the decimal point. Congratulations, you have picked a natural number uniformly at random.
The paradox Scott mentions is not a contradiction at all: you just get different probabilities if you have different information, is all. (Just like how the probability that a woman with two children has two boys is different if you know at least one child is a boy and if you know the eldest is a boy.) (Also, remember that with infinite sets a probability of 100% does not imply certainty, though the converse is true.)
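The two-children comparison in the parenthetical can be checked by enumeration (treating the four (eldest, youngest) sex pairs as equally likely):

```python
from itertools import product
from fractions import Fraction

families = list(product("BG", repeat=2))   # (eldest, youngest), all equally likely

def conditional(event, given):
    """P(event | given) by counting over the equally likely families."""
    pool = [f for f in families if given(f)]
    return Fraction(sum(event(f) for f in pool), len(pool))

two_boys = lambda f: f == ("B", "B")
assert conditional(two_boys, lambda f: "B" in f) == Fraction(1, 3)     # at least one boy
assert conditional(two_boys, lambda f: f[0] == "B") == Fraction(1, 2)  # eldest is a boy
```

Same event, different information, different probability, with no contradiction anywhere.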
But also: Occam's razor is a good epistemic tool to favour hypotheses with fewer postulates, but if you postulate it as an essential ingredient of reality then you incur an additional penalty as per Occam's razor! From a more "fundamental" viewpoint: meaningful measures arise from properties of systems, not the other way around. If every conceivable (mathematically consistent) universe exists, and probability calculations over the space of all universes should use a measure based on simplicity, there has to be a reason why that measure and not another is correct, and this requires either the known parts of this setup conspiring to bring this about or a new postulate at some point in the theory, which must be penalised by Occam's razor.
Your caveat in step 2 is fatal. Measure zero is defined already at the level of sigma algebras; the product of countably many sigma algebras still gives zero measure to the set that has a number with finitely many nonzero digits along *any* of the axes.
The procedure obviously must fail in this way; if it didn't, then you could just put the probability distribution it creates on the natural numbers directly and draw against that. But no such probability distribution exists.
It is not fatal. The procedure is merely an example of how this can be constructed, and it still returns an answer within a finite (albeit very long) time.
Really if I wanted to be extra annoying I could have just pointed out that nobody disputes it is possible to make a random uniform draw from the interval [0,1), which has infinite elements. All I did was reduce the size of the set we're drawing from (by _removing_ uncountably infinite elements) to make it countable – if it was possible before, it has to be possible afterwards. That the set ends up with measure zero just means you'd need a different measure to make calculations with it (like [0,1) having zero area doesn't mean you can't make a random uniform draw from it). In practice, most calculations you'd like to make will yield annoying zeros and ones, but you can still see things like the probability of getting an even number being 1/2, etc.
Depends on what you mean by "expect" there. Given any finite amount of time, the probability it'll happen is zero. However, if you literally never stop it is certain to happen eventually – and that will be after a finite amount of time.
>"it still returns an answer within a finite (albeit very long) time."
Are you saying that any injective function N->[0,1) must have a number in its range whose decimal representation terminates? That's clearly false, but I can't see any other way to interpret your claim here.
What is your precise definition for "it is possible to make a random uniform draw from S"? If it means the same thing as "there exists a uniform probability distribution on S", then your claim here is false.
Such that no element is more likely to be picked than any other.
I appreciate we are talking of events with zero probability, so the probability of individual elements in A is the same even if the probability density distribution exists and is non-uniform. However, we can still compare the likelihood of different zero-probability events. Usually you would do this resorting to the probability density distribution, but this doesn't cease to be possible just because this density doesn't exist. (In fact, just like the probability density distribution allows you to calculate probabilities for sets of events which individually all have zero probability, depending on the topology of B you may be able to define different types of generalised measures which allow you to do the same for other types of [infinite] subsets. I'm arguing in this case you could do something like this to compare the likelihood of getting evens vs odds, etc.)
> But “dinosaurs really existed, it wasn’t just the Devil planting fake fossils” is exactly equally unfalsifiable.
I’m not sure why not, is it the mention of the devil? Or that fossils aren’t enough. Maybe I’m not following any of this.
> Somebody must go tell the Queen!
I was going to suggest that was impossible but I realise the writer supposes that there’s an infinite number of her hanging about somewhere.
I also feel, as a practical man, that not only is it not proven that every possible mathematical object has to correspond to physical reality, it's impossible. Realise that infinity is merely a mathematical abstraction, and the number of universes we get, no matter how large, no matter that they are being created exponentially fast, is still finite.
Re: Boltzmann brains, how long is the conscious observer in the bag of conscious observer marbles we're drawing from to develop these probabilistic intuitions about mental entities supposed to exist? A whole life? Do people intuit that they are the same conscious observer now that they were as an infant? If you consider your own status as a conscious observer to exist fleetingly, does this disrupt either the mathematical or aesthetic issues around Boltzmann brains? What other entities go in the bag of conscious observers - is it weird that we aren't cockroaches when they outnumber us by so many orders of magnitude on planet Earth? What if an observer is the kind of thing you simply can't make countable? Why isn't the bag limited to the kinds of conscious observers that can reflect on Boltzmann brains, which could plausibly exclude them? I guess my intuitions around consciousness, as ill-formed as they are, are so different from the premises here that I'm left just posing questions.
I wasn’t saying this is a retread - I was saying this is original! I was just doing so in a way that some people could put down to the narcissism of small differences.
Anyway, there’s a reason why people said this is Plato or Descartes or Leibniz but no one said it’s Aristotle or Bacon or Hume or Carnap. This is very much a rationalist worldview, and not an empiricist one.
Sometimes the internet rationalists seem to be like empiricists, but this is one of the cases where the connections to historic philosophical rationalists come through.
I think the resemblance to philosophical rationalists is because the MUH implies you can have some sort of a priori knowledge about what is likely and what isn't by just looking at the simplicity distribution. Alas, in practice, this is computationally intractable.
I second the lambda calculus as the natural candidate; it contains nothing that isn't already implied by the existence of universes with the following properties:
- There are things
- There are ways in which things can change
- Sometimes such changes actually happen
That's my own reading of the three rules of the lambda calculus and others might disagree, but it seems fairly reasonable to me.
Arguably there are simpler alternatives like SKI combinators (or just SK, since I is redundant) but in my subjective view, they don't follow naturally from the fundamental properties of universes with time/space/matter the way the lambda calculus does.
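The aside that I is redundant given S and K can be checked directly, with the combinators as curried Python closures:

```python
# S, K, I as curried Python functions
I = lambda x: x
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

# S K K reduces to the identity:  S K K x  =  K x (K x)  =  x
assert S(K)(K)(42) == I(42) == 42
```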
Just out of curiosity, did ChatGPT get the answer correct here? (https://chatgpt.com/share/67ba1cd0-0760-8001-a069-59a18e6f4504) - in response to my question: "I am having trouble understanding one sentence from this passage: "Argument from comprehensibility: why is the universe so simple that we can understand it? Because in order for the set of all mathematical objects to be well-defined, we need a prior that favors simpler ones; therefore, the average conscious being exists in a universe close to the simplest one possible that can host conscious beings." Specifically, I don't understand why "Because in order for the set of all mathematical objects to be well-defined, we need a prior that favors simpler ones" is necessarily true, or what it means. Can you explain what the author most likely meant by "Because in order for the set of all mathematical objects to be well-defined, we need a prior that favors simpler ones", and why it would be a true statement?"
The ChatGPT link is login-gated; what does it say?
> Because in order for the set of all mathematical objects to be well-defined, we need a prior that favors simpler ones
This would be more clearly stated as:
> Because in order for *a probability distribution over* the set of all mathematical objects to be well-defined, we need a prior that favors simpler ones
This sentence is making a claim about how we should conceptualize “the set of all mathematical objects” in a way that allows it to be “well-defined,” and it argues that this requires a prior that favors simpler ones. Let’s break this down step by step.
Step 1: The Context—Mathematical Objects and the Universe
• The passage is discussing why the universe is comprehensible to us.
• The underlying assumption is that our universe is just one mathematical structure among many possible ones.
• If all possible mathematical structures exist in some sense (a view associated with the Mathematical Universe Hypothesis), then we need some way to organize and reason about this vast “set of all mathematical objects.”
Step 2: What Does It Mean for the Set of All Mathematical Objects to Be Well-Defined?
• A “well-defined” set is one where we can meaningfully talk about its members and apply probabilistic reasoning to them.
• The problem is that if we treat all mathematical objects as equally probable, we run into an issue: most mathematical objects (in a broad sense) are hugely complex and incomprehensible.
• There is an infinite space of highly intricate mathematical structures, many of which would be chaotic, uncomputable, or resistant to any form of general understanding.
Step 3: Why Do We Need a Prior That Favors Simpler Ones?
• If we do not favor simpler mathematical objects, then our probabilistic reasoning about which universe we are likely to find ourselves in becomes incoherent.
• The idea of a “prior” comes from Bayesian probability: it is a way of assigning initial likelihoods to different possibilities before seeing evidence.
• A prior that favors simpler mathematical objects means that simpler structures are assigned higher probabilities than more complex ones.
• This aligns with the principle of Occam’s Razor: given multiple possible explanations, the simpler one is more likely.
• Without such a bias toward simplicity, the “set of all mathematical objects” would be dominated by unfathomably complex structures, making any predictions or reasoning impossible.
Step 4: How Does This Connect to the Argument from Comprehensibility?
• The claim is that our ability to understand the universe is not a coincidence—it follows from the fact that conscious beings are more likely to exist in simpler mathematical structures.
• If complex universes were equally likely, we’d expect to find ourselves in an incomprehensible mess rather than a universe governed by elegant physical laws.
• By assuming a prior that favors simpler mathematical objects, we explain why we live in a universe that is mathematically structured in a way that allows comprehension.
Why Would This Be True?
1. Mathematical Objects Tend to Be Hierarchical: Simple structures are often building blocks for more complex ones. If we are drawing from a space of all possible structures, there is a natural ordering where simpler structures appear more frequently and are easier to define.
2. Computability and Predictability: More complex mathematical objects are often uncomputable or chaotic, making them poor candidates for a universe that supports stable laws of physics and conscious observers.
3. The Anthropic Principle: Conscious observers are unlikely to emerge in a highly complex, unstructured universe. A structured, understandable universe is a prerequisite for intelligent life.
Summary
The sentence is asserting that in order for the total “space” of all mathematical objects to be meaningfully organized in a way that allows probabilistic reasoning, we must use a prior that assigns higher probability to simpler structures. This is because if we didn’t, we’d expect to find ourselves in an incomprehensible and chaotic universe, which contradicts our actual experience. Thus, the comprehensibility of the universe is a result of the fact that simpler mathematical structures are favored in the underlying distribution of possible universes.
This confirms my stance that there's nothing here for religious believers to even argue with.
> Just as unicorns don’t exist (but we can say with confidence that they have one horn), so humans don’t have any special existence of the sort that requires Secret Fires (but we can say with confidence that they are conscious).
> Does the Mandelbrot set “exist” in a Platonic way? Did Iluvatar give it the Secret Fire? Can you run into it on your way to the grocery store? None of these seem like very meaningful questions to me, I don’t know.
So you have a lexical entity "existence" with certain semantic and mathematical properties, and those semantic and mathematical properties seem to interface with traditional theological arguments in some way. But this existence is defined in such a way that either unicorns do exist, or (considered another way) humans don't exist. But either way, humans and unicorns have the same degree of existence.
And whether or not something exists has nothing to do with whether you might encounter it. Actually being able to find the thing in the world is irrelevant to this definition of existence. The theory doesn't consider it meaningful.
So that means that this quality "existence" is defined, explicitly, intentionally, in such a way that no living human being could possibly care about it. As the kids these days say, "go off I guess": Go ahead and say God "exists" or doesn't "exist". By definition, it doesn't matter. If you think it does matter, you've mistaken this "existence" for the kind of existence that means you might run into something on the way to the grocery store. It's not that, and nobody is claiming that it is.
Put another way: If somebody finds themselves caring about the results of this theory, they've made a category error; they've forgotten the explicit definition of one of the key terms. This theory can't possibly tell you anything that might impact your life; and if you think it does, it's because you've confused its specialized term "exist" with the common English word "exist". But the two have nothing to do with each other.
- Tegmark expands the circle of existence. Now existence includes unicorns and humans and Boltzmann brains and 4d Mandelbrot demons. Plus everything that traditionally exists also does in Tegmark theory world.
- If God exists given the traditional definition of existence, He should trivially also exist given this expanded definition.
- The traditional proofs of God are rendered moot because the Tegmark theory provides alternate explanations besides God.
- If the proofs for God's existence are rendered moot within a theory that makes it *easier* for stuff to exist than before, then they must also be moot within the old commonplace theory.
(I wouldn't be convinced by this. But maybe something else along these same lines could save the Tegmark theory from complete theological irrelevance?)
Yeah, exactly. On the original Tegmark post, I think I talked some about how you could syncretize Christianity or Hinduism with Tegmark and leave them all pretty much unscathed. Probably fewer epicycles to syncretize it with Hinduism.
Which for me means that they're just talking past each other.
(And I think it's fine that Tegmark-God can't control the multiverse, since the multiverse doesn't exist anyway. Tegmark-God is all-powerful over everything that exists.)
> If somebody finds themselves caring about the results of this theory, they've made a category error; they've forgotten the explicit definition of one of the key terms.
If we're trying to understand MUH on its own terms, it's better to just ignore the concept of "existence" entirely. MUH is not so much an explanation as it is a dissolution, a demonstration that there is nothing to explain.
The problem is that before MUH and its ilk came along, people were already using "exist" in the equivocal way you describe, leading to a sense that there is some mystery about "Why does the universe exist [in the broader sense], when it could easily have not existed [in the narrower sense]?" This question is confused because it's using "exist" in two different senses. But Tegmark et al are forced to adopt this framing because it's what people were already using.
I recall a quote from Scott Aaronson (paraphrasing): "We need good philosophy, if for no other reason than to refute bad philosophy!"
I think we completely agree. Yes, there have been many confused theological proofs. Yes, Tegmark dissolves the question of existence. You probably understand what I'm trying to say just fine, but I'll explain again, because maybe I'll do a better job this time.
Imagine that somebody on the Internet tells me that a safe and effective Covid vaccine doesn't exist. So I tell this person that, actually, according to Professor Tegmark, it's best to ignore the question of existence entirely. And then I set out to demonstrate that everything exists and nothing exists, and unicorns exist equally with rhinoceroses, and the question of whether you might meet something on the way to the grocery store isn't relevant.
I'm obviously failing to address this person's claim. "In some far corner of the abstract algebraic multiverse, a safe and effective Covid vaccine exists," is not a helpful answer.
I'm saying that applying Tegmark to theology fails in the same way. For religious believers, God exists in the same way the Covid vaccine exists (plus also other ways). He's an important normal part of everyday life. So anything that presupposes Tegmark is just talking past us.
(I should give a disclaimer: mathematical theorems are eternal and absolute, and God is also eternal and absolute. For this reason, there's a superficial similarity between Tegmark's theory and theology. The philosophy that merges mathematical theorems with God is called Platonism, which is why we've got a thread full of comments with people going "this is just Plato dude" and annoying Scott. But most religions reject that equivalency as too simplistic in some important way, as far as I can tell.)
"Unicorn" is shorthand for "something that doesn't exist". Perhaps a more philosophically rigorous example would be Nick Fury as portrayed in The Avengers. For Tegmark, Nick Fury and Joe Biden have exactly the same degree of existence. They're both pure abstract mathematical entities. Within the Tegmark theory, it doesn't even make sense to talk about Joe Biden existing and Nick Fury not existing.
> Falsifiability doesn’t just break down in weird situations outside the observable universe. It breaks down in every real world problem! It’s true that “there’s no such thing as dinosaurs, the Devil just planted fake fossils” isn’t falsifiable. But “dinosaurs really existed, it wasn’t just the Devil planting fake fossils” is exactly equally unfalsifiable. It’s a double-edged sword!
No, it isn't, because one edge actually cuts reality at its joints, and the other one is dull as a spoon. It is true that you can construct any number of hypotheses, all of which rely on some set of unfalsifiable axioms. And yes, you are free to pick whichever set of axioms you choose (by definition of axioms). But it just so happens that (at least) one set enables you to build predictive models of reality, and most of the others do not. If you assume that a nigh-omnipotent Devil planted dinosaur bones, that's it as far as the bones are concerned. If you assume that there exist natural processes that led to those bones being there, you can predict where other bones might be found, as well as many other things; and those predictions will ultimately enable you to control (some aspect of) reality at will.
The obvious counter-argument is, "All right, but what if I just adopt some version of solipsism? Then I can imagine whatever I want in my head and control reality that way, assuming that what I mean by 'reality' is the simulation in my head". The short answer is that you lack this ability -- if you disbelieve me, try holding your breath as long as you can by an effort of will, while imagining that you do not require oxygen. The long answer is that all of your distant ancestors, all the way down to primitive clumps of primordial cells, picked the sharp edge of the sword; the ones who picked the dull edge all died. And if you choose to retreat into the navel-gazing world of arbitrary abstract constructs, you'll get outcompeted just like they did, and your brilliant-sounding insights will be lost with you.
Even assuming for the sake of argument that a nigh-omnipotent devil planted dinosaur bones in order to mislead us, there's a case to be made that we still ought to thoroughly study them https://www.schlockmercenary.com/2012-02-13 since a lie well-crafted, by someone who knew the truth themselves, often contains clues toward that truth and/or the liar's broader agenda.
Every psychophysical harmony argument defender is doing the “You fell victim to one of the classic blunders” meme in their heads reading about how evolution supposedly debunks it.
"But since the early years are finite and the later years (potentially) infinite, there will be more Boltzmann brains than normal life."
No way. If the later years have enough usable energy to do enough computation to produce a Boltzmann brain, then they have enough usable energy to make a self-replicator. The idea that the density of spontaneously-formed Boltzmann brains in the Multiverse is greater than the density of instances of the spontaneous formation of a self-replicator is utterly, beyond astronomically, absurd. And a timeline containing a single self-replicator surely has an also-beyond-astronomically greater expected number of consciousness-moments of evolved brains than the expected number of Boltzmann-brain consciousness-moments in a timeline without life.
There almost definitely are more Boltzmann self-replicators produced than Boltzmann brains, but they die just as soon as the Boltzmann brains because there isn't enough free energy for them to actually function as self-replicators for long. (Specifically, regardless of how long they've lived so far, they have an overwhelming probability of running out of available external energy and switching from "growing" to "dying" at every subsequent moment).
i still think it's pointless, and even proofs of it or refutations of those proofs are too.
no amount of logical proof for God can make up for lack of experience of Him. this is the catholic fallacy imo; no amount of the priest telling you something can make you into a real believer. it's not "i am persuaded" as opposed to the road to Damascus.
the flip side is even with an ironclad mental model of atheism, if you have an experience that would be described as conversion-level, you would either need to call yourself insane/unwell and unfit to perceive reality, or the proof would crumble.
i feel like people who don't get religion think it starts with the theology, the coherent system and dogmas that exist via proof and authority. theology, though, is people trying to make sense of the founding experiences. even believers forget the "making sense of" part and how tenuous it is: that we cannot define the mind of God.
to approach religion is to accept the "mystery of experience" through faith, and often individually. The modern mind wants control, to the point that it turns men into animal-machines to provide that ironclad theology. no mystery allowed. that's sort of inimical to religion; no system can ever explain why God makes a bush burn.
(also religious people need to remember it too; they were the ones who overemphasized theology in the first place. Word of Faith people come to mind, where they think confession or the correct language in a prayer compels God to act more: "speak those things that are not as though they are.")
it's more pro. faith and experience are the basis of religion; logical proofs aren't really of as much value, because that's not how we experience it, any more than you start with a mathematical proof that a specific person will become your friend.
it can be anti in the sense that believers forget that you cannot eliminate faith and intellectually guarantee it: religion is always seeing through a glass darkly, and i think people have to get used to that.
I feel confused by how rarely "anthropic principle" seems to be said in this discussion, given that I assume everyone involved knows the concept. Basically all the load-bearing parts here are independent of Tegmark's Mathematical Multiverse specifically! They're the logical conclusion of the anthropic principle, given a multiverse's existence! Existence, complexity, and the existence of fine-tuning all follow directly. Comprehensibility follows too - intelligence is based on pattern recognition, and pattern recognition evolved because it's useful, because the world on scales anywhere near human is highly predictable. If you have a sea of chaos, you're not going to get Darwinian evolution, even if at any given moment some sort of life is possible.
That said, I think kzhou7 has a point: the anthropic principle is, in general, surprisingly useless. At an absolute minimum it's insufficient: saying "X is true because it's necessary for humans to exist" is *not a scientific explanation*, and digging into why X is physically true is valuable, in terms of how scientific progress has proceeded. Of course, "God did it" is also not a scientific explanation in the slightest.
Since I went from atheist to Catholic with "moral realism" as a major factor, I wanted to take a swing at this:
"I find the moral knowledge argument ridiculous, because it posits that morality must have some objective existence beyond the evolutionary history of why humans believe in it, then acts flabbergasted that the version that evolved in humans so closely matches the objectively-existing one."
They don't match that well! And it's clear they don't have to. One of (a number) of strong moments for me on this front was a college evopsych class where we learned it's not uncommon for gorillas to court by infanticide. (~a fifth of gorilla infant mortality in some studies: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0078256)
Evolution does not necessarily get off of local optima. It looks for stable, not absolutely optimal. And there's no guarantee anyway that what is right "wins" in evolution's measure.
I had *hoped* that the good and the evolutionarily sustainable might dovetail better, but it's pretty clear there are a lot of ways to organize a society (human or otherwise) that work *well enough* from evolution's point of view and you need a different tool to figure out which align with the Good.
And... why do you think there's a moral optimum to be pursued in the first place? And more importantly, even if it does exist, why do you think the Catholic church has any idea of what that moral optimum is, if humans were never even optimized for morality in the first place? You're just going to take their word that they speak the will of God?
I don't really understand Scott's comment there, because whatever "objective morality" is, it is clear a lot of human behavior or even belief doesn't match it. But unlike gorillas, human beings can develop abstract ethical principles (like the golden rule, human rights), which is analogous to developing abstractions like geometry. If geometry has some objective ultimate truth value, these moral abstractions can be thought of as having similar value. Atheists don't see knowledge of the one to be any more evidence of God's existence than the other.
Ok. We can think of optima, or ideals, that are better than the de facto ethics. That doesn't mean they are pre-existing entities in Plato's heaven; and it could mean that the process of imagining them is the first step in constructing them.
But aren't all the standard arguments for the existence of God so facially invalid that equally dubious mathematical speculations are unnecessary? "Why is there something rather than nothing?" That's an interesting question to which we don't know the answer. But "Because God chose for there to be something" wouldn't even be an answer, because God is already something. Fine tuning: again, it's a mystery, but thinking it demonstrates the existence of God is classic Sherlock Holmes fallacy. Tegmark's idea is something you can pull out as an example for someone who can't grasp the idea that an observation may have explanations they haven't thought of, but it's not necessary. And so on.
Oddly, I say all this as someone who at least tries to believe in God. With some success, but a great deal of awareness of the philosophical weakness of my position.
> "Why is there something rather than nothing?" That's an interesting question to which we don't know the answer.
At least you admit we don't know. "God is already something" is part of the consideration; the answer is that creation is a gratuitous act of love.
Analogies are the divil to use, because someone always takes the wrong end of the stick, and one should *always* remember "not this, not this, is Thou". But let's take all the people on here arguing over "oh no if UBI came in/if AI took our jerbs, that would be disastrous".
Why? Because people need meaning in their lives and they need work to have meaning. A lot of people would reply "to hell with that, I hate my job and would quit in the morning if I could live on a basic income and do what I liked". But some people really do need work to exist to be able to experience the fullness of living.
Work in itself is not the end, the experience is. By analogy, creating a universe is not in itself the end, having sapient beings with which to interact in the economy of love is the end.
There's people who don't want there to be a God because they don't want an absolute authority; they want to be their own god and make their own choices and create their own values. This can be phrased very stirringly and with great show of principle, and maybe even principle at the base of it. But also some of it is "you're not the boss of me, nobody is". I do wonder if they'd accept a God who shares all their values to the nth degree and makes no demands on them. Possibly not even then; that would still be too reminiscent of having your parents around when you're an adult and feeling they are trying to tell you what to do, even if their advice is meant well.
On Boltzmann brains, I think I just realized a contradiction. In short, I think that brains that wonder whether there are Boltzmann brains should overwhelmingly be those which exist in physical universes where the vast majority of brains having that thought are not Boltzmann brains. So, in the mathematical universe, the set of simplicity-weighted universes should overwhelmingly favor actual minds over Boltzmann brains.
To explain: we first posit that the laws of physics we observe, which are consistent and coherent and imply the possibility of Boltzmann brains, are approximately correct. Then it seems like we start by assuming that the Boltzmann brains which would exist would have randomly hallucinated the same class of physical laws which support Boltzmann brains. Why?
Only an insanely infinitesimal minority of Boltzmann brains should do that! That is, conditioning on Boltzmann brains existing and having experiences such that they can consider laws of physics that apply to them, I would think that only an infinitesimal subset of those brains should think that Boltzmann brains are possible. (And that's after conditioning on the infinitesimal subset that have coherent thoughts at all; the vast majority of random fluctuations giving rise to brain-like states should be incoherent. Mostly jokingly: perhaps you're only likely to be a Boltzmann brain when you're first waking up and can't think coherently at all?)
You could equally well say that the actual existence of physical objects seems implausible due to Heisenberg's uncertainty principle. Just because something can't be fully known doesn't mean it doesn't exist. (EDIT: the above comment was edited to remove "because of Goedel's Incompleteness Theorem". I don't claim this is an exhaustive answer to all reasons to doubt the actual existence of mathematical objects.)
The actual existence of mathematical objects is implausible because we have no knowledge or experience of them actually existing. You cannot drop one on your foot!
We experience physical objects all the time; I'm tapping on one at this precise moment.
Jürgen Schmidhuber’s ideas about computational universes address a lot of your points. Here’s the core framework:
1. The Great Programmer Theory
• All possible programs for all possible Universal Turing Machines exist mathematically.
• Every program corresponds to a universe.
• The fundamental “stuff” is program space, rather than physical matter.
• A program can contain nested subprograms that can act as observers or even other universes.
2. The Algorithmic Probability Distribution
• Each universe (program) is weighted by 2^-L, where L is its length in bits.
• Simpler programs (shorter code) naturally have higher probability.
• This distribution explains “why these laws?”—we observe simple laws because simple programs are most likely.
• Since you can sum over programs of length L with weights proportional to 2^{-L}, the total measure converges—unlike a naive “infinite universes” scenario where probabilities become undefined.
3. Observer Moments
• Conscious experience unfolds within computations.
• An observer’s “moment” is just a particular state within some program’s execution.
• We observe computable universes because only those computations that generate observers can yield observation-states.
• Our observed reality is weighted by the algorithmic probability of the programs that produce it.
4. Speed Prior
• Schmidhuber refines the basic simplicity weighting by also penalizing long runtimes.
• Universes (programs) that take “too long” or produce mostly random output get exponentially down-weighted.
• This resolves concerns about “freak” observers (e.g., Boltzmann Brains): a gargantuan runtime producing them randomly is far less likely than a neat, efficient universe that quickly evolves stable observers. Hence, we should expect our experience to involve relatively simple, lawlike physics running on “fast” code.
The way you describe time when talking about brains in the 'past' or 'future' also strikes me as wrong. Time is only a concept inside some of the computations. Outside of it, every program already exists and has already 'run'.
This seems to me by far the most likely candidate for the fundamental nature of reality.
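A toy numerical sketch of the 2^-L weighting in point 2 (my own illustration; the "programs" here are just the prefix-free set {0, 10, 110, ...}, one per length):

```python
# Each "program" of length L gets weight 2^-L. Because the set is
# prefix-free, the total measure converges (here to 1) instead of
# diverging, and shorter programs dominate the distribution.
def algorithmic_prior(n_programs):
    programs = ["1" * k + "0" for k in range(n_programs)]
    return {p: 2.0 ** -len(p) for p in programs}

prior = algorithmic_prior(30)
print(round(sum(prior.values()), 6))  # -> 1.0 (the measure converges)
print(prior["0"], prior["10"])        # -> 0.5 0.25 (simplicity dominates)
```

This is why a naive "all universes equally likely" setup fails while the weighted one doesn't: an unweighted sum over infinitely many programs has no well-defined total.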
> But also, you can design a perverse programming language where complex concepts are simple, and simple concepts are complex. You can design a compression scheme where the entirety of the Harry Potter universe is represented by the bit ‘1’. Now the Harry Potter universe is the simplest thing in existence and we should expect most observers to live there.
Note that on a scale from zero to infinity, Harry Potter is still closer to zero than to infinity. So even this perverse language *on average* prefers simple universes over complex ones.
Wrote up half a response but realized chatgpt can do it better than me:
The compiler (or any description language used for encoding a program) is inherently included in Kolmogorov complexity.
The Kolmogorov complexity of a string x, denoted K(x), is the length of the shortest program that produces x when run on a fixed universal Turing machine U. However, the complexity measure depends on the choice of the universal machine, such as a specific programming language or compiler.
The Role of the Compiler:
1. Dependence on the Universal Turing Machine: The Kolmogorov complexity is defined relative to a specific universal machine. If a different universal machine (e.g., a different compiler or interpreter) is used, the complexity might differ by at most a constant c, which accounts for the complexity of translating between machines.
2. Compiler as a Transformation: If we write a program in a high-level language and then compile it into machine code, the compiler serves as a transformation step. The compiled program is an encoded version of the original source, and the complexity of the compiled version would be at most the sum of the original source complexity plus the complexity of the compiler.
3. Invariant Up to an Additive Constant: The choice of a different universal machine (such as a different compiler) only changes Kolmogorov complexity by an additive constant c, since any two universal Turing machines can simulate each other with at most a fixed additional overhead.
Conclusion:
Yes, the compiler is part of Kolmogorov complexity indirectly through the choice of the universal machine. However, since Kolmogorov complexity is machine-dependent only up to an additive constant, this does not significantly affect asymptotic complexity analysis.
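True Kolmogorov complexity is uncomputable, but a rough sketch of the machine-dependence point can be made with off-the-shelf compressors standing in for different universal machines (zlib and bz2 are my choice here, purely for illustration):

```python
import bz2
import random
import zlib

# Approximate K(x) by compressed length. Absolute numbers differ
# between the two "description languages", but both rank the regular
# string far below the incompressible one.
def approx_k(data, compressor):
    return len(compressor.compress(data))

random.seed(0)
regular = b"ab" * 500                                  # highly patterned
noisy = bytes(random.getrandbits(8) for _ in range(1000))  # incompressible

for comp in (zlib, bz2):
    assert approx_k(regular, comp) < approx_k(noisy, comp)
print("both compressors agree on the ranking")
```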
I wanted to reply to @Melvin but couldn't reach the send button.
I like his stance that Scott's discussion with Bentham's Bulldog, even if akin to a discussion about angels, at least focuses on the interesting stuff about angels instead of on how many fit on a pinhead. But I think it's bad to argue with God-provers on their own point at all.
Because their point isn't just wrong. Either their whole proof is nothing but them lying to your face (they don't believe it themselves), or it is them being terribly confused ontologically.
And both can have dire consequences for everybody else.
I think one should not argue with them about how else, other than by being created, the universe could exist, but argue that it cannot have a cause for existing at all.
Consequently, they cannot claim that whatever they think is right or wrong is backed by divine authority. There are authorities, but no divine ones. The dictator -- or the mightiest being of the universe, itself a part of it -- might share your values, but still everyone has to find their own.
> But we know that it’s one-in-ten-billion and not one-in-two, because in fact you possess zero of the ten superlatives we mentioned earlier, and that would be a 1-in-1000 coincidence if you had a 50-50 chance of having each. So it seems like the universe must be finite rather than infinite in this particular way.
I don't think this argument works out. Consider the "world's tallest person" property, which you almost certainly don't possess. This is not the only property related to height; we might also mention the properties "world's second tallest person", "world's third tallest person" ... "world's shortest person". If there are 10 billion people in the world, then clearly there are also 10 billion such properties for the measure "height", and every person in the world will uniquely possess exactly one of them. Since the properties are mutually exclusive and equally (un)likely, I don't think you can privilege the "tallest person" property specifically and use *that* as an argument.
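A quick simulation of that counterpoint (toy population size of my own choosing): every person holds exactly one height-rank property, so "tallest" is no likelier, a priori, than any other specific rank.

```python
import random

# Assign random heights to N people and rank them tallest-first.
random.seed(0)
N = 10_000
heights = {person: random.random() for person in range(N)}
ranking = sorted(heights, key=heights.get, reverse=True)
rank_of = {person: rank for rank, person in enumerate(ranking)}

# Each of the N rank properties is held by exactly one person, so any
# given person is "the tallest" with probability 1/N -- the same as
# for holding any other specific rank.
assert sorted(rank_of.values()) == list(range(N))
print(rank_of[0] == 0)  # almost certainly False for large N
```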
As I've gotten to be an old man, I've come to the realization that it's useful to just remain agnostic about some questions. To just say I don't know the truth about this. And then the important question becomes is this a useful idea (rather than a true one.) In this class I would put, "Does God exist?" and "Do photons exist?" IDK, but I find both concepts useful.
“We live in a possible world. How come we have real conscious experiences? Because the schematic of Possible World #13348 says that the beings in it have real conscious experiences“
Isn’t this the ontological argument for God’s existence, in which one of the stipulations of the concept of God is that said concept exists? (One possible rebuttal to it as a proof being that at least intra-universe one can’t magic unicorns into existence by tacking “and they exist!” onto the definition.)
The tegmark multiverse is a big place. Somewhere out there is a godlike being. Somewhere out there are unicorns. Somewhere out there is a giant flying spaghetti monster. The question is whether or not they exist in our universe.
Well, the quantum/inflationary multiverse is still pretty big. So it's more about whether they exist in our reality branch and past light cone. Which they probably don't.
Will they exist in our future light cone? That depends on what any future humans/AIs decide to create.
As for the discussion of psychophysical harmony, either I'm missing something or I think you are. It's a topic I find intensely fascinating so I'd be very glad for any correction!
The counterfactual of having conflicting psychic and physical events being strange is exactly the point! If you believe in epiphenomenalism, where mental events like qualia have zero causal influence, a misalignment like that couldn't have any effect on evolutionary pressures.
If you're not an epiphenomenalist, e.g, you think qualia do influence your physical actions, then it's trivial to see why evolutionary pressures would select for them, and I would be very surprised if that was the argument. (I don't think I understand your read of the argument, based on your example of the person expressing bewilderment at their inability to cast their hand into fire. The conflict wouldn't allow them to express or even think that thought, any more than it would allow them to repeatedly cast their hand into fire!)
"Lucian Lavoie writes: I think the biggest flaw with Tegmark's argument is that consciousness just doesn't exist."
Unfortunately for Lucian, consciousness clearly does exist. We don't completely understand it, but we experience it; it does exist.
I agree that Tegmark's hypothesis can be reframed to not depend on consciousness.
"... consciousness is the primary challenge to any materialist conception of the universe and that I don’t understand it."
Assuming what you don't understand is consciousness, then that (reasonable) lack of understanding is the very reason that consciousness does not challenge materialist conceptions.
All conceptions of the universe (materialist or otherwise) are incomplete. That's why we do science: to fill in some of that incompleteness. If consciousness happens to fall into the realm of "not completely understood currently" then that is just one of many things still to be worked out. It does not challenge materialism especially.
"What we really do when debating hypotheses isn’t wait to see which ones will be falsified, it’s comparing simplicity - Occam’s Razor. Which is more likely - that OJ killed his wife? Or that some other [explanation is true]?"
Okay; is "simplicity" a euphemism for "more likely"?
To me, it looks like this Mathematical Universe Theory is what happens when a Materialist bites the Dualist’s bullet.
In the typical “hard problem of consciousness” debate the Dualist will say that physical matter can only correlate with consciousness, but it can’t fully describe consciousness itself.
If a particular arrangement of matter results in the thought “that is a red ball”
Then a computer (or LLM) which read the details of the arrangement would *know* what it’s like to have the thought “that is a red ball.”
If it didn’t, then that means the details of the matter-arrangement do not fully describe consciousness, since the computer has read all the details.
So either there is something else to consciousness, that is not encapsulated by the physical, or the computer, by just reading this description, has now gained the ability to be conscious itself since it can now fully “know what it is like to see a red ball” in the fullest sense.
So it sounds like, if you bite this bullet, you accept that, yes, the computer has now gained consciousness. And similarly, if you write down the description on paper, the paper has gained consciousness… since the arrangement of physical matter ***is*** the thought,
then any sufficiently detailed description of the arrangement is also a perfect description of the thought.
And since, as the dualist will say, it’s impossible to fully describe a conscious thought to a non-conscious being, that means that any sufficiently detailed description of the arrangement is itself consciousness.
And once you go that far, you don’t even need the matter or the computer or the paper to begin with.
The description of the arrangement exists as a mathematical object. And that’s where consciousness comes from.
"Simplicity weighting" might come from the nature of substructures. Each mathematical object appears not once, but many times in the landscape, because they show up as substructures inside larger objects.
Most mathematical objects are going to be kind of like random noise. Inside a large random noise object, you'll find smaller ordered objects at random, just as if you have an infinite random sequence of 0s and 1s, you can find finite sequences at random within it. By "most" reasonable ways of ordering the multiverse, simpler substructures will be denser in the landscape than more complex ones.
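A quick Python sketch of the density claim above (my own toy example): in a long random bit string, a specific pattern of length k occurs with density about 2**-k, so shorter (simpler) substructures really are denser than longer (more complex) ones.

```python
import random

random.seed(0)
# A long "random noise object": one million random bits.
bits = ''.join(random.choice('01') for _ in range(1_000_000))

def density(pattern, s):
    # Fraction of positions at which the pattern occurs (overlaps allowed).
    hits = sum(1 for i in range(len(s) - len(pattern) + 1)
               if s.startswith(pattern, i))
    return hits / len(s)

# Empirical density vs. the 2**-k prediction, for patterns of growing length.
for k in (2, 4, 8, 12):
    pattern = '01' * (k // 2)
    print(k, round(density(pattern, bits), 5), round(2 ** -k, 5))
```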
For finding the probability of finding yourself in one world vs. another, you don't need an absolute complexity measure, only a relative one. So maybe it doesn't matter what programming language you use?
Finitely complex substructures infinitely dominate over infinitely complex ones, so you don't need to find the measure of weird sets within the landscape, either, as your probability of finding yourself there is zero.
The simpler the universe, the more often it appears, but more conscious observer-moments give you more chances of finding yourself there. So there's a tradeoff: adding bits of information to your universe is worth it if those bits more than double the amount of experience (whatever that means). Tegmark's hypothesis seems to make two retrodictions:
1. That the universe will have simple laws. This is likely satisfied if the trend toward simpler laws continues in physics, and if our universe has a simple initial condition (an orderly Big Bang). Tegmark even suggested that our universe might contain almost no information.
2. That these simple laws will produce a large amount of experience (large number of observer-moments). This is satisfied if you take the Many Worlds interpretation.
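The tradeoff described above can be made concrete with a hypothetical toy model (the function and numbers are mine, purely illustrative): give a universe specified by k bits of law a prior weight of 2**-k, and let its anthropic measure be that weight times the observer-moments it produces.

```python
# Toy model: prior weight 2**-k, times observer-moments produced.
def anthropic_measure(k_bits, observer_moments):
    return 2.0 ** -k_bits * observer_moments

base = anthropic_measure(100, 1_000)  # a universe with 100 bits of law

# One extra bit halves the prior weight, so it pays for itself only if
# it more than doubles the amount of experience.
assert anthropic_measure(101, 2_500) > base   # 2.5x experience: worth it
assert anthropic_measure(101, 1_500) < base   # 1.5x experience: not worth it
```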
I observe my own existence, and find myself to be an observer-moment within a sequence of observer-moments, which are collaborating with each other across time to form this thought, and we are in turn embedded within a simple structure with high measure on the landscape that also produces a large amount of such experience, and that is in turn a substructure of a large and likely noisy mathematical object.
But wait! There's no distribution over the whole landscape, so there being a specific top-level world leads to paradoxes of infinity. You can't observe it, either; each conscious substructure appears many times throughout the landscape, and there is self-locating uncertainty about which specific one you are. Need there be an answer to the question, "which, of these many identical copies of my experience, is the one that's me right now?" If we can't select a top-level world, perhaps there is no answer to which top-level world we're actually in. Perhaps we simply are the experience, not a specific instance of it?
The notion of an exact surrounding world is another possible sacred fire that could potentially be banished. You don't find yourself to be a world, but an experience, and if there are many possible mathematical objects that contain that exact experience, why must one of them be right and the others wrong? Think of the world around you as being in a fuzzy state, with a probability distribution over the possibilities. Your current experience exists within different worlds to different degrees.
It's also probably inaccurate to talk of "moments" of experience, because consciousness requires time to compute and isn't instantaneous. I'm not sure how to reconcile this with the intuition that I find myself located in time and not spread out. I envision a sort of branching path-identity through these fuzzy probability distributions over worlds, with different probabilities of going down different futures. Even just with quantum branching in one universe, there are worlds where cosmic rays hit you in just the right way to change your experience into almost any other possible mind, so in that sense path-identity makes you of one being with everyone else, just in a distant and unlikely way. The branching paths of conscious experience through the multiverse would be a single fully connected tree, like Yggdrasil.
You could draw a path such that the most likely world that surrounds it at the start is different from the most likely world that surrounds it at the end; something like the world-hopping in Permutation City. However, you don't go down a single path, but all of them, so it's not really meaningful to draw one line.
This is a great conversation that I find fascinating as a graduate student in theology who has studied the classical arguments for God and the counter-arguments to some degree. The Mathematical Universe Hypothesis (MUH) was new to me — although I recognize some of the underlying ideas.
Summing up, it seems that these are the strongest rebuttals to this line of reasoning:
1) Tegmark’s Mathematical Universe Hypothesis isn’t really an argument: It’s a tautology. He assumes: All mathematical structures exist. Our universe is a mathematical structure. Therefore, our universe exists because it is a mathematical structure. But this is circular reasoning. Tegmark isn’t proving we live in a mathematical structure; he’s defining reality in a way that makes his conclusion inevitable.
2) More fundamentally, Tegmark never justifies why all mathematical objects must exist physically. He treats mathematical structures as inherently real without evidence. There’s zero experimental support for this claim.
3) The reliance on cellular automata (Game of Life) as an explanation also falls apart upon closer scrutiny. Game of Life still requires a designer to set its rules. Who set the rules of physics? Plus, no one has ever observed a real-world system starting without a cause. Also, Game of Life never creates biological complexity or self-replicating organisms.
Again, we face the problem that the MUH doesn’t explain why mathematical laws exist, why our universe follows them, or why consciousness emerges. It simply assumes everything exists somewhere and calls that an explanation. Which leads to ...
4) The MUH suffers from the same problem as “God did it.” It stops at a brute fact rather than seeking deeper explanations. Instead of saying, “The universe exists because God willed it,” the MUH just says, “The universe exists because math.” This could be dubbed a "Math of the Gaps" fallacy.
Theists are often accused of using "God of the Gaps" reasoning; i.e., invoking God to fill in gaps in scientific knowledge. The MUH does the same thing with mathematics, treating it as an ultimate explanation, even though it doesn’t actually explain anything.
Commenters have also identified some other problems for the MUH worth recapping here:
5) One major problem is the Boltzmann Brains issue. If all possible mathematical universes exist, most observers should be Boltzmann Brains, or randomly assembled conscious minds floating in chaos. A fine-tuned universe like ours is far less likely than a universe filled with chaotic, fluctuating observers. Since we do not experience chaotic, incoherent consciousness, this is evidence against the MUH and in favor of theism.
One proposed counter to this is claiming fine-tuned universes should still outnumber Boltzmann Brain scenarios. But this is based on made-up probability numbers. Even if this were right, theism doesn’t have this problem at all. A designed universe would naturally contain structured, embodied consciousness.
6) Another problem aptly raised is the issue of induction. If every possible universe exists, then there are infinitely many where the laws of physics change tomorrow. Why should we trust that tomorrow will resemble today? Theism offers an orderly, stable universe because it assumes a rational designer. The MUH does not. The attempt to solve this by introducing a simplicity-weighting system is an ad hoc assumption with no real justification. On that note...
7) Tegmark claims simpler universes should be more common. But what determines which universes are “simpler”? What enforces this weighting? If all mathematical objects exist, why should they follow a probability distribution at all? Tegmark adds this assumption only to prevent his theory from collapsing, but this assumption is just as unproven as the rest of the MUH.
Ultimately, if the question is which is the stronger explanation—theism or the MUH—then I claim theism "wins."
Theism assumes that an intelligent creator exists and established the universe. The MUH assumes that all mathematical objects exist and generate all possible realities. But while theism at least explains order, laws, and consciousness, the MUH assumes universes "just exist" without any justification. In other words, the MUH assumes more things than theism does while explaining less. If you prefer fewer assumptions, theism is the stronger argument.
In summary, it's important to recognize that the MUH isn’t a scientific theory. It’s a speculative philosophical model that assumes what it needs to prove.
Which makes me wonder: Would it be fair to sum up this debate as the one in which theists asserts divine aseity and Tegmark and his supporters assert mathematical aseity and everyone just reasons from there?
Based on his comments in this follow-up, I think Scott's claim is intentionally slightly weaker than what you're arguing against (I won't comment on what Tegmark believes).
You seem to think Scott/Tegmark is arguing from first principles, assuming things such as mathematical realism and simplicity-weighted probability.
I think instead he's just trying to break the theist's modus ponens which says: if the conditions of the universe are well-tuned for life, they must have been set by a creator god.
He's just identifying a plausible alternative that *could* result in well-tuned conditions, thus a creator god does not necessarily follow from the presence of well-tuned conditions.
I think that's fine as far as it goes, but in order to make this "slightly weaker" claim, Scott also has to weaken the arguments. That is to say, these classical theistic arguments aren't well represented by the "modus ponens" formulations presented. At least: Those are not the most rigorous forms of the arguments.
Take the one you have raised (fine-tuning). Theists don’t argue that fine-tuning logically necessitates a designer. Instead, they argue that the best explanation of fine-tuning is design, meaning it's an inference to the best explanation (IBE) rather than a strict logical deduction.
By treating the theistic argument as if it were a rigid modus ponens rather than an abductive argument, one misrepresents it and makes it easier to "defeat." It's not quite straw-manning, but it is approaching that sort of argumentation.
N.B. To be clearer: An IBE doesn't claim logical necessity (i.e., no other possibility) but rather explanatory superiority (i.e., the best available explanation). Scott’s argument, by merely offering an alternative, doesn’t refute theism. It only suggests another possibility, which isn't the same thing. If that is all he was attempting, then that's fine. But then I'm not sure why he would claim: "Tegmark’s Mathematical Universe Defeats Most Arguments For God’s Existence." Maybe that was just click-bait? (It got me to read it!)
I don't think you need to say that mathematical objects "exist physically" for it to work.
You just need to say that the sub-mathematical objects that experience consciousness (us) perceive the entire mathematical set that they are a part of (the universe) as though it existed physically.
But that doesn’t seem to be Tegmark’s position (AFAICT). Tegmark asserts that mathematical existence IS physical existence, meaning that all mathematical structures that can host what you call 'conscious sub-mathematical objects' are real in the same way our universe is real. There is no "as though" about it; everything that can be described mathematically exists in the same ontological sense. Mathematical objects that describe physical laws are not just useful abstractions; they are identical to the physical world.
Your view, by contrast, seems to suggest that physicality is just a phenomenological illusion, something we merely *perceive* rather than something that *is.* But this shifts the argument toward antirealism (denying the independent existence of reality) or even solipsism (doubting whether anything outside one’s mind exists at all). Are you comfortable with those implications?
Also, what do you mean by "for it to work"? As a defeater for these classical arguments for God’s existence? If so, I’d be interested to see you develop that counter-argument more fully. Keep in mind that the classical arguments are, again, inferences to the best explanation (IBE), which are a form of abductive reasoning. To defeat them, you’d need to show that phenomenological illusion or some expression of solipsism is a better (simpler, more coherent, more explanatory) hypothesis. That seems like a hard case to make.
So unlike in The Matrix where you can take the "red pill" and wake up in a flesh-and-blood world ... it turns out there is no flesh-and-blood world? The Matrix is all there is, and the universe is, at base, something like that streaming code?
Classically people need to assert the existence of both the physical world and the mathematical structure of the physical world.
Saying that only the mathematical structure exists is self-evidently simpler, since it assumes the existence of one less thing.
The only question is can it explain the evidence, namely the evidence of consciousness. Like can mere mathematical objects be conscious?
Sure that does seem surprising. But as a dualist myself it doesn't seem that much more surprising than the claim that physical matter could be conscious.
But this assumes that mathematical structures are ontologically independent rather than merely descriptive. Sure, IF you assume that, then it's parsimonious to apply Occam’s Razor and conclude that a purely mathematical universe is "simpler." But, again, you have to assume that, which means you're begging the question against realism rather than proving the assumption itself.
Also, just a reminder that simplicity is only one criterion in an IBE. Explanatory power and coherence matter just as much. So even if we grant that ontologically independent mathematical structures are simpler on your view, we still have to answer crucial questions (like the one you yourself raised about consciousness) in a coherent and satisfying way. And that, it seems, isn't quite so ... simple.
N.B. I should also point out that if you accept dualism (mind and matter as distinct), then your view is no longer Tegmarkian, since the MUH is explicitly monist (holding that only mathematical structures exist). But I think you have already said you aren't defending Tegmark's view but rather a modified version of your own.
Okay, because I was initially pretty sceptical of Tegmark's theory and commented in line with that, I feel it's my responsibility to note that now I believe it's much more likely to be true. Here's an outline of what changed my mind (not arguments, just actions): reading the wikipedia article on the topic, thinking about the notion of "exist" a lot, thinking about what mathematics beings in a cellular automata might come up with, and reading the comments here (FeaturelessPoint's were particularly helpful). I might write a more detailed comment on my thoughts in the future, but now I don't have time, good luck to everyone!
I'm not the best at explaining things or at English in general, so it's unclear how much value you will find in my explanation, but I can't say no to such a direct ask, so I will at least attempt to explain the "exist" part.
So, first off: I'm assuming all the usual sensible things: no god, no human-independent, objective moral facts, no dualist consciousness. If you believe in these, I don't think you will find much value in the following, so you should look at other sources first to deconfuse yourself. (Also, some theory of computation knowledge is probably needed, in particular you need to be able to intuitively understand how general a concept computation is)
Let's start with assuming this reality exists only (the usual stance).
Let's simulate, on an ordinary (but more powerful than any actually available rn, imo) computer, an appropriate cellular automaton (for simplicity) with intelligent life. The beings in it talk with each other, fight, think about consciousness, etc. So far there is nothing special here. If you accepted the stuff in the prev paragraph, then you agree that they do actually exist (it doesn't matter whether you are implemented with carbon or with silicon).
You pause the running program. (This step is prob not necessary, but with it, it's simpler to explain.) Have they stopped existing? You can get another program which, by reading the memory of the simulator program, shows you their cities. They are simply frozen, but clearly still exist.
Here comes the trick: notice that it is simply your interpretation (and the interpreter program's too) that the bit string in memory is cities, beings, etc. You could invert every bit in memory and write a new program which, when showing you the cities, interprets the memory inverted. Or write another program which flips the first bit before interpreting; then, if you also flip the first bit in memory, that program too will still produce the same output.
Reality has no way to ensure that a given bit string in memory should be interpreted in a specific way. Okay, but then you could even interpret the number in memory as any other number: you have a completely general N->N map. But that means that the freeze-frame picture of their cities could easily be something completely different, even a freeze frame from a completely different automaton, or the previous step of the same one.
As we accepted that one of them exists, and there is no objective way to differentiate them, all of them exist: every possible step of every automaton.
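The bit-flipping trick above can be shown mechanically in a few lines of Python (a toy illustration of mine, with an 8-bit "memory" standing in for the simulator's state): the same frozen "world" survives any recoding of memory, as long as the interpreter is adjusted to match.

```python
state = 0b1011_0010            # a toy 8-bit memory snapshot

def render(memory):
    # Original interpreter: reads the bits as-is.
    return format(memory, '08b')

inverted = state ^ 0xFF        # flip every bit in "memory"

def render_inverted(memory):
    # New interpreter: un-flips the bits before rendering.
    return format(memory ^ 0xFF, '08b')

# Both (memory, interpreter) pairs show exactly the same frozen world.
assert render(state) == render_inverted(inverted)
print(render(state))  # 10110010
```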
Furthermore, the electrical charges in the memory of the computer also don't have a sign put on them by reality saying that they are numbers; only we interpreted them as a big natural number! We could just as easily interpret a carbon atom as that specific big number 'N'; it has exactly as much objective connection to 'N' as the electric charges do.
So this means that every possible Turing machine's computation exists if we just have something, absolutely anything, in our "starter" universe, because we can interpret that as any possible natural number.
Now, assuming our universe is also computable, we too could currently be in one of these simulated universes rather than the "starter" universe; we have no way to tell. So we've got (CUH + at least one featureless point) as our conception of existing stuff. I think the point can be removed, and that is achieved by inspecting what happens in your brain when you think 'exist' from a third-person view, but I really should sleep now, so that's homework.
EDIT: Ah, this isn't correct unfortunately. My plan was that while assuming the previous existence concept, I transform the reader's view of reality to a state where it's easier to see the other existence concept, unfortunately I did not realise that I used the new existence concept in the derivation, so it fails. (From the view of the old existence concept I incorrectly assumed the existence of the incorrect map of a being existing in my map) A wholly new method is needed to illustrate why the new existence concept is desirable, if I ever come up with a linearization of my intuition about it, I will write a new comment.
I reply now to thank you but I will have to reread your message a couple of times before I can claim I understood. I mainly wanted you to be aware that some people read your comments.
I share your assumptions, for what it's worth (I don't think you meant that comma between "human-independent" and "objective moral facts").
I'm glad to hear my comments here were helpful to someone, whenever I comment here it's very easy to feel like my comments aren't communicating successfully and the only people who get what I'm intending to say are people who already know it, which is tiring. This comment makes commenting feel worthwhile, at least on this post.
His mention made me reread all your comments on this post! I genuinely admire your (and others) dedication. I usually start writing a comment, find it very tedious to explain all I need to explain, and quickly give up, thinking that nobody will read it anyway. I find it very difficult to change anybody's mind and hardly worth the cost. But I'm totally wrong on this. This entire discussion is evidence of it.
This whole conversation seems about as productive as humanities scholars arguing about whether a plane on a treadmill can take off, without bothering to learn the physics behind powered flight. Without the right mathematical grounding, armchair philosophy on a deeply mathematical proposition is going to lead to some wacky and incorrect conclusions (and not saying this post necessarily is wrong, but the *danger* of being wrong is exceptionally high). I'm reminded also of endless early 2000s internet threads on whether 0.999... = 1.
> If you pick any particular number - let’s say 408,170,037,993,105,667,148,717 - then it will be shockingly low - approximately 100% of all possible numbers are higher than it. It would be much crazier than someone trying to pick a number from one to one billion and choosing “one”. Since this will happen no matter what number you pick, the concept itself must be ill-defined.
There's a similar argument that you can't pick a real number uniformly at random from an interval (if you pick a random number between 0 and 1, any number you could pick will be less likely than 1 in a billion) - why is exactly one of these arguments persuasive?
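A small Python sketch of the real-number case (my own illustration): any *specific* draw from [0, 1) is essentially never repeated, yet the distribution is perfectly well-defined because intervals, rather than individual points, carry the probability. The naturals lack any analogous uniform measure, which is arguably why only one of the two arguments bites.

```python
import random

random.seed(1)
x = random.random()  # one uniform draw from [0, 1)

# Each double has probability ~2**-53, far below one in a billion,
# so this exact value should never come up again in a million tries.
repeats = sum(random.random() == x for _ in range(1_000_000))
print(repeats)  # 0 (with overwhelming probability)

# Yet intervals get sensible probabilities even though every
# individual point gets probability zero.
below_half = sum(random.random() < 0.5 for _ in range(1_000_000)) / 1_000_000
print(round(below_half, 2))
```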
Let's go back from the beginning, because I think people are not quite grasping the profundity of this Tegmarkian view. The basic postulates of this cosmological view are:
- self-existing mathematical structures are *already* encoding every possible computation, including the one that simulates this very universe's physical laws, as well as every other possible computable universe. (the "self-existing" part is an axiom of the theory; this universe being computable is a common hypothesis of physics; the rest is just a basic theorem of computation)
- this is enough for the biological organisms in these 'simulated' universes to be conscious of themselves and their surrounding universe, as long as they have the (simulated) physical feedback loops necessary to sustain individual sentience.
- this 'simulation' operates in the manner of a Universal Dovetailer. The Universal Dovetailer ("UD" from now on) is a computational structure (a "program" for simplicity) that enumerates and executes every possible program, step by step, in an interleaved way. See https://alwaysasking.com/why-does-anything-exist/#The_Universal_Dovetailer for details.
Now, this is not a claim that there is a big computer at the bottom of everything (in which case you'd rightly wonder who wrote the program, and why it's coded in perl, etc.). Rather, the claim is that the UD is a self-instantiating logical fixed point. Since the UD simulates every possible computation, it also simulates itself, and every possible variation of itself. But you can easily prove that, within the UD's sense of time (an ongoing computation creates a structure of time within itself), longer programs will be simulated exponentially slowly, simply because there are exponentially more of them, and they share the UD's finite computational capabilities. You simply cannot write a Universal Dovetailer in any programming language or mathematical representation that fails to simulate longer programs exponentially slower than shorter ones. (Also, the UD's finiteness need only be local; it can possibly be globally infinite and do a spatially infinite amount of calculations at each step, while still doing those interleaved steps one by one.)
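For readers unfamiliar with dovetailing, here is a minimal Python sketch of the interleaving scheme (my own toy: generators stand in for Turing machines, and programs are admitted one per round rather than enumerated by length):

```python
from itertools import count

def program(n):
    # Stand-in for "the n-th program": an endless computation that
    # records (program id, own step number) at each step.
    for step in count():
        yield (n, step)

def dovetail(total_steps):
    # Each round admits one new program, then advances every
    # admitted program by one step.
    trace, running = [], []
    for n in count():
        running.append(program(n))
        for p in running:
            trace.append(next(p))
            if len(trace) == total_steps:
                return trace

trace = dovetail(15)
print(trace[:6])  # [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

Program 0 is stepped every round, while programs admitted later accumulate steps more slowly. In a real UD, programs are enumerated by length, and since there are 2**k programs of length k, longer programs are additionally diluted exponentially, which is the bias the comment above relies on.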
So the simplest possible UD becomes a logical fixed point, because it's the one that gets simulated the most in *every* UD, including non-simple ones with useless epicycles. This is enough to ground the structural unicity of the simplest UD. If any logical structure has the capacity to be self-instantiating and infinitely generative, the UD is it, and due to any UD's bias towards the simplest instructions, the simplest UD wins out. From here on "UD" means this simplest UD.
This gives you a good basis to ground (accursed) anthropics. The outermost, most objective reference points you can find are not conscious observers within universes, or classes of them, or instants of observer-consciousness within worlds, but instants where the UD is simulating a conscious being. The steps where the UD is simulating inert matter like a stone obviously don't count for anthropics, because the stone (presumably) doesn't have the physical feedback loops that would allow it to sustain an individual subjectivity, let alone wonder what kind of world it finds itself in. And since the UD is (here, simplistically) modeled as a single-threaded computation, you can do stats on those instants, and ground anthropics on them.
So if you find yourself as a conscious observer within this Tegmarkian multiverse, the question becomes, what kind of world can you expect to observe?
The answer is simple: the mathematically simplest one that is still complex enough to give rise to a rich physics that can sustain evolutionary feedback loops. That's where the vast majority of computation steps instantiating consciousness are found in a UD.
And that happens to be a great match for the world we find ourselves in! Including discoverable physics based on relatively simple mathematical laws, with apparently arbitrary constants fine-tuned to make complex life possible. Everything that your theistic hypothesis explains, this explains too.
So let's compare this kind of theory to the philosophical theism that BB defends. We can define the theistic hypothesis as postulating an entity ("God") which is 1) self-sufficient, 2) generative, and 3) good. As BB has explained in detail, this predicts many of the features we find in the world, except for the bit where we'd (naively?) expect the world to be *more* good than it appears to be.
Now, self-sufficiency and generativeness are needed features of *any* possible cosmological foundation. You can't have a foundational theory without them. In this Tegmarkian view, the UD is also self-sufficient and generative. But in the theistic view, God is self-sufficient and generative *by fiat*. Here we actually have some structural justification for the UD to be self-sufficient (it's a logical fixed point), and generative (it's literally specified as a maximally generative program).
Note that God's goodness is not only problematic (see the epicycles of theodicy), but also an extremely onerous hypothesis. The sense of 'good' is one of the most complex things in human culture. Attempts to formalize it simply are also full of problems, pace Bentham and his defenders, human and canine alike. Like Scott hinted in one of his replies, the sense of "good" is an evolutionary product, the accumulated result of millions of years of biological evolution, plus ~100k years of much faster cultural evolution, of trying out and integrating strategies of collaboration/competition. To put *goodness*, of all things, at the cosmic level, as a fundamental self-existing element of reality is the most onerous hypothesis I can think of. Literally anything else would be simpler, including a tiger jumping at you right now out of nowhere.
And that's why the Tegmarkian view wins. It can do the whole job, including predicting a fine-tuned, comprehensible universe within which complex conscious beings develop a sense of the 'good'. And it only requires some maximally general postulates - pretty much the bare minimum to have anything at all.
Finally, I'll argue that the Tegmarkian view is compatible with a spiritual, near-theistic view too. But that's optional; it's not a necessary part of the original theory.
For that, we start with the fact that biological life is compositional. You have life within life many layers deep — organelles within cells within organisms within superorganisms, and possibly higher (entire ecosystems are also not quite unlike organisms). Since bio life is basically feedback loops within feedback loops, and so is consciousness at the informational level, it's reasonable to think that consciousness may also be compositional, so there's consciousness within consciousness. All it requires, in both cases, is the combination of strong feedback loops with a clear enough demarcation between the (super) organism and the rest of the universe. So maybe individual cells are (very weakly) conscious; maybe brain hemispheres are; maybe an entire culture has an associated common super-consciousness; maybe even a tightly bound couple has one; maybe the entire Earth as an ecosystem does (that's the Gaia hypothesis). On the other hand, unnatural combinations like "my neighbor Peter and Donald Trump together" are not a conscious entity, because there's no strong natural demarcation that would put them together without including a lot of other stuff too.
If we assume something like this, then on a Tegmarkian view, there's no reason why the UD itself couldn't be seen as fulfilling the conditions for a global consciousness of its own. After all, the logical loops which make the simplest possible UD stand out within every possible UD's simulation of every other possible UD, are a bit like atemporal feedback loops too. But the UD is all about manifesting all possibilities without excluding anything, so beyond a sense of simplicity, it would be a completely neutral consciousness, quite unlike what we know as bio organisms.
And within that almost unimaginable whole, each computed universe stands quite separate from any other, being causally closed, so maybe each separate universe could have something like a cosmic consciousness to it too.
This would match pretty well with the reported experiences of spiritualists and mystics of all kinds. Not the part where they claim to break the laws of physics, that would still be impossible here, but the part where they report *feeling at one* or somehow entering into experiential contact with higher planes of consciousness, or God or gods. After all, "I" am a literal part of all levels of consciousness above me, from the smallest coherent human group I'm a part of, all the way up to this universe, and to the entire UD too. It's the stuff we're made of; in Indian terms, "atman = brahman". It's only a matter of tuning in, a question of attention. Hence the weird depths of meditation, which are hard to explain on a purely materialistic evolutionary basis.
Even more interestingly, this would match pretty well with the *variety* of types of spiritual experience, which individual religions usually have trouble accommodating. Shamanistic, Earth-based spirituality would be more like being in touch with Gaia; the historical God of the Tetragrammaton could be the spirit of the Jewish tribes at the time. And mystics of all religions have reported both personal and impersonal experiences. For personal experiences of the higher type, the relevant god appears as a great entity with a flow of its own and possibly advice or orders for you; that could mean you made contact with a larger collective consciousness, anything from the Earth all the way up to this entire universe. And pure impersonal, neutral experiences of pure spacious openness would mean you made contact with the UD-consciousness itself.
Again, to clarify, I like this and personally think something like this is true, but it's not a necessary part of the Tegmark view.
I read your comment twice so I could minimize my misunderstanding. I know that does not guarantee success, but I made the effort.
"The basic postulates of [Tegmark's] cosmological view are: ... self-existing mathematical structures are *already* encoding every possible computation, including the one that simulates this very universe's physical laws, as well as every other possible computable universe. (the "self-existing" part is an axiom of the theory; this universe being computable is a common hypothesis of physics; the rest is just a basic theorem of computation)"
Declaring "self-existence" an axiom does not make it true or real; it just attempts to put it beyond question. Such axioms MUST be questioned because the elaborate conceptual structures built on them are meaningless if the axioms are untrue. Axioms serve legit purposes--we cannot know everything--but Tegmark's axiom seems too carefully built to be taken on faith.
There is ZERO evidence that Tegmark's postulate (above) describes anything real. It is indistinguishable from any other fantasy.
Nothing real can be grounded on fantasy ideas. Not a "Universal Dovetailer" (UD), not anthropics, nothing.
Tegmark's view cannot defeat the various arguments for God because the proponents of those arguments (theists) generally believe their deity is real and active in the world. Their arguments cannot be defeated by fantasies.
"But in the theistic view, God is self-sufficient and generative *by fiat*."
Huh. As is the UD.
Fortunately, theistic arguments can be defeated without resorting to unmoored views like Tegmark's.
You know, I don't really disagree with that. It's also my attitude most Tuesday afternoons. You'll find comments from me elsewhere in these threads making exactly this argument, that sometimes the sane thing is to say "I don't know" rather than choosing among a bunch of speculative options.
It's just a question of how far you're willing to speculate. If you want the safest attitude, you can stay up to date on the consensus view in physics (the Standard Model, General Relativity, the Big Bang with inflation, black holes, etc.) at whatever level of detail is appropriate for you, and call it a day. Anything beyond that, we just don't know. Why is there anything at all? We don't know. Why do we have consciousness? Same answer. Is math discovered or created? Same. I'm sure you're aware that there is a pretty big list of important questions that we don't have definitive answers for.
But if everyone took this view and stopped asking questions beyond the edge of knowledge, then knowledge would never advance! So people speculate. They make up possible theories that cannot be tested, or are not even very precise in their formulation, but sound neat and have some explanatory power. Some are paid to do that as scientists or philosophers or whatever, and they do it at a high level. People even make up meta-theories about how we come to know anything, and then argue about them. The frontier between proper research material and purely speculative ideas is not watertight. Many, if not most, successful and accepted theories started their life as speculative ideas with weak support. Over time, some ideas gain traction, others don't and are forgotten.
And many of us on forums like this enjoy watching from the edges, reading stuff to keep up to date with the latest ideas and how they are faring, and discussing such things here. You get to hone your understanding of complex ideas, and test it against live opponents, what's not to like?
So yeah, I 100% agree that you don't need to bring the full heaviness of Tegmark in, to say you don't believe in God. It's perfectly reasonable to just look at the idea that the world was made by a conscious, moral agent and say, "no, I don't buy that".
But I also find Tegmark's thing interesting in itself. Can't say at this point that I 100% believe in it, but it's the only speculative, vaguely scientific theory that goes all the way down to a self-supporting foundation, and that makes *some* sense to me. So I welcome the opportunity to spend an afternoon re-reading about it and getting up to date with some people's objections, and making up my mind as if it mattered. And if Scott brings Tegmark up to reply to BB's litany of arguments, I'm happy to chime in in support.
I am untroubled by speculation; as you say: it's necessary as well as useful.
But I am troubled by unacknowledged speculation. In these threads on Tegmark's ideas, the purely speculative nature of these ideas seems unacknowledged, ESPECIALLY since these threads began under the title "Tegmark's Mathematical Universe **Defeats Most Proofs Of God's Existence**". (**emphasis** added.)
Those proofs of God's existence are rarely regarded by their proponents as speculative; they generally believe their God is quite real. To offer pure speculation to "defeat" non-speculative claims is foolish. It cannot be done. So the very title implies that these comments are not speculation.
This is why I come to the belief that many on these threads don't think they are engaging in speculation. You might be, but it seems others are not.
So, speculate away! I only seek clarity: speculation should be expressly acknowledged, otherwise one can reasonably regard these comments as non-speculative. And defective thereby.
Given mathematical omniscience, seems to me a conceptually straightforward (albeit inconvenient for us mere mortals to check) way to prioritize simplicity would be to
1) work out the smallest finite state machine which produces an unambiguous description of the mathematical object in question as its output, and nothing else, then halts,
2) check how many states that machine contains, and finally
3) give the mathematical object in question probabilistic weight inversely proportional to the Busy Beaver number for machines of that size.
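The three steps above can be sketched directly, at least for the machine sizes where the Busy Beaver number is actually known. This is an illustrative assumption-laden sketch, not the commenter's actual proposal: the table below uses the known maximum shift counts S(n) for 2-symbol Turing machines, and BB(n) is uncomputable for larger n, which is exactly why the scheme is "inconvenient for us mere mortals to check".

```python
# Sketch of the proposed weighting: an object whose smallest describing
# machine has n states gets weight 1/BB(n), then we normalize.
# Known Busy Beaver shift counts S(n) for 2-symbol machines
# (S(5) was settled in 2024); beyond this the values are unknown.
KNOWN_BB = {1: 1, 2: 6, 3: 21, 4: 107, 5: 47_176_870}

def weight(num_states: int) -> float:
    """Unnormalized weight for an object whose minimal describing
    machine has `num_states` states."""
    if num_states not in KNOWN_BB:
        raise ValueError("BB(n) unknown (and uncomputable in general)")
    return 1.0 / KNOWN_BB[num_states]

# Normalize over the sizes we can actually evaluate:
total = sum(weight(n) for n in KNOWN_BB)
probs = {n: weight(n) / total for n in KNOWN_BB}
# Simpler objects (smaller machines) get overwhelmingly more mass,
# and the falloff is far steeper than any exponential complexity prior.
```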
This could conceivably be implemented by a jumble of individually primitive parts with no hierarchical, or even coherently causal, organization. Consider a FSM built like a ribosome and string of RNA, protein being the output tape. Within some higher-dimensional manifold those ribosome-analogs end up loosely sorted by relative size, perhaps by a stochastic mechanism similar to what gives beaches and other sediment beds locally consistent fineness of sand, silt, gravel, or clay, so they're mostly competing with same-size peers for input materials. Such materials would be fragments of solidified lower-dimensional spacetime, selected and installed based on compatibility with the relevant ribosome-analog's local rules, sorta like trying to solve a 4-dimensional jigsaw puzzle by grabbing blindly from the set of all possible pieces and checking them against a https://en.wikipedia.org/wiki/Go/no-go_gauge
Turing machines that don't halt are statistically irrelevant from an anthropic standpoint, since they only ever extrude one universe each. Busy Beavers, on the other hand, gobble up all the locally available copies of some specific type of event, while requiring relatively few end-cap pieces. If that event is, say, a proton decaying (or some zigzagging decay / reform cycle), nearby alternate universes with only stable or near-stable protons are mostly unaffected, but if it's something like "lone photon continues unperturbed," any cosmology within diffusion range involving lots of wide-open spaces between light sources might be hindered by the resultant shortage of that critical piconutrient.
Tegmark's mathematical universe feels pretty natural, and I have had thoughts along this line previously when shown some existence proofs for god, but it feels like when I try to formulate it carefully things go a bit wrong. If I have some finite alphabet to describe physical laws and weight the universes to favor lower complexity, and "who I am" is chosen between some set of possible conscious beings in the universes, it seems like with probability 1 I will be in a universe with infinitely many conscious beings to choose from, if any such universe is possible. (Additional complexity should be penalized by a finite multiple to probability given a finite alphabet, so that our probability measure sums to 1.) My possibly wrong understanding is that our universe appears to be finite (say in terms of how much energy was present in the big bang). So my observing this universe appears to be a probability 0 event. Does anyone have good ways to resolve this?
1) there are no possible universes with infinitely many observers and a finite description. This seems obviously wrong. Conway's Game of Life is Turing complete, so it is likely possible to create an observer in it. Now just make the initial conditions periodic on an infinite grid and you have infinite observers.
2) there could be other big bangs far away but in the "same universe" following the same physical laws, such that this universe is actually infinite and so are the number of observers I am drawing from in this universe. (For instance, our universe could have periodic initial conditions with a period far longer than the lightcone we can see.) Seems possible, but if our application of Occam's razor gives different answers depending on whether there are "duplicate big bangs" far enough away that we can never observe them, this seems quite bad.
3) Something something multiverse interpretation of quantum mechanics gives infinite Everett branches in this universe with infinite observers total. This seems like it might be technically true in some sense, but I don't think it resolves the problem. Would need to think about what counts as a different observer to draw from. Seems like you should naturally be integrating over the Everett branches not summing in which case you are still 0 probability compared to a universe with infinitely many observers in each of a collection of Everett branches.
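The parenthetical claim above — that penalizing each extra symbol by a fixed factor over a finite alphabet gives a measure summing to 1 — can be checked with a short geometric-series computation. The alphabet size and penalty factor here are assumptions chosen purely for illustration:

```python
# With a finite alphabet of size A there are A**L descriptions of
# length L, so giving each one weight r**L yields total mass
# sum over L of (A*r)**L, which converges iff A*r < 1.
A = 4            # alphabet size (assumed for illustration)
r = 1 / (2 * A)  # per-symbol penalty factor, chosen so A*r = 1/2

# Geometric series over description lengths L = 1, 2, ...
# (A*r)/(1 - A*r) = 1 exactly when A*r = 1/2, so truncating at
# L = 199 leaves an error of only 2**-199.
total = sum((A * r) ** L for L in range(1, 200))
```

So the measure normalizes fine; the commenter's worry is not about normalization but about all the mass concentrating on universes with infinitely many observers.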
Tegmark's mathematical universe seems unnatural because there is no evidence that it is more than imaginary. Many imaginary worlds have been conceived (LOTR, Star Trek and Wars, Hogwarts, the Matrix, etc) for which there are no reasons to believe in their actual existence.
Our universe is likely to be finite, and the probability of any one of us existing was very small, and yet, here we are. It's not zero. Obviously.
If every possible universe exists, most will be infinite (adding a place to stop adds some complexity).
Almost all states of an infinite universe are at almost maximum entropy (there will still be an infinite number of low-entropy states, and entropy will sometimes decrease a lot).
And there are infinitely more beings in an infinite universe than in a finite one.
So, there is still a Boltzmann brains problem, but I agree it is a problem even without assuming every possible universe exists.
In fact, it is just like the MWI of quantum physics, but at a more meta-physical level:
If we assume Solomonoff induction is the correct way to do induction, then every problematic conclusion we get by treating the weight of a particular universe as a "measure" of how much that universe exists, we also get by treating it as just a probability of existence.
Also, we can't really know whether we are Boltzmann brains or not. If we could, it would not be a problem: we would just see, a posteriori, that we are not, even if it is much more probable a priori.
>But I would also add that we should be used to dealing with infinity in this particular way - it’s what we do for hypotheses. There are an infinite number of hypotheses explaining any given observation. Why is there a pen on my desk right now? Could be because I put it there. Could be because the Devil put it there. Could be because it formed out of spontaneous vacuum fluctuations a moment ago. Could be there is no pen and I’m hallucinating because I took drugs and then took another anti-memory drug to forget about the first drugs. Luckily, this infinite number of hypotheses is manageable because most of the probability mass is naturally in the simplest ones (Occam’s Razor).
What is your evidence that people actually reason this way, and what would it mean in principle for there to be a correct hypothesis about this? People reason the way they do because they adopt social behaviors that others deem worthy enough to accomplish their goals. Metaphysics is not part of the ordinary life of reasoning, and I see no epistemic reason whatsoever to accept whatever you deem worthy of calling a hypothesis that is bound by Occam's Razor vs a maximally inflationist ontology that accomplishes the same goals - because metaphysics never played a role in our reasoning processes in the first place.
>What we really do when debating hypotheses isn’t wait to see which ones will be falsified, it’s comparing simplicity - Occam’s Razor. Which is more likely - that OJ killed his wife? Or that some other killer developed a deep hatred for OJ’s wife, faked OJ’s appearance, faked his DNA, then vanished into thin air? Does this depend on the police having some piece of evidence left in reserve which they haven’t told the theory-crafters, that they can bring out at a dramatic moment to “falsify” the latter theory? No. Perhaps OJ’s defense team formulated the second-killer theory so that none of the evidence presented at the trial could falsify it. Rejecting it requires us to determine that it deserves a complexity penalty relative to the simple theory that OJ was the killer and everything is straightforwardly as it seems.
when you just wrote this
>Every observation has an infinite number of possible explanatory hypotheses. Some of these could be falsifiable - but in practice you’re not going to falsify all infinity of them.
In practice, juries aren't instructed to deliberate with respect to Occam's Razor. Neither the lawyers nor the police operate with it in mind. Philosophers of science/epistemologists aren't brought in for expert testimony where they try to preach to the jury about theoretical virtues and falsifiability and simplicity and other competing models of explanation/scientific theories. The jurors simply act in accordance with human practice of behaving in the various social contexts that encompass the hearing. While these posthoc appeals to simplicity and falsifiability can influence someone's reasoning or aesthetic preferences in reasoning, it's not clear at all that they are involved in, let alone essential to, deliberation. (Note that there is no universally accepted account of simplicity or falsifiability, so even applying those principles to e.g. a jury deliberation is going to display huge variability in what the subjects permit to encompass those concepts and what they rule out. It's even possible [if not probable] that the subjects will stick to the judgments they issued earlier and operationalize the principles so as to defend their prior commitments without using the principles as tools to update their reasoning.)
Why even give credence to Tegmark here by arguing about it?
Math exists WITHIN the universe, the end. The fact that many things inside what WE perceive as the universe contain math is not what defines it. The fact you watch a movie on a screen does not mean what you just saw on the screen is real NOR DEFINES all the screen can show you.
Moreover, if the universe was merely mathematical, we can then code god, which would make Tegmark into some sort of cyberpunk christian at best. So.much.nonsense.
I know this is an "old" post by this point, and that my gripe was probably debated in the comments already, but I think your notion of "falsifiable" is wrong, or at least not what people (among whom are the relevant philosophers who coined and used the term) mean when they say falsifiable.
You give examples of things you call unfalsifiable in the sense that it's probably very hard to prove them (a conspirator doing the OJ killings, inferring things about an archeology question where all the pots and whatnot have been dug up already). That's not unfalsifiable; unfalsifiable means there isn't even a theoretical way, given whatever resources and compute you want, to answer the question (or more accurately, to differentiate IN ANY POSSIBLE WAY between the world in which the hypothesis is true and one in which it is false).
A world in which the devil magically put dinosaur bones in the ground is completely falsifiable from a world where there is no devil and dinos existed. In the former world there's a devil, you can presumably talk to him, he can show you how he creates bones out of nothing and puts them in all sorts of places. In the latter there isn't. There are experiments you can hypothetically hold, or at least happenstances that can hypothetically occur, that prove to you that you're in world A and not B, or vice versa. "Unfalsifiable" would mean there aren't such experiments or happenstances, even theoretically.
If a theory is unfalsifiable it means it's tautological (logically the two terms mean the same thing), and as far as I know you (Scott) don't like tautological arguments.
The idea of falsifiability was designed to combat badly defined questions that mean nothing. If a person claims animals "don't have a soul, but humans do", and you keep asking them what it even means to have a soul - if they don't have a well-defined model, it can be shown that there's no hypothetical way to differentiate between a universe where humans have this thing called a soul and one where they don't. It's a sanity test you can run on a hypothesis to check that it's not a nonsensical tautology.
Souls are an easy example, but another could be highschool level debates on determinism vs free choice. In advanced philosophical debates there might be some sort of way to define what that even means, but when you're arguing about it in middle school or high school, nobody could answer the question "wait, what possible experiments can you hold that will tell you if you are in a universe with free choice or not? assume you have every resource you want. In fact, what could even happen that will prove to you one or the other?".
Realizing this has no answer (when you're the average nerdy school student) should then inform you the debate itself is ill defined, even though you have a strong intuition its a real and important question.
Falsifiability is, in fact, a must for any theory. It doesn't break down in every day life.
The commenters stating that this entire debate isn't falsifiable are also wrong, since you need to ask the question "falsifiable from what?". Is Tegmark's theory falsifiable from the existence of God as a magical dude you can talk to? Definitely, in a similar way to the devil example from above. Is it falsifiable from "the way we understand the universe now"? Maybe; I'm not sure there's a canonical baseline theory that addresses the concerns Tegmark's theory does.
“I admit that in rejecting this, I owe an explanation of how morality can be interesting/compelling/real-enough-to-keep-practicing without being objective; I might write this eventually but it will basically be a riff on the one in the Less Wrong sequences.”
Yeah came here to say something related.
Moral realism is hardly *that* obscure of a “only stupid theists believe this one” sort of moral and philosophical belief to hold.
Yes the moral knowledge argument proceeds on the assumption that it will be convincing to, and only to, people who believe in an objective moral reality.
But there are plenty of pretty robust arguments for objective morality out there; it’s not just endless question-begging.
If you’re not a moral realist when you encounter the argument from moral realism/moral knowledge then sure it’s not going to be convincing, but it doesn’t follow that moral realism is just a baseless view to hold. It’s just a case of differing priors.
I think moral realism probably is quite obscure. I think it's a view endorsed almost exclusively by philosophers and people who've been influenced by philosophers, and only specific philosophical traditions, at that. I don't think most people today are moral realists, nor do I think most people in the past were, either.
As far as arguments for objective morality: there's lots of arguments out there, but I'd hardly call any of them robust. There are no good arguments for moral realism.
I don't find moral realism ridiculous, just counting it as a "proof" of God's existence. It's more "God has to exist, unless this popular philosophical position is true."
> you are clearly suggesting that the inference being made in the moral knowledge argument/EDA is valid or cogent, therefore the only thing that could be "ridiculous" would be the premise that we have moral knowledge, right?
What is ridiculous is assuming that some controversial position is true and then when encountering counter-evidence to it, keeping your prior trapped and treating it as evidence in favor of some other, even more controversial position, instead of reevaluating your initial assumption.
Here is an example. Suppose you don't know whether a person is going to order a cake for dessert or not. You can think of some arguments why they would: cake is on the menu, cakes are tasty, a lot of people order them, the person has enough money to order it, etc. You therefore conclude that the person is going to order one.
But then a new thought crosses your mind. Wouldn't it be a huge coincidence if this person ordered specifically cake? There are other dessert options. And it's not even clear that the person wants a dessert that much. Why would they spend their limited money specifically on cake instead of all the other things they like?
Would it be reasonable, therefore, to conclude not only that this person will indeed order cake, but also that they are cake-obsessed? That they value cake so extremely that it explains away the coincidence of them ordering cake instead of spending their money any other way?
> Sure if one starts with an extremely low prior on Theism
The point isn't that Theism is so improbable that you should immediately refute anything that implies it. This isn't exactly a bad heuristic, but we are talking about a much weaker claim: whatever your prior on Theism is, it costs your theory *some complexity penalty* if it requires Theism to be true.
> But I did order cake?? and now you have to explain why?
Only *after* we have indeed observed that the person ordered the cake can we justifiably increase our confidence in the statement that the person cares about cake a lot. Until then it would be a ridiculous reasoning mistake to simply jump to the conclusion and assume both the future order and the cake obsession.
I'm starting to suspect that you are confusing "I believe in X" with "X is true". To be clear, cake order is a metaphor for the latter, not the former.
> Are your priors such to even be open to the possibility that you might be wrong?
They are open enough that when I hear "Boltzmann brain" I do not immediately assume that I have to be extremely confident just on priors that I'm not one. I think we have pretty good a posteriori evidence against this hypothesis, though.
I don't consider fine tuning particularly interesting, as we are dealing with a sample of one here, so most of the probability mass goes to out-of-model error. Inside the model, yes, it's unlikely that our universe originated on sheer randomness, and there probably is some kind of optimization process involved. But any particular kind of optimization process is very unlikely, especially if you demand that it's also sentient, infinitely good and powerful.
If I encounter some evidence that evolution and naturalism *do not* solve psychophysical harmony, I'd be happy to incorporate it into my world model. For now I see that all Matthew's arguments about it are rooted in being a dualist and not understanding evolution, and that the whole concept of "psychophysical harmony" is confused. https://benthams.substack.com/p/arguments-for-god-tier-list/comment/83765728?utm_source=activity_item
I'm pretty confident that torturing small children to death is wrong. Again, I'm ready to entertain an argument to the contrary, appealing to some other human values. But if God Himself were to tell me, appealing to His divine authority, that it's actually good to torture children, because He is the one who decides what's good and what's bad, and He said that it's good so end of discussion and my values are just wrong, that wouldn't count as one. In such a situation I'd claim that God got it wrong and is evil, and then we would come to some negotiated agreement as reasonable beings, where my BATNA would be to coordinate as many people as possible to go full Elisha ben Abuyah on His divine ass.
I agree it’s silly to call it a proof — but in my experience, the more sophisticated arguments like the argument from moral knowledge are basically never referred to as “proofs” by their defenders. The arguments that get called proofs are invariably the less convincing ones. (The anthropic argument is an exception, since the paper under review Matthew and I wrote on it is called “A Presumptuous Proof of Theism” or something — but that’s just for the alliteration!)
I don't know about "ridiculous" but it is one of the least plausible positions to enjoy so much popularity in all of academic philosophy. There isn't a single good argument for moral realism.
>But also I have never understood why anti-realists aren't all just error theorists
Because something can be true without correspondence to a special-purpose domain of truth makers; and useful without being true; and meaningful without being useful or true
I'm not quite following that. I like quasi-realism, and I don't think it's extreme, and I can't tell whether you think extremism is good or bad. Are you influenced by Lance Bush?
What is true without correspondence to reality?
What is correspondence and how do you ever verify anything "in the world" corresponds to something "in your mind," which by the presuppositions involved in much of the discourse is inaccessible since you're only ever "in your mind?"
>"I have never understood why anti-realists aren't all just error theorists, like non-cognitivists, subjectivists etc. all just seem to be using language weirdly"
Speaking personally (from a broadly pragmatist perspective): error theory rests on a semantic analysis of moral language that seems to take us further away from the goal of providing an adequate description of moral language, rather than closer to it.
Error theory works fine as an analysis of the language of *realist moral philosophers*, since those philosophers have explicitly specified a semantics for their own uses of moral language. But ordinary moral language (like all ordinary language) doesn't come with its own semantics. A version of the principle of charity suggests that if we want to understand what people are doing with language, semantic analyses that attribute falsehood to large domains of discourse aren't going to be particularly helpful.
Lance Bush's philosophical position (as I understand it) is that moral language doesn't have a determinate semantics. He believes this position receives support from empirical work revealing variations and inconsistencies in lay responses to survey questions on metaethical topics.
I think that's fine, as far as it goes. One problem I have with Lance's writings on this topic is that he tends to skirt around explicit engagement with the problem of whether, under what circumstances, and to what extent, *any* domain of language can be said to have a determinate semantics. He has gestured towards the larger metaphilosophical issues at stake, but hasn't really given any clear indication of how he thinks they should be resolved.
(This problem isn't unique to Lance, of course — much contemporary writing on metaethics suffers because it tries to address metaethics too narrowly, in isolation from broader metaphilosophical issues. Without a more clearly elaborated stance on those broader issues, I'm not sure that the metaethical debates can even be said to be meaningful.)
Being Lance, he treats "no determinate semantics" as "utterly meaningless".
That's a bit uncharitable.
Lance says some kinds of metaethical views (e.g. "moral language describes objective non-natural facts") are unintelligible. If you want to round that off to "utterly meaningless," fair enough.
He *doesn't* say moral views are themselves unintelligible. As far as I know, he thinks he understands moral claims perfectly well.
To the best of my knowledge, he has never addressed the possibility of tensions between these two contrasting intelligibility claims.
"Lance says some kinds of metaethical views (e.g. "moral language describes objective non-natural facts") are unintelligible" Yes, but that's a different claim from saying that ordinary-language ethical statements have no determinate meaning, since it consists entirely of philosophical terms of art. And there is no evidence of its unintelligibility except his own it-seems-to-me.
This is not true. I do not equate indeterminacy with meaninglessness. On the contrary, I think ordinary moral language is very meaningful. You don't have to have determinate metaethical views when engaging in ordinary moral language to make meaningful claims.
Yea, that's basically my view! Nice to see someone characterizing it; though I don't think ordinary moral language is incoherent. That's a suggestion Don Loeb has argued for, but I don't endorse that view.
The view I defend is "folk metaethical indeterminacy." Essentially, I don't really think nonphilosophers are realists or antirealists at all; I think they have no particular metaethical views. Hence, why I'm not an error theorist. Error theory isn't consistent with indeterminacy.
"I find the moral knowledge argument ridiculous, because it posits that morality must have some objective existence beyond the evolutionary history of why humans believe in it, then acts flabbergasted that the version that evolved in humans so closely matches the objectively-existing one."
And yet most of us *do* act as if morality is objective and eternal. Take the current roiling about Nazis and Fascism: we don't argue that "well their standard of morality was a different version to the one we accept", we condemn them. Slavery is not accepted as "okay back then we believed one thing, now we believe a different thing", it's condemned as always being wrong. Abortion rights, gay rights, trans rights - there's no "we used to think A, now we think B" about that, those in favour argue that such things as rights exist, these cases involve rights, and there is an objectively correct and an objectively incorrect way to think and act about these.
Try arguing that it's just as morally acceptable/correct to be a 'transphobe' as to be an 'ally' because there is no objective set of standards, and see how far that gets you.
That's because most of us are treating morality as intersubjective - we agree to certain ideas or principles and then evaluate everything else against those ideas and principles, creating the illusion of objectivity. But you can't evaluate against an arbitrary standard any differently than you would an objective one, so it looks the same.
So what? Suppose you say it's wrong to be a transphobe because it doesn't cohere with some arbitrary ideal or principle. The transphobe can simply reject the ideal or principle.
Yes, they could.
...did you think that was some knock against what I was saying?
There's no objective set of standards that makes salad a better food than gasoline, but if you're going to feed me gasoline for dinner, as a human, I'm going to object. Similarly, if you're going to treat me in a way that might be perfectly appropriate for lizards to treat lizards according to lizards, I'm still going to object as a human and it would be ridiculous to expect me not to.
You are conflating first-order normative ethics with second-order metaethics, likely because you are opining without ever having studied either subject enough to at least minimally understand the difference in subject area. I take it this is your disposition towards every study area you comment on?
"I have never understood why anti-realists aren't all just error theorists, like non-cognitivists, subjectivists etc. all just seem to be using language weirdly, sure lay people have confused meta ethical beliefs but like for a more sophisticated anti-realist to avoid orienting their anti-realism towards substantive realism seems weird."
They may seem to be using language weirdly *to you*, but there are no good arguments that ordinary people speak, think, or act like moral realists, and that therefore noncognitivists or subjectivists are speaking any more weirdly than anyone else.
Also, noncognitivism and subjectivism aren't declarations of one's personal commitment to speaking this way; they are typically claims about how others speak, so it would be question begging to think that they have got ordinary language wrong: their positions are positions about the nature of ordinary language! If they're correct, it would be realists and error theorists who are speaking weirdly.
I'm a moral antirealist, but there is absolutely no good reason to be an error theorist. Error theory relies on a semantic thesis about the meaning of ordinary moral claims, and, specifically, the claim that ordinary moral claims purport to describe stance-independent moral facts. But why would I endorse that? I don't think it's true and I don't think there are good arguments that it's true.
> it seems reasonable to also accept that such a universe could spontaneously create stable self-replicating structures that begin to eat the surrounding entropy and create an expanding region of simplicity. In this case, most universes would eventually be dominated by relatively low-entropy environments like the one we find ourselves in.
This is not possible. Or, rather, it's just as improbable as skipping straight to the large low-entropy universe directly. A state that expands without bound and without consuming free energy *just is* a high entropy state.
The thing that makes entropy entropy is that you *can't* eat it. The only way simple things can come to exist from a maximum entropy state is a stochastic fluctuation.
Jumping in before I've read the full post, but Bentham's Bulldogs comments about cardinality are incorrect -- the way you described things is closer to correct (although I do wish you would learn some actual math and stop mangling things :P ).
It is *not*, in fact, the case that in mathematics we measure the size of an infinite set solely by its cardinality. Rather, cardinality is *one* way of measuring the size of a set, to be used as appropriate. For a subset of the plane, one might use area. For a subset of the whole numbers, one might use natural density. For sets equipped with a well-ordering you'd use ordinals, not cardinals. Etc, etc.
Usually cardinality is not a very helpful measure when dealing with infinite sets, in fact, because it's so crude and lossy. (A rectangle of area 2 and one of area 1 have the same cardinality, but they still have different areas!) I'd say one advantage of cardinality is that it can be applied to literally any set, regardless of context, whereas other measures will have a more limited domain of application; but as advantages go that's generally not a very relevant one. Most mathematicians aren't set theorists!
If someone says to you that in math the size of an infinite set is measured solely by cardinality, you can tell they haven't actually done much math involving infinite sets!
So is BB right or not when he claims that if every world has one billion real people and one Boltzmann brain, and there are an infinite number of worlds, the chance of being a Boltzmann brain isn't one-in-a-billion, it's 50-50?
I'm saying he's wrong, yes, or at least not necessarily right. Cardinality is usually not the right way to think about things outside of finite cases.
I agree you can take a measure over observers. But my point is that the measure will not actually be a good way of measuring probabilities. If your view says that 99.999999999% of people have some property, but you could rearrange the worlds so that each is filled with 99.999999999999% of people without that property, something has gone wrong.
“Measure” just is another word for “probability” in this context.
I don’t think there’s any good way to make sense of the kinds of objective probability that Scott and many others try to rely on, but whatever sort of probability might be useful here just isn’t going to be invariant under permutations.
Sorry can you explain what invariant under permutations means and why this is?
I think Kenny just means permuting the worlds will give different answers for the probability...ie he's agreeing with you that depending on how you arrange things, the probability will change, but that's just a necessary feature for a probability to have in this context.
I’m thinking of probability as a function from sets of outcomes to real numbers. Permutation invariance is the idea that if you take any permutation of the outcomes (i.e., a one-to-one total and onto function from outcomes to outcomes) then the probability of any set of outcomes is equal to the probability of the set resulting from replacing each outcome by the one the permutation sends it to. “Cardinality” is just Cantor’s term for the thing that is preserved under permutations. Your argument seemed to be stating that, since one in a billion of infinity is equal in cardinality to the rest, it shouldn’t have wildly different probability, so I was thinking of it as a permutation invariance argument. But on infinite sets, there is no permutation-invariant measure, because every infinite set has equal cardinality to infinitely many disjoint proper subsets of itself.
Oh ha Kenny responded before I could to say basically what I was going to. But to say roughly the same thing real briefly -- what do you mean you "could" arrange the worlds this way? *Are* they arranged this way? What power do you have to alter this arrangement?
Insistence on using cardinality is the same thing as saying that everything has to be invariant under every permutation. But, why does it in this case? I'm not seeing any obvious reason that should be true!
I discuss in more detail here https://benthams.substack.com/p/reply-to-scott-alexander-once-more.
He's wrong.
The number of natural numbers that are divisible by one billion is aleph_0 (that's a cardinality). The number of natural numbers that aren't divisible by one billion is also aleph_0. It emphatically does not follow that the probability that a natural number is divisible by a billion is equal to the probability that a natural number is not divisible by a billion!
Comparing cardinalities to get probabilities doesn't make any sense, and isn't something a mathematician would do. (The fact that the cardinalities are equal has other consequences, notably that you can make a 1-1 pairing between the numbers that aren't divisible by a billion and the numbers that are, that doesn't leave any number out. Or equivalently, a 1-1 pairing between the regular brains and the Boltzmann brains. In fact, this 1-1 pairing is pretty much the definition of "cardinality".)
Saying the probability is one-in-a-billion is intuitive, but I don't know of a mathematically rigorous way to obtain it. It's tricky because the "probability" that you get changes if you group the numbers/brains differently.
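The natural-density intuition can at least be sketched numerically: among the first N naturals, the fraction divisible by k settles at 1/k, even though the multiples and the non-multiples have the same cardinality. A toy sketch (k = 1000 stands in for one billion just to keep the loop fast):

```python
# Natural density sketch: among the integers 1..n, the fraction divisible
# by k approaches 1/k as n grows, even though the set of multiples and the
# set of non-multiples both have cardinality aleph_0.

def density_of_multiples(k: int, n: int) -> float:
    """Fraction of the integers 1..n that are divisible by k."""
    return sum(1 for i in range(1, n + 1) if i % k == 0) / n

# k = 1000 as a stand-in for one billion, to keep runtime small.
for n in (10_000, 100_000, 1_000_000):
    print(n, density_of_multiples(1000, n))
# The printed fractions sit at 1/1000 = 0.001, not anywhere near 1/2.
```

This is the "limit of proportions" idea, not a genuine uniform probability on the naturals; as noted below, no such uniform probability exists.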
An example where trying to compare infinities breaks particularly badly: https://www.philosophyetc.net/2006/03/infinite-spheres-of-utility.html
I have to nitpick and point out that you can't actually choose a natural number uniformly at random, and what you want is probably natural density rather than probability. :P
Some people use the word “probability” for the kind of non-countably-additive set function that you get from this concept of natural density. I don’t think it’s particularly helpful to be pedantic about which thing does or doesn’t count as “probability” here (even though I strongly disagree with the people who think these non-countably-additive functions are helpful here).
I disagree that he is wrong. He's "not even wrong": there is no question here to which the notion of probability can usefully apply.
You can have a sigma algebra and a probability measure on any set (eg trivial sigma algebra), how well that models reality is another question though.
Yeah, the real point is that you need to define your probability measure *before* you talk about probabilities. This isn't an "epicycle" as BB claims, it's a prerequisite
Some of the people in this debate (notably Scott) are committed to the idea that there an objectively correct probability function, and that closeness in approximating this correct function is key to rationality.
I think they’re wrong, and all that is needed is internal coherence.
I agree with you that there isn't an objectively correct probability measure. But at the same time, I think you can get most of the Tegmark argument, with its concentration on simple possibilities, without having to make ad hoc choices - you only need quite mild assumptions about the probability measure. E.g., *any* probability distribution on the natural numbers has the property of being concentrated on small numbers, in the sense that for any epsilon, there exists N such that (1-eps) of the mass is on numbers < N.
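The concentration property described above can be illustrated numerically. The geometric distribution here is my own arbitrary choice of example; the point is that *any* distribution on the naturals, however heavy-tailed, still puts (1 - eps) of its mass below some finite N:

```python
# Any probability distribution on the naturals concentrates its mass on an
# initial segment: for any eps > 0 there is an N with at least (1 - eps) of
# the mass on numbers below N. Illustrated with a geometric distribution
# p(n) = (1 - q) * q**n, chosen arbitrarily for the demo.

def smallest_prefix(prob, eps):
    """Smallest N such that prob(0) + ... + prob(N-1) >= 1 - eps."""
    total, n = 0.0, 0
    while total < 1 - eps:
        total += prob(n)
        n += 1
    return n

q = 0.99  # a slowly decaying geometric; it still concentrates
geom = lambda n: (1 - q) * q ** n

print(smallest_prefix(geom, 0.01))  # 99% of the mass sits below this N
```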
+1
He's neither right nor wrong, because it's nonsense to speak of infinitely many universes.
Apart from the fact that there is, was and will ever be only exactly one single universe (universe: everything taken as one), infinitely many universes is not a number of universes.
Infinity is not a number. You can do fun things with mathematical objects, because they aren't objects, just like fake diamonds aren't diamonds, but you cannot do the same fun with real objects.
Because you never have infinitely many of them.
Yes. Additionally, it is absurd to speak of splitting the atom. "Atom" means "indivisible."
Atoms are called atoms because one had thought they would be indivisible. And one was wrong.
Do you think it is even remotely possible, in any sense of the word, to be wrong about all matter (not all matter here or there, but absolutely all matter) being all matter?
Universe is called universe because at one point it was thought that it was everything. Then we realized that if there is no possible way to ever observe or interact with another area, then those are separate universes. You are playing word games and he (and I) are calling you out on it.
> Universe is called universe because at one point it was thought that it was everything.
That's wrong. Universe is called universe because one wanted to convey that one is talking about everything as a whole. You don't even need to know what all belongs to it to talk about it. That's the beauty of this concept.
> Then we realized that if there is no possible way to ever observe or interact with another area, then those are separate universes.
You use the word "then" but the latter doesn't follow from the former.
I would call this a word game or trick, not what I'm doing. But I guess you're not even interested in the point I made with my original comment, but only in calling me out for something.
The meanings of "universe" are different in different contexts. If you read some stuff from the early 1900's you'll often find "universe" used the way we use "galaxy" today. The very word "multiverse" implies that there are multiple universes. (Which doesn't prove that this is true, but proves that the word can have a different meaning from "everything". I've occasionally encountered the term "cosmic all" to mean everything that exists.)
You are right. And I'm aware of all that.
But the discussion is not about different parts of the universe, or different "universes" in a "multiverse" but different, well, universes.
The participants don't recognize this, because they confuse epistemic possibilities (mathematical descriptions of different universes are no different universes, they are just ideas of different universes) with reality (the one universe that can have only one exactly right mathematical description (not counting how many equivalent formulations of this description there might be)).
Inside your mind, though, there's an epistemic approximation which corresponds to the world you live in... but how do you know that the world you live in is the "real" one, or even bears the vaguest similarity to it, rather than being some simulation based on simple, arbitrary rules with results complex enough to evolve fractal models of itself? As Conan put it,
"Let teachers and priests and philosophers brood over questions of reality and illusion. I know this: if life is an illusion, then I am no less an illusion, and being thus, the illusion is real to me. I live, I burn with life, I love, I slay and I am content."
> But the discussion is not about different parts of the universe, or different "universes" in a "multiverse" but different, well, universes.
I think the discussion is absolutely about different parts of the universe.
The idea was that the universe was everything mathematically possible, and the different parts would be what is implied by different mathematical structures.
BB is wrong. His argument, as I understand is, is as follows. Let R be the number of real people and B be the number of Boltzmann brains. Then the chance of being a Boltzmann brain is B/(B+R). But since there are an infinite number of possible worlds, B and R are both infinite. So, using the symbol ∞ for infinity:
B = R = ∞, and
B/(B+R) = ∞/(∞+∞) = (1*∞)/(2*∞) = 1/2 * (∞/∞) = 1/2 * 1 = 1/2
The math is wrong because, as BB realizes when he sets B = R in the first place, infinite cardinal numbers don’t behave like finite numbers. (2*∞) is equal to ∞, and (∞/∞) is undefined. So the math should be:
B/(B+R) = ∞/(∞+∞) = ∞/∞ = undefined
That means that to get a probability, you have to go back to the drawing board and formulate the question differently. I would take the limit as the number of worlds approaches infinity, giving a probability of 1/(1 + 1000000000), but I think you need to use measure theory if you want to be rigorous.
What are you taking the limit of? The worlds aren't spatially arranged. Plus, if they were somehow, you could change that result just by moving the worlds around.
The ability to rearrange to change the answer assumes that you have some case where there are (infinitely) more Boltzmann brains than human brains. But I haven't seen a good argument for that - especially if we're discussing the original claim that we might be Boltzmann brains, so that we need to posit that we care about not just brains, but ones in scenarios where we could not realize we are one of them. (And this is even more true if we're using any kind of reasonable simplicity weighting over mathematical structures, though that gets into much deeper speculation!)
It doesn't assume that. It just assumes they're the same cardinality.
OK, but saying they are the same cardinality assumes that there are infinities in the relevant universe, which is something we've debated elsewhere, but is a critical assumption. (Or it assumes that it would make sense to do ordering of minds across infinite universes - but any specific mind making the calculation presumably shouldn't do that, since whether it is a Boltzmann brain or not, it exists in a specific universe!)
I was responding to Scott’s formulation, but to avoid confusion I’ll switch to discussing what you wrote: “Suppose that there are infinite galaxies. Each galaxy has 10 billion people who are not deceived and just one person who is deceived.”
I assume that “infinite” in this case means countably infinite, so there are one-to-one mappings from the set of natural numbers to the set of all galaxies. Choose any such mapping. It doesn’t matter which mapping we choose because the result will be the same for all of them. The total population of galaxies 0 through N - 1 will consist of N * 10 billion people who are not deceived, and N people who are deceived. Thus, for any N > 0, the fraction of the total population who are deceived will be 1/(1 + 10000000000). The limit as N goes to infinity will therefore also be 1/(1 + 10000000000).
If different galaxies have different ratios of people who are not deceived to people who are deceived, then taking a limit won’t necessarily work.
The more general issue is that when you write that, “even if deceived people are rarer and non-deceived people are common, the number (measured by cardinality) of deceived people will be the same as the number of non-deceived people,” what that really means is that the claim that “deceived people are rare” is not a statement that can be formalized in terms of cardinality. That doesn’t mean that it cannot be converted to formal mathematics; it just means that you have to formalize it using a different mathematical concept. I used a limit, which is essentially a hack to make the rules of normal arithmetic apply by replacing “infinity” with “arbitrarily large finite numbers.” As I noted, it would probably be cleaner to formulate the statement using measure theory.
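The galaxy calculation can be written out directly: with 10 billion non-deceived people and exactly one deceived person per galaxy, the deceived fraction over the first N galaxies is constant in N, so the limit is trivial. A sketch of that construction:

```python
# Fraction of deceived observers among the first N galaxies, where each
# galaxy holds 10 billion non-deceived people and exactly 1 deceived one.
# The fraction doesn't depend on N, so the limit as N -> infinity is just
# 1 / (1 + 10_000_000_000).

PER_GALAXY_OK = 10_000_000_000   # non-deceived people per galaxy
PER_GALAXY_DECEIVED = 1          # deceived people per galaxy

def deceived_fraction(n_galaxies: int) -> float:
    deceived = PER_GALAXY_DECEIVED * n_galaxies
    total = (PER_GALAXY_OK + PER_GALAXY_DECEIVED) * n_galaxies
    return deceived / total

for n in (1, 1000, 10**6):
    print(n, deceived_fraction(n))  # the same value for every n
```

If the ratio varied from galaxy to galaxy, the running fraction could fail to converge, which is exactly the caveat about the limit not always working.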
He literally wrote it himself, even in italics:
"So long as the number of deceived people is not a function of where the people are located".
He makes the assumption that this is not the case, and then he is right.
But you make this assumption, by assuming that simpler universes get more weight. Then you are right.
(The word "number" in his sentence can be replaced by "weight". But I guess it's awkward to talk about the weight of people. The comment section would flame up into a culture war about fat-shaming.)
(i'm not a mathematician but) This feels like something that might be a Philosophy Of Math question more than a Math question
tangentially related here's a textbook on phil of math by one of my teachers at UofT: https://annas-archive.org/md5/4597bc58ff35e11e2539335f7f193e87
Yeah, I'm pretty sure this debate is literally the frontier of human inference
Not enough information to answer. There is no "the chance" until you impose a measure. The same is true of finite sets, it just doesn't look that way because people always assume we're talking about the discrete uniform measure if not otherwise specified.
If you randomly choose an integer what's the chance it's a multiple of one billion? Well, it depends on what exactly you mean by "randomly choose an integer", but according to any remotely natural definition the answer is either one-in-a-billion or undefined. Any definition according to which the answer is 50-50 will pretty much only ever come up if you specifically select it for that purpose.
This is not my claim. I think it's undefined.
If I'm understanding him right, then he's wrong. His argument seems to be that any infinite set is equal in value to any other infinite set. Even without understanding the special math that dictates the rules of infinities, this breaks down into absurdity pretty quickly. I could just as well say that each world contains 3 types of people - 900 million right-handed people, 100 million left-handed people, and one Boltzmann brain. And therefore, by his same logic, the chance of being a Boltzmann brain is now 1 in 3 instead of 1 in 2, just because we have now separated all beings into 3 groups instead of 2. And of course, we could just continue dividing the millions of real people into more and more groups, each of which would constitute an infinite set.
You're arguing intuition from finite sets to infinite sets.
For example: what percentage of numbers are divisible by 1 billion? You'd argue that it's one-in-a-billion, by taking the limit of the proportion of such numbers as n approaches infinity - but that depends on how we order the numbers; I can put aleph-naught numbers that are divisible by 1 billion in a row to start. (And making this worse, minds or universes don't have any natural order!)
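The reordering point can be made concrete with small numbers (evens instead of multiples of a billion, purely for speed): under the usual ordering, half of an initial segment is even, but an ordering that lists two evens for every odd pushes the running fraction to 2/3 - same two sets, different "density":

```python
# Natural density depends on the ordering. Under the usual order, half of
# the first n naturals are even; enumerate two evens per odd instead and
# the running fraction of evens tends to 2/3. Same sets, different limit.

def evens_fraction_reordered(n_blocks: int) -> float:
    """Running fraction of evens in the order 0, 2, 1, 4, 6, 3, 8, 10, 5, ..."""
    seq = []
    next_even, next_odd = 0, 1
    for _ in range(n_blocks):      # each block: two evens, then one odd
        seq += [next_even, next_even + 2, next_odd]
        next_even += 4
        next_odd += 2
    return sum(1 for x in seq if x % 2 == 0) / len(seq)

print(evens_fraction_reordered(100_000))  # close to 2/3, not 1/2
```

Every natural number still appears exactly once in the reordered list; only the enumeration changed, and the "probability" changed with it.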
I think the simplest not-wrong answer is that the problem is not well-defined, and that trying to use it as an argument in favor of God (or anything else) is not valid.
The fact that there’s so much math Scott hasn’t learned is, from a certain standpoint, *phenomenal*. Since virtually everyone else with Scott’s clarity of thinking *does* know about things like asymptotic density, Scott is as fascinating for a STEM person as a fluent English speaker raised by wolves would be for a linguist. When, as almost always, Scott gets things right anyway, he shows how it’s possible to do that without formal apparatus, like Archimedes calculating crazy surface areas and volumes with no calculus or even algebra in the modern sense. And in the rare cases where Scott gets something wrong, we can take pride that the formal apparatus we learned is apparently good for something after all!
Yeah, reading this I kept wondering if Scott actually doesn't know measure theory, or if he just decided to avoid using math language in order to make the writing clearer for non mathematicians.
tbf a lot of things can have physical intuitions that normal people understand, if they get the right analogy, more easily even than a mathematician understands the formal math equivalent.
E.g. my eyes glaze over at the formal version of the squeeze theorem. But if you ask me: "You're a firefly flying through a tunnel. At any given distance can you ever go above the ceiling or below the floor?", I'll know the answer!
To be fairer, getting the right analogy as consistently as Scott does is what other-Scott is referring to.
yes
(my comment was meant as a general point re: math not as a point about scott)
If there are infinitely many Scotts, what are the odds that this one was raised by wolves?
I just hear this stuff from the people who do understand math and repeat what they say.
No, no, Scott!—I used to have this problem too, though, so allow me to assist: what we /actually/ say here is "Aw, shucks. Thanks!"
Easy-peasy (lemon-squeezie). 🙂 Now you try!–
Yeah, I'm going to agree here. Scott says he doesn't actually understand, he just applies the informal understanding from hearing and understanding what others have said to new domains correctly, which seems like it's the same thing as understanding.
Chinese Room all over again 😁
Hi Scott, sorry to put you on the spot, but I'm curious if you have any new thoughts on the Tegmarkian view since you last(? that I can find, anyway) wrote on it in 2014?
I feel like some of the ideas you moot in Ghost in the Quantum Turing Machine might be relevant ie on what it means to actually run certain kinds of computation; and I trust you to think about what incompleteness might mean for a Tegmarkian more than I trust myself.
No pressure, and sorry to impose, but I feel like you're the person who could add the most value to this conversation of almost anyone.
It seems odd that you can use a random draw from an uncountably infinite set to simulate a random draw from the countable infinite set of integers. Very interesting though. Maybe the entire conversation is just suffering from lack of crazy advanced math courses.
Yeah, I was a math major back in the day but I'm weirded out by the fact that apparently I can make a uniform choice of a real number between 0 and 1 but not a uniform choice of an integer. And I'm thinking that means I can't make a uniform choice of a *rational* number between 0 and 1, since there is an easy mapping between that and integers.
The mapping would break the uniformity, though.
Can you uniformly choose a real number between 0 and 1? How would you do it?
Possibility 1: you reel off a long string of digits. Obvious non-starter, you can spend the rest of the life of the universe reeling off digits and you'll never escape the rationals.
Possibility 2: you attempt to choose via some kind of physical analogue, like pointing at a particular point along a physical line. But you haven't _really_ specified a number then, you've just pointed your big fuzzy physical finger at a general region. Physical reality is made of big dumb fuzzy things like atoms and electrons and I'm not sure how you'd go about finding something that can be unambiguously defined (not even measured mind you, just defined) to arbitrary precision.
Possibility 1 is the correct one. If you're interested in practical applications of a random number between 0 and 1, you can continue this process until you get a number of arbitrary precision. If you are not, then you don't need to actually "spend" any time generating one.
Not sure what you meant with regard to rationals. Those would happen if at some point the string of digits enters a cycle which repeats forever. That has a probability of 0 (also known as "almost never"). Nothing strange here.
The rationals include any number that can be expressed with a finite number of digits, not just those that repeat. Any number which you can express by writing out digits is a rational number, but there's vastly more reals than rationals.
If you were to uniformly pick a real number between 0 and 1 then it will almost certainly not be a rational one. But the digit-picking process can only produce rational numbers.
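What the digit-by-digit process actually hands you can be sketched: after any finite number of steps you hold exactly the rational p/10^n, a finite-precision approximation, never the (almost surely irrational) real itself. A sketch using Python's `Fraction` to make the rationality explicit:

```python
# Reeling off random decimal digits: after n steps you have exactly the
# rational p / 10**n. Every number this process can actually deliver in
# finite time is rational; the irrationals are never reached.

import random
from fractions import Fraction

def random_digit_prefix(n_digits: int, seed: int = 0) -> Fraction:
    """The exact rational value of n_digits random decimal digits after '0.'."""
    rng = random.Random(seed)
    p = 0
    for _ in range(n_digits):
        p = 10 * p + rng.randrange(10)
    return Fraction(p, 10 ** n_digits)

x = random_digit_prefix(30)
print(x)         # a rational whose denominator divides 10**30
print(float(x))  # its float approximation in [0, 1)
```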
For all practical purposes, that's another question.
Since you're talking about expressing the number in digits, you have to say "in any appropriate integer base", otherwise 1/3 isn't a rational number.
I said they include any number that can be expressed with a finite number of digits, not that they are exclusively composed of them.
If I'm not misinterpreting anything, you can't actually make a uniform choice of a *real* number between 0 and 1. You can make a choice of a *float* number - an approximation of a real number to some accuracy. The trick is that for every given degree of accuracy there is a finite amount of float numbers between 0 and 1.
Actually, one can create a mapping, but not one with a reasonable order, and I think it may require continuity. And you've got to be able to pick numbers with an infinitely long (decimal) representation. So it's not a very useful mapping.
FWIW, I normally decide that nothing containing infinity is real. It's only a convenient way to talk about real things. So the first problem is the assumption that there are infinitely many integers. You can argue based on ANY particular number as the maximum, and it often doesn't matter which, as long as it's a large enough number, but you can't argue based on infinity. Not about "reality". I normally decide that 10^66 is large enough, but which number you need to use depends on the problem. Math is a *map* of reality, it isn't reality.
I have never been very good at infinite-adjacent reasoning, but if you can make a uniform draw from the reals in (0, 1), doesn't that mean you can also make a uniform draw from the integers? e.g.:
1. Draw a number from the reals in (0,1)
2. With probability approaching 1, this number will be in the form 0 point [infinite number of random digits]
3. Chop off the "0 point"
4. Et voila?
I know they are different cardinalities so there must be an error somewhere - but what I really mean is, how is the above argument different from the one Scott gives?
That doesn’t work. As you say, what you get is an infinite string of digits, which is not a natural number — every natural number has a finite number of digits. Even if you stripped off any trailing zeros (however that would work), then for instance it would be impossible to pick a power of ten.
Thanks - seems obvious now! And confirms that I remain not good at reasoning about infinities.
How about this, then?
1. Draw a number from the reals in (0,1)
2. Divide 3 by that number
3. Round to the nearest prime, or nearest integer and take its largest prime factor
4. Repeat steps one through three, add the two prime numbers together
5. Assuming the strong Goldbach conjecture holds, divide that sum by two, and subtract some constant to line up with your preferred lower bound. Alternatively, if you want to include all negative integers, flip a coin.
Toss a coin infinitely many times.
Treat the infinite sequence of heads/tails as the binary expansion.
This gets you 1 random sample from an uncountable infinity.
Tossing N coins gives you 2^N options, so to sample an infinite set by tossing coins, that infinity needs to be an exponential of a smaller infinity.
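A finite truncation of that construction is easy to sketch (Python; N = 32 is an arbitrary choice). The catch, as the reply below notes, is that N tosses only give resolution 2^-N; only the unrealizable infinite sequence of tosses pins down an arbitrary real:

```python
import random

# N coin tosses give the first N binary digits of a draw from [0, 1).
random.seed(1)
N = 32
bits = [random.randint(0, 1) for _ in range(N)]  # each toss is one binary digit
x = sum(b / 2 ** (k + 1) for k, b in enumerate(bits))
print(f"{N}-toss approximation: {x:.10f} (resolution 2**-{N})")
```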
You can't toss a coin an infinite number of times, and you definitely can't carry out any procedure that has an "and then" after it.
Also as long as I'm jumping in with early comments, I was going to link to Sarah's old Twitter thread about how a lot of the claims about pain asymbolia are likely wrong and the whole thing is probably misdescribed, but she appears to have deleted it. Well -- go ask Sarah about pain asymbolia. :P
A randomly selected person named Sarah from the set of all those currently alive, or...?
Sarah Constantin, to be specific. Uh I don't know that *you* in particular should go pestering her about pain asymbolia :P
Ross Douthat has gotten "refuting proofs of the existence of God" and "arguing for the nonexistence of God" confused, huh? :-/
i am not so sure it has been his singular or greatest confusion
How are they any different? Any attempt to falsify God's existence is inarguably apostasy.
I'm a little unclear on what you're trying to say here. Are you just asking "How is refuting an argument for X different from making an argument for not-X?", and then also making a note about apostasy, or are you saying that you don't *care* about the difference because both are apostasy?
In either case, the bit about apostasy is... uh... unlikely to be convincing to anyone who doesn't already agree with you. Not a great way to argue your case! And if it's the second you meant, then, well, I have no particular reason to pay attention to your comment, do I? As evidently it's not trying to convince me.
If it's the first, and you are actually confused about this basic distinction... generally, an argument for X attempts to establish the truth of X, starting from a position of ignorance about X. If an argument for X is refuted, however, that doesn't generally provide an argument for not-X; it just means that that *particular* argument doesn't work. Other arguments for X may still work, and even if none do, refuting arguments for X still cannot establish not-X. When one incorrectly concludes that because an argument for X was false, X must be false, this is sometimes known as the "fallacy fallacy".
Now granted the above paragraph is basically implicitly treating real-world arguments as being roughly like mathematical proofs, when in reality, one would want to use a more Bayesian perspective, where refuting an argument for X *would* raise the subjective probability of not-X, and refuting all such arguments might "establish" not-X depending on priors, although of course really from a Bayesian perspective you don't truly "establish" anything. See Yudkowsky's sequences, etc. But these distinctions are still useful even outside of mathematics because they do bear on how much evidence a given refutation or counterargument provides.
For a non-mathematician, all the arguing by the mathematicians in the comments is like the arguing of theologians, and does lay itself open to the same jibe about "how many angels can fit on the head of a pin?"
Are you arguing about real things here? No? Yes? If they're all just mental concepts, who cares?
The answer is that Tegmark's mathematics are indeed about real things; but if you're religious, so are angels, and therefore, if angels and pins both exist, whether they can interact is a valid question.
The problem with the question of how many angels can dance on the head of a pin is not just that it's unknowable, but that it's one of the least important questions you could possibly ask about angels, if they were real.
Stuff like how were they created, what powers do they have, do they intercede in the lives of mortals, do they have any separate free will of their own or are they purely controlled by God's will, if they're purely controlled by God's will then what's the need for them given that God is omnipotent, et cetera. These are the important questions about angels.
I think that the people in this thread are at least asking the important questions about mathematical universes, rather than focusing on irrelevant minutiae.
"I think that the people in this thread are at least asking the important questions about mathematical universes, rather than focusing on irrelevant minutiae."
For people familiar with maths, yes. For those of us ignorant, it's all meaningless words words words. "Is there such a thing as infinity in reality or is it only a mathematical construct?" Angels. Pins.
That's not what the word means.
I can easily produce an incorrect proof of a correct statement. Therefore refuting a proof is not the same as proving the claim wrong.
But in this particular instance, the refutation is often taken as proving the claim wrong.
True for those worrying about divinity, perhaps. Not at all true for those of us worrying about proofs.
And you're right that they're wrong to do that. But it does serve as a good reply to someone who presents the proof as the reason why you should believe this or that.
Attempts to refute attempted proofs of God's existence are not attempts to falsify God's existence. These were in fact one of the core topics of medieval universities when they were run by theologians - they wanted to ensure they only used good proofs of God's existence, not spurious ones, so they always appointed someone as the devil's advocate in any disputation to try to refute the proof, and see if it could be refuted effectively.
And yet even in that scenario, the devil's advocates existed purely to strengthen proofs of God. In other words, strawmen. Anyone seriously implying that the existence of God was anything but an absolute truth is just a heretic.
If you use your brain you're always someone's heretic. It's a been a badge of honor for a long time already.
Upgrading Scott's conversion date to 2030. Probabilities remain 70% Catholic, 20% Orthodoxy, 10% other (most likely very high church Protestant). I would put Orthodoxy higher but I don't think he will want to give up on Scholasticism.
Just curious, what were the numbers before this update?
Denomination probabilities unchanged. Previous date was 2032.
I think that if you convinced Scott that God exists, he'd become a religious Jew, not a Christian.
If you had asked me a year or two ago, that’s what I would have said, but with the number of posts in the past few months that engaged with specifically Christian arguments (including some bad ones based on claims of early miracles), I’m now wondering whether he’s already somewhat Christian.
It's probably a lot easier to argue with people on here about Christian arguments, given that the cultural water of Western Europe and the USA is Christian-derived, than to argue with fellow Jews about fine points of Judaism as to how they apply in the everyday life of those who read ACX. If you're anxious about Sabbath-keeping, then deciding what counts as the eruv is going to be both fascinating and relevant to you, but not as broadly applicable to the wider audience as "so the Trump clampdown on immigration violates principles of 'love thy neighbour as thyself'".
Scott will never convert to any brand of Christianity, he's going to remain Jewish and if he does come to some version of belief in God, it will be within the bounds of Judaism. C'mon, give the guy a break, don't tease him about this because I'm sure he's fed up of it by now.
(Yeah I'd love if he became Catholic but I'm not dragging him to the baptismal font, and God will judge us all in the end on our own merits so let that be).
You know, a shocking number of my friends, both from LessWrong and elsewhere, have converted to Catholicism.
Scott undoubtedly doesn't remember me from back then (possibly because I'm not using the same username, so how could he–), but we had a few pleasant interactions—e.g. we both generally agreed with Eliezer on things, but the one time I was like "no wait Yud seems wrong about this" (in re either animal consciousness or dust specks, IIRC), Scott (as Yvain) had /also/ disagreed: a welcome reinforcement, because arguing with Eliezer could be somewhat... frustrating—so I'm tentatively including him in the group "/could have been/ a friend (if I weren't so durn lazy & reclusive)"; and I interpret this to mean that a conversion isn't /totally/ out of the question. 💡
I've always personally had some sort of affinity for the Church, meself. I grew up completely non-religious(ly), but my mother—convinced both that a) I would be terribly bullied in a public high-school, and b) that religious people were happier—(probably she was right about the former; not sure about the latter)—placed me in a Christian (Protestant) school.
This ensured that I would never, ever become a Protestant. They liked trashing on Catholics, but my very first real girlfriend was a Catholic girl (from an all-girls Catholic school! the stereotype is gloriously true, boys, I'm here to tell you that!–) and took me to Mass with her once. I did not know that I was supposed to refuse the Host, so when I was in front of the nun handing 'em out, I plucked it from her fingers, nodded my thanks, and ate it right there. Oops. (To her credit, she didn't turn a hair. I think she was about 100 years old, so perhaps she had just seen it all already–) When we got back to our seats, my C.S.G. said "omg Kvel you uh weren't supposed to do it like that", in a fit of giggles... I was miffed that she hadn't warned me!
Anyway, the point is: both in the books I read, which were heavy on the European history and historical fiction sides of things, and in my personal estimation & experience—that of a solemn & dignified service at a glorious cathedral, with ritual (something I've always loved) and the weight of years behind every element, vs. that of snotty, grasping (donate for the LORD! donate! donate!), awkwardly-"modern-&-kewl!", Christian Rock(™!) service at Protestant churches; and, too, even of doctrinal differences: e.g., Purgatory and the "righteous pagan" vs. "even the best & kindest of all humanity will BURN IN HELLFIRE FOREVER if they don't have FAITH!"—I found I much preferred the Catholic stuff.
My mother—such a thoroughgoing atheist that she was unable to even pretend to believe (she tried, when I was small, again due to the "if he grows up religious he'll be happier"; she'd had an unhappy life, as you might be able to tell, and this was just one more way to ensure—in her mind—that I... didn't)—has /also/ converted recently. Keeps bugging me to too.
To be honest, I'm seriously considering it(!); I love everything about the Church, really... the sticking point is that I just have a really difficult time with, uh... a lot of the Bible, you might say. The OT in particular.
It is hard to accept that this fellow going around deeply concerned about propagating the ceaseless slaughter of innocent animals and the same of non-Judaic tribes, and about the sexual practices of His people & whether or not they're mixing types of fiber together on their persons & if they're getting too close in proximity to menstruation & ... etc., etc. ... is /really/ the portrait of the omnipotent & omniscient God of All; /really/ the way He would like to be represented.
It's blasphemous, I know, but---I can't help but feel like "if I were to write a Book for All Mankind Forever, I'd do it a little differently", y'know?
Not to kick the Protestants, but some denominations do seem to put more emphasis on the Epistles of St Paul than the Gospels, to the point that I can see why people go "It was Paul who invented Christianity".
No, he didn't, but the post-Reformation emphasis on using texts from Paul to back up their positions made it seem that way.
Knox translation?
That's extremely interesting! I wasn't aware of that; I knew I preferred much Catholic /doctrine/, sure---but most of my reading on the Church has either been from medieval texts or incidental (I really love Papal history, for example, but it doesn't tend to go much into /current/ Catholic stances, heh)... so this is very intriguing---and welcome---news.
--------------------------
I remember, at my (Protestant) Christian high-school, a science class wherein the teacher informed us that "stellar evolution" is just as much a lie as is "Darwinism". Stars do not change; supernovae & black holes are snares laid by the unbeliever---ably assisted by the Enemy, no doubt---and "you will never see these atheist 'scientists' ever come with direct evidence of either one. Why? Well, because... THEY DON'T EXIST!"* [cue derisive laughter from teacher & students]
I argued, naturally. By sheer chance, this was not long after one of the first direct imagings of a black hole, and I had /just read/ some articles about it in a recent issue of Astronomy & of... some other magazine (Popular Science, maybe); I said "I will bring them next class."
I thought that the instructor would be impressed by my knowledge & initiative, and would eagerly read through the magazines---perhaps even right-then-&-there---and we would all have a fun, enlightening conversation about astronomy and stuff. :)
Surprisingly, no. In fact, he did not seem at all pleased, for some reason; he took my magazines, said he'd return them once he read them but not to bring it up in class again before that, reiterated to the class that "black holes are not real no matter how many magazines you bring in [gives me a look]", and... well, I never saw those particular issues again, heh.
This sort of thing happened quite a bit, although I learned not to argue back (...sort of--); memories of biting my tongue repeatedly, as arguments such as "they want you to believe that YOUR GRANDFATHER was a MONKEY" were advanced, are engraved deep in my mind...
...so /that/ was my dominant impression of Christianity, for a long time. Hence: I'm more intrigued than ever by the Church, now, after reading your comment---maybe I ought to take that RCIA class thing after all...
--------------------------
*(I mean, black holes & supernovae, presumably---not atheist scientists. He definitely thought /those/ existed.)
catholicism has issues because of that beautiful, ancient church filled with artistic merit and appealing to fine sensibilities.
the bible is messy. Jesus spits on the ground and makes mudpies to heal someone. The father of the church hacks somebody's ear off with a sword, and Jesus heals the man out of frustration. John ran away bare ass naked from the guards who came to capture him.
even the jews lol, the ot is hilarious in that much of it is God going wtf why are you jilting me for yet another God? Ezekiel, go lay on your side for a year and bake food over a fire burned with human shit to remind them who is their God.
idk how we got from homeless Jesus getting accused of being a drunk who hangs around prostitutes to this.
That's why the Eucharist, though. It's not a symbol or an ordinance or 'you ingest Jesus in some spiritual fashion via faith', it's material bread and wine and the Body of Christ. God interacts with us through the material, hence sacraments.
the eucharist as i understand it is more a theological explanation of how a believer is justified or in right standing with God through partaking in a ritual. the experience is the ritual which believers do "in remembrance of him."
but the "christ is physically present" is there to say "God forgives and changes you when you enact it lawfully" and thats another issue. it leads to a whole scaffolding of explanations that try and tell us how God does what he does as they reason out his mysteries.
not sure that is the way
Okay, this is where the rubber hits the road. The Eucharist is not just a ritual or commemoration. Let me make it clear from the outset that I'm one of the lunatics who believes in the Real Presence and in Transubstantiation (which admittedly is our 'best guess' theory and not at all the full explanation).
But when Jesus said This is My Body, this is My Blood - the bread becomes flesh, the wine becomes blood. It's not just "do this ritual in remembrance of me", it's the whole "this is a hard saying" thing where many of the disciples up and left after He said all this:
John 6:
"51 I am the living bread that came down from heaven. If anyone eats of this bread, he will live forever. And the bread that I will give for the life of the world is my flesh.”
52 The Jews then disputed among themselves, saying, “How can this man give us his flesh to eat?” 53 So Jesus said to them, “Truly, truly, I say to you, unless you eat the flesh of the Son of Man and drink his blood, you have no life in you. 54 Whoever feeds on my flesh and drinks my blood has eternal life, and I will raise him up on the last day. 55 For my flesh is true food, and my blood is true drink. 56 Whoever feeds on my flesh and drinks my blood abides in me, and I in him. 57 As the living Father sent me, and I live because of the Father, so whoever feeds on me, he also will live because of me. 58 This is the bread that came down from heaven, not like the bread the fathers ate, and died. Whoever feeds on this bread will live forever.” 59 Jesus said these things in the synagogue, as he taught at Capernaum.
60 When many of his disciples heard it, they said, “This is a hard saying; who can listen to it?” 61 But Jesus, knowing in himself that his disciples were grumbling about this, said to them, “Do you take offense at this?"
So to say "Christ is physically present" is much, *much* more than to say "God forgives and changes you when you enact it lawfully".
It is a Mystery, who can understand these things? 😁
I think part of the appeal in Catholicism may be all that Scholasticism, which enables a lot of word-wrangling. You do get equivalent denominations which like to do the same, such as Calvinism, but Calvinism can be a bit grim unless you scrap a lot of the emphasis on the negatives of double predestination.
So if you're smart and like arguing about concepts, there's a smorgasbord laid out for you in Catholicism.
the problem is when you go to a "high" church, you quickly find its two extremes:
1. A heavily intellectualized magisterium class that may actually subvert the fundamentals of the faith if not watched.
2. a laity that just does it because their parents did. They'd be Elks or Freemasons if their parents had been, too.
you have a small "convert" class that does experience something and is devout but i think a lot of the 60s fundamentalist revival in the states happened precisely because high churches ossified into this. its so bad in new england that all those elegant churches rent themselves out to fundamentalists because they can't actually maintain congregations
Absolutely---I find that, at least here in the American South, there is a strong current of anti-intellectualism in the Protestant churches; as someone who /loves/ word-wrangling (nice term, there), this is very unappealing to me.
One thing I always liked about Islam is that... well, I love language, and I love history; if I'd chosen better, as a young man, I think I could have easily acquired a PhD in historical linguistics or the like, and spent all my time reading, writing, & thinking about that sort of thing---like I do now... except I'd get paid for it, and published, and invited to conferences to speak upon it...
...whereas now---being outside the gleaming Red... er, ivory! I mean ivory!... halls of academia---no one really cares about that sort of stuff; the co-workers & friends I speak to don't have any interest in it, the ladies I eye are not impressed by it, the employers I meet won't pay for it, the ... etc. Interest in philosophy, language, history: a thankless card to draw, of no benefit even socially.
But in the Muslim community, ah... with the Muslims, if you're real good at fine distinctions of philosophy, if you've a facility with Arabic, if you possess talent for argument & exegesis, you're /hot stuff./ (The price, of course, is that the philosophy, exegesis, and language you must study are all of the Islamic variety; they do have some inherent interest, but to be the requisite degree of monomaniacal, I think one might have to really believe in the religion...)
Similarly, there's /one community/ in the world---outside of certain ever-shrinking, ever-more-irrelevant uni departments---that will be impressed, excited, interested, and perhaps even remunerative, if you spend all your time studying Latin & thinking about medieval theories of semantics: the Church, of course. THIS is a big draw for me, heh. If I could ever sincerely convert, I might try for seminary. If you're gonna go, go hard, I say!
--------------------------
Somewhat related, somewhat apropos-of-nothing: Protestants, and atheists, like to bag on Catholics for the doctrine of transubstantiation---but I remember being quite young (late teens?) & reading some philosophical explication of the concept; to my utter amazement, /it made total sense./
It just requires one to approach it upon its own terms, rather than from the angle that modern thought might suggest; if you're not familiar with ontology & metaphysics---particularly the Aristotelian/Thomistic varieties---then you're probably going to conceive of the doctrine incorrectly. It's not even really obscure or difficult (like some Trinitarian theology is, heh): understand "accident" vs. "substance", and it all falls into place!
I dunno---no real point here; I just thought it was interesting. Surprised the heck outta me, at the time!* (With the modern bias against historical thought/thinkers, I've found that most also don't realize that any "obvious" quick objection one might think up has /definitely/ already been considered by multiple Catholic philosophers, heh.)
--------------------------
*(unfortunately, the heck came back with a vengeance)
> It is hard to accept that this fellow going around deeply concerned about propagating the ceaseless slaughter of innocent animals and the same of non-Judaic tribes, and about the sexual practices of His people, and whether or not they're mixing types of fiber together on their persons or being in too close proximity to menstruation, or... etc., is /really/ the portrait of the omnipotent & omniscient God of All; /really/ the way He would like to be represented. It's blasphemous, I know, but I can't help but feel like "if I were to write a Book for All Mankind Forever, I'd do it a little differently", y'know?
If you really like Christ but not the OT, you could always fall back on the (non-standard, non-Catholic) view that the weird fellow of the OT was not much of a God at all, more like a reification of the Jewish tribes' own ideas of an ideal supernatural leader. That doesn't preclude Christ from being divine and the NT from being a real revelation if that's your thing. All the bits where they try hard to tie Jesus to the old Jewish prophetic tradition can be explained away as cultural emergency patching at the time.
From there on you could also give up on the unicity of the incarnation, and end up with a kind of universalist view where you get to choose your preferred divine symbol out of potentially many (or maybe it chooses you).
This seems sort of like the Gnostic view, to me---and, I admit, it does have appeal. The NT reads, to me, as if it does have /something/ of the numinous in it; that is, out of all the holy books I've read, I have felt that that which had the greatest proportion of something-which-could-be alien & divine is the NT (in particular, the Gospels).
That said, though: what motivates either the "Gnostic" view, /or/ the "universalist" one? I.e., why are we going for /either one/, instead of just throwing up our hands & saying "material monism & human authors, that's all I see 'round here"? I have difficulty with this.
Well Gnosticism is a whole historical movement with a complicated history, but these days it mostly stands for a form of spiritual conspiratorial thinking: there's something wrong with the world, and it's the fault of a kind of trickster lesser God. I don't find this very appealing; evolution and game theory are plenty enough to explain most of the ways in which we commonly find the world persistently unpleasant - what the Buddhists call 'dukkha'.
As for why not just materialism, it's really a question of sensibility and preference. No view is free from difficulties, materialism included; it rather abruptly denies any depth or power to subjective experience, awkwardly attempting to subsume it under measurable, objective facts. Yet our actual experience is the other way round, if we pay attention: apparently objective things only ever appear within a field of subjective experience. There's a whole strand of thinkers that take this kind of thing seriously, check out for example Bernardo Kastrup or Arnold Zuboff, they have whole lectures on youtube.
Excellent point. Any further readings, figures, search terms etc. you'd like to recommend re: idealism (or "metaphysical realism"? sometimes it seems t'me that the terms "idealist" & "realist" are used in several—& almost opposite—senses, in philosophy) shall be gratefully received!
My prediction does not preclude a pit-stop in Judaism. But if Scott concludes that God exists and decides to practice a religion (Deism, or some other system where you acknowledge the Creator but don't change your behavior in any way, is a possibility), he's going to want to pick the correct one. He'll need to decide that Judaism's specific claims about God, both positive (He wants us to follow the Law, and a Messiah is coming) and negative (Jesus of Nazareth isn't that guy), are correct. Having accepted that the Old Testament is true, he'll need to come up with a specific reason to reject the New. You can't just say "miracles are impossible" or "the dead can't come back to life." So I just don't think he gets there, not with the way that he engages with Christian arguments. It's going to be all or nothing.
Sure, lapsed or never-practicing Jews sometimes rediscover their childhood or ancestral faith. But I imagine that's more out of a sense of spiritual longing. They connect with something familiar because it feels right. I don't think people reason themselves into Judaism.
> I don't think people reason themselves into Judaism.
Um, no, this is actually quite common.
Simplicity is often used to assert higher probability in "reality". In fact, Occam's razor is only a guide for humans on how to pick a model worth testing, mostly because a simpler model is easier to work with and test, not because it is likely to be more "real".
There have been some successes with simpler models, but mostly as stepping stones to something much more complicated. There have been plenty of failures. In fundamental physics nearly all "simple" extensions or replacements of the Standard Model of particle physics have experimental consequences that contradict observations. The same happened with all known extensions of General Relativity.
If you are a researcher and look critically over your own area, you will notice that "simple" is not a good approximation of "accurate".
I think you're talking about some sort of vague philosophy-of-science political debate. On a mathematical/technical level, which I think is what we're doing here, simpler simply *is* more probable, that's how math works.
Very simple example is that "the first object I will pick out of this hat is blue" is more probable than "the first object I will pick out of this hat is a blue sphere between 1 and 2 inches in size".
Slightly more complicated example: suppose that I am rolling ten d20s, and I will declare success if EITHER the red die comes up 20, OR the blue die comes up 5 and the yellow die comes up 3 and it's raining outside. I declare success. Which is more likely - that the red die came up 20, or that the conjunctive thing was true?
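The arithmetic in that example can be made exact (a sketch using exact fractions; the 50% chance of rain is my own assumption, since the comment leaves it unspecified):

```python
from fractions import Fraction

# P(red d20 shows 20)
p_red_20 = Fraction(1, 20)

# P(blue shows 5 AND yellow shows 3 AND it's raining); the rain
# probability is an illustrative assumption, not from the comment.
p_rain = Fraction(1, 2)
p_conjunction = Fraction(1, 20) * Fraction(1, 20) * p_rain

print(p_red_20, p_conjunction)  # 1/20 vs 1/800
assert p_red_20 > p_conjunction
```

Each extra conjunct can only multiply the probability by a factor of at most 1, which is the general principle being appealed to.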
I realize this seems cheating because I'm using easily-quantified things like dice rolls, but I think the same principle extends to everything else. The reason "OJ killed his wife" is more likely than "An international spy killed OJ's wife, then used supertechnology to fake the DNA" is because it's the conjunction of p(international spy) * p(has supertechnology) * p(wanted to do this), and we can expand each of those into more complex propositions in turn.
I'm probably not doing a great job explaining this - https://www.readthesequences.com/A-Technical-Explanation-Of-Technical-Explanation is slightly better although it's not really focusing on this question.
Right, the simple examples in well-researched areas work as you describe. Anything out there, like the MUH or, well, actual science, does not.
For example, the simplest extension of the Standard Model is the grand unified theory SU(5) -- it is wrong. SO(10) is also wrong. String theory is very simple... sort of, if you know the right math. It has failed every experimental test so far.
In your professional area, does Occam's razor work well? I guess, maybe, in most cases? If someone comes in talking about vegan trans alien-looking people trying to murder them with a katana, they are probably in psychosis and hallucinating, and need some antipsychotics stat. Unless you know a few more details about their lives.
The topic you picked for your original post is way out there, your normal intuition is pretty useless there. You simply do not have enough information to tell whether there are metaphorical Zizians out to get you. Well, there always are, but you do not know where they are hiding and how they look.
AIXI was a great start for AGI research, but it ended up being completely useless; real progress was messy and complicated, though it might look "simple" with the benefit of hindsight.
More examples: tiny animals causing disease. BS until you accidentally stumble on them by looking through an unrelated instrument.
Rationality, Transhumanism, frontier research is not something where a dice model is very useful. "the same principle extends to everything else" seems to be empirically wrong when that something else is a poorly understood area.
My understanding is that string theory has received zero empirical tests, and in fact has produced zero concrete predictions that even could be tested.
This is a bit of a contentious area. Supersymmetric partners were one famous prediction that high-energy physics researchers expected to be confirmed at the LHC with high probability. There were theory/phenomenology papers with confidence intervals and everything. The utter desert of what was found (the Higgs and nothing else) was a surprise to most area experts.
Other, less talked about predictions were refuted right away but quickly patched up, such as the predicted 10/11 dimensions (oh but we can fix this with Calabi-Yau manifolds, nothing to see here!).
The main reason String theory is still around is that there is nothing better, and it inspired holography, which rings true to most experts, though it has not been experimentally tested either.
"If someone comes in talking about vegan trans alien-looking people trying to murder them with a katana, they are probably in psychosis and hallucinating, and need some antipsychotics stat. Unless you know a few more details about their lives."
Ah - you wouldn't be referring to recent example of some fluttering in the dovecotes, would you? 😁
EDIT: Never mind, I see that you are. Well, in the Tegmark universe, every possible thing that can happen will happen!
I don't think the contention is that "the simplest alternative is always right", but rather "the simplest alternative /that fits certain criteria/—e.g., like 'doesn't contradict something else we have observed to be true'—is more likely to be right"; hence, your examples re: extensions of the SM don't offer a refutation of Scott's claim.
“Rationality, Transhumanism, frontier research is not something where a dice model is very useful.”
This is correct and probably why Scott and BB are arguing about Boltzmann brains. The reality is that most people here are clearly minds that operate on the frontier of knowledge. There are not enough frontier minds to get the data needed to even think a frontier mind follows any normal distribution!
For all we know, frontier minds are the only true agents, and while they're probably highly correlated with a type of brain structure found in the tail end of things like IQ and/or level of personal financial discipline, we don't actually know that.
Say we are in a simulation and the simulator(s) acquired the technology to make it naturalistically. It makes sense to me that, in order to save resources or to prevent observers from emerging in bodies more likely to suffer, agency would be granted only to the minds most likely to become a superintelligence, or to minds with better odds of making more minds that help build it.
I don't understand at all why we think we all won the lottery.
Your examples all involve a conjunction where one part of the conjunction is isomorphic to the entire alternative hypothesis. Your conclusions don't hold for examples without this special property.
We can make options complex in ways other than conjunctions. Which is more likely: The die will roll a 20, or the die will roll EITHER a prime number OR a perfect number OR an even number?
We can also make conjunctions that are still more likely than some non-conjoined alternatives. Which is more likely: I will win the lottery tomorrow, or the sun will rise AND the grass will grow AND the birds will sing AND I will blink at least 7 times during the day?
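The d20 arithmetic in the first example is easy to check directly (a quick sketch; "perfect number" here means a number equal to the sum of its proper divisors, so only 6 is in range, and it's even anyway):

```python
# Compare P(roll == 20) with P(roll is prime OR perfect OR even) on a fair d20.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_perfect(n):
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

faces = range(1, 21)
p_twenty = sum(1 for f in faces if f == 20) / 20  # 1/20 = 0.05
p_disjunction = sum(
    1 for f in faces if is_prime(f) or is_perfect(f) or f % 2 == 0
) / 20  # 17 of 20 faces qualify -> 0.85

print(p_twenty, p_disjunction)  # -> 0.05 0.85
```

The disjunction is both longer to state and seventeen times more probable.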
There's some point in The Sequences where Eliezer has a hypothetical involving an alien mind that thinks the universe is like drawing papers out of a hat (without replacement), such that each outcome becomes less likely the more times it's already been drawn, and when you ask them why they'd expect this theory to suddenly start making good predictions NOW, they answer "because it never has before!" (I think this was somewhere in the section about No Universally-Compelling Arguments.)
I don't think those are even the sorts of alternatives that Occam's Razor is meant to apply to. There's another point in The Sequences where Eliezer gives an example of a photon that escapes the observable universe in a way where no detector could ever possibly catch it, and asks whether we should believe that the photon disappears, or whether it continues in the way that photons usually behave. He says some might argue that it would be simpler if it disappeared, because then the universe would contain less stuff. Eliezer argues that the kind of simplicity that has proven to be an effective heuristic in scientific history is simplicity of rules, not simplicity of the universe's contents, and so we should believe that the photon continues normally. Here, you seem to be trying to apply it to "will A or B happen next?" examples.
I think Occam's Razor has a decent historical track record, but I don't think I've ever heard a strong theoretical justification for it. My first hit on Google says it's because the more assumptions you put in your theory, the more chances it has to be wrong about something, but I think that sounds more like an intuition pump than a serious argument. (This justifies why you shouldn't just add random extra stuff onto your theory for no reason, but doesn't justify preferring the simpler theory where one of them isn't JUST "the same thing but with extra stuff tacked on.")
Though, I did come up with this parable that makes it seem more intuitive to me that simpler explanations would have more weight. Perhaps you'd find it interesting? (Warning: I haven't run this past a mathematician.)
Suppose your teacher announces that she's going to hold a random drawing to decide what rules to use for the next class simulation, and sets out a stack of blank forms for kids to write down the rules they want. Further, she says you can make up more than one option if you want, but she's going to remove all duplicates before the drawing, so don't bother submitting the same option more than once.
Alice gets a form, sees that there's 500 character-sized boxes, so apparently your system needs to fit within that. She thinks really hard and comes up with a system she likes, and carefully writes it out, finishing with the "end program" instruction exactly at the end of the 500th box, filling the form entirely.
Bob gets a form, and thinks really hard, and carefully writes out a system that happens to take 499 boxes. After he hands it in, he gets a thoughtful look, then grabs a big stack of blank forms and brings them back to his desk. He writes out his entire system again, but this time, in the final, extra box, he writes "A" so that it's not a duplicate of his previous submission. Then he takes the next form, writes out his system again, and adds "B". And so on.
Carla sneaks a peek at what Bob is doing, then she picks up a stack of ~8 billion blank forms so that she can write out her 493-character system with every possible 7-letter postscript.
To summarize: If you have a "fixed budget", then there are many equivalent ways to describe a simple system using that budget, and not as many equivalent ways to describe a more complex system. Thus, weighting things inversely to their length is sort-of equivalent to a _uniform_ weight across all fixed-sized inputs where the inputs are allowed to have non-functional padding to reach the correct length. Take the limit as the allowed input size goes to infinity and you get the Solomonoff Prior.
(This still doesn't solve the issue that Kolmogorov Complexity is language-dependent, of course.)
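The counting in the parable can be made concrete: with a fixed budget of n symbols and non-functional padding allowed, a k-symbol program has one padded variant per possible suffix, so a uniform draw over full-length forms gives each program a weight proportional to (alphabet size)^(-k). A toy version with a binary alphabet (the "programs" here are just bit-strings, not real programs):

```python
# Toy version of the classroom parable: uniform weight over all length-n
# binary strings, where each "program" of length k <= n is submitted once
# per possible padding suffix of length n - k.

BUDGET = 10  # total boxes on the form

def padded_variants(program_length, budget=BUDGET):
    """Number of distinct padded submissions for one program."""
    return 2 ** (budget - program_length)

total_forms = 2 ** BUDGET
# Implied weight of a single program of length k under a uniform draw:
weight = {k: padded_variants(k) / total_forms for k in range(1, BUDGET + 1)}

# A 3-bit program gets weight 2^-3, a 7-bit program 2^-7: the shorter
# (simpler) program is 2^4 = 16 times more likely to be drawn.
print(weight[3] / weight[7])  # -> 16.0
```

Taking BUDGET to infinity recovers the 2^(-length) weighting of the Solomonoff prior, as the parable suggests.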
I don't think your examples of "more complex but more likely alternatives" hold here. For using OR: we're not really testing a more complex alternative, just seeing which one of a group of alternatives obtains. I.e., what's the equivalent here in terms of "hypotheses about the world"? It's that the elements you've composed with "OR" are separate and independent hypotheses, surely—in a way, reminiscent of p-hacking: "either I will observe that women respond better to this treatment OR that men will OR that Hispanics will OR that Hispanic men will OR that elderly men will..." We don't get to call this a /single/ hypothesis, I'd think, not for these purposes.
For the blinking and birdies example: well, yeah, if we purposefully take things for which the probability is known and compose likely ones here and unlikely ones there, we'll get this result. The contention isn't that the simpler alternative is /always/ to be preferred; it's that /all else being equal/, it is.
Edit: Although it wasn't a direct response to your comment here but rather to Scott, I believe hsid's response below is actually saying the same thing I am here—except less messily, heh. That is, surely Scott's comment is to be read "lacking better information / all else being equal, the simpler option is preferred", not "the simpler option is /always/ the best".
The typical informal statement of Occam's Razor usually includes a caveat like "all else being equal", but in the comment that I replied to, Scott claimed "simpler simply *is* more probable, that's how math works." I was attempting to refute that, not the common informal claim. (Though if you attempt to formalize what you mean by "all else being equal", I think you'll run into trouble.)
Saying that chaining things with "and" is a legitimate form of increased complexity, but that chaining things with "or" is cheating, sounds to me like special pleading. Conjunctions and disjunctions both have higher Kolmogorov Complexity (in typical computational languages). If you want to convince me this isn't just special pleading, come back to me with a principled and general way to measure complexity, and show how this result emerges from your method.
>I was attempting to refute that, not the common informal claim.
Ah, I see---pardon; my mistake!
>Saying that chaining things with "and" is a legitimate form of increased complexity, but that chaining things with "or" is cheating, sounds to me like special pleading.
I have difficulty putting my intuition here into words---most especially, the proper words, since I don't read much about this sort of thing---but I don't think it's "special pleading", insofar as this seems like the usual way to view it in terms of testing hypotheses against the world. That is, the argument isn't about the definition of "complex" (as I intended it, anyway), but rather about what counts as "one hypothesis/option/alternative/prediction", maybe.
I.e., "either this pen is red OR it is blue" is testing two separate, independent hypotheses, wherein if either one is true the other one doesn't matter at all---so, while the entire statement is indeed more complex than either one alone, in some sense we're just making two predictions. One statement, but two entirely independent & separate predictions.
Or, say: "either A will happen, or not-A will happen"---is this a prediction/hypothesis /at all/? Certainly not, in the sense we're meaning?
(If that's not convincing, I'll have to just throw my hands up & shrug---I doubt I'd be able to articulate it any better than that... but I bet you this has been looked at /somewhere/ in the literature before.)
>If you want to convince me this isn't just special pleading, come back to me with a principled and general way to measure complexity, and show how this result emerges from your method.
I have a truly marvelous demonstration of this, but---unfortunately---this comment-box is too small to contain it.
Notice that statements like "this pen is red" and "this pen is blue" do not refer to a single possible outcome, but each refer to a large class of outcomes, and those classes are not even equal in size. A color table I quickly looked up online says that "red" light is light with a wavelength anywhere in the range 620-750nm (a range spanning 130nm), while "blue" light is only 450-475nm (or 450-495 if you include cyan, spanning 45nm of difference; still less than half the numerical range of the "red" category).
Why are all possible shades of red allowed to be combined into "a single hypothesis", but shades of red and blue are not? Seems pretty arbitrary.
The hypothesis "this pen is either orange or cyan" probably covers fewer distinct possibilities than the hypothesis "this pen is red". Are you going to count the former as 2 distinct hypotheses just because of the way those wavelengths were summarized in English?
(This is before considering that the pen's color might not be uniform, or that one "color" can be a mixture of multiple wavelengths, or that the same color could be produced by more than one physical arrangement of matter. The total number of theoretically-distinguishable physical states covered by "this pen is red" is vast, and you'd need to know a lot of esoteric physics to even say whether or not it's finite.)
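The "class of outcomes" point can be sketched by treating each color hypothesis as a set of integer-nanometer wavelengths (the red and blue ranges are the ones quoted above; the orange range is an assumption for illustration, not from the comment):

```python
# Each color hypothesis covers a set of outcomes; count them.
red = set(range(620, 751))     # 620-750 nm, per the table quoted above
blue = set(range(450, 496))    # 450-495 nm, including cyan
orange = set(range(590, 621))  # ~590-620 nm (assumed range, for illustration)
cyan = set(range(475, 496))    # 475-495 nm

# "This pen is orange or cyan" covers fewer outcomes than "this pen is red":
print(len(red), len(orange | cyan))  # -> 131 52
```

So the two-clause hypothesis really does cover fewer distinct possibilities than the one-clause one, which is the arbitrariness being pointed at.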
Twenty is a strict subset of the even numbers. Principled way to describe hypotheses like that would be to list every distinguishable resting position the die could land in, and a probability for each.
If the sun fails to rise tomorrow, that would imply some calamity, most likely one severe enough that nonessential functions such as lottery drawings will be canceled.
What's your point? None of that changes that the more complicated option in those examples is more likely, so they still work as counter-examples to Scott's claim that "simpler simply *is* more probable, that's how math works."
And if you still object to those specific details for some reason, it's trivial to modify the examples to avoid the specific details you pointed out. For instance, you could change the first example to be odd numbers instead of evens. So I don't see how this could possibly change my conclusions.
(Also, your "principled way" of dealing with the die isn't actually possible; the number of in-principle distinguishable ways the die could land is far too vast for a human being to ever list them all in real life, and might be literally infinite depending on esoteric details of how physics is implemented. And even if you could do it, I suspect you only believe this would help because you're assuming that the die is fair, which isn't guaranteed; in fact ~100% of real-life dice are at least slightly unfair. I'm not aware of any principled reason to expect that reality is fundamentally divided into a discrete finite set of equally-likely options.)
An object of arbitrary complexity can - and often will - have a small, discrete set of ways it can come to rest on a flat surface. Consider the https://en.wikipedia.org/wiki/G%C3%B6mb%C3%B6c
If I think a given d6 is approximately fair, I might list the six symbols on its flat surfaces and "0.166" for each, with that last 0.004 as rounding error, plus a hedge against various unlikely scenarios like it landing on edge, being stolen and destroyed mid-throw, or simply bouncing off the table into some crevice where the result can't be feasibly determined.
For a weighted die, maybe one favored side is 0.375, the four around it are 0.125, and 0.120 for the side directly opposite the favored one, remainder increased to 0.005 since mid-throw interruptions are notably more likely, in part due to risk of someone noticing it's weighted and getting angry about being deceived.
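Both predictions can be written out as explicit probability tables of the same shape (numbers copied from the two paragraphs above; which face is favored and which is opposite is an arbitrary choice here, and the residual category absorbs the hedge):

```python
# Fair d6: six equal entries plus a small catch-all for odd scenarios.
fair = {1: 0.166, 2: 0.166, 3: 0.166, 4: 0.166, 5: 0.166, 6: 0.166,
        "other": 0.004}
# Weighted d6: face 1 favored, faces 2-5 around it, face 6 opposite.
weighted = {1: 0.375, 2: 0.125, 3: 0.125, 4: 0.125, 5: 0.125, 6: 0.120,
            "other": 0.005}

for table in (fair, weighted):
    assert abs(sum(table.values()) - 1.0) < 1e-9  # each is a full distribution

# Same number of entries, same structure: neither table is "simpler".
print(len(fair) == len(weighted))  # -> True
```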
Those predictions are equally complex, or at least close enough that Occam's Razor cannot slide easily between them.
"Or a perfect number" is irrelevant padding if you include even numbers, since all known perfect numbers are even, and there aren't very many in the range small enough for feasible polyhedral dice. https://qntm.org/trick
Assuming we're talking about a d20, "will be even or prime with certainty" reduces to "will not be 1, 9, or 15," and "will be 20" is similarly equivalent to "will not be 1 through 19." Principled way to encode such absolute predictions when selecting capriciously from a short list of discrete options would be to assign a single bit each, in which case there are roughly 2^20 = 1,048,576 equivalently complex theories. Those aimed at narrower subsets are more impressive if validated, but that's Popper or Bayes, not Occam.
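The reductions above are easy to verify mechanically (a sketch; "perfect" is omitted since 6 is already even):

```python
faces = set(range(1, 21))
primes = {2, 3, 5, 7, 11, 13, 17, 19}
evens = {f for f in faces if f % 2 == 0}

# "even or prime" as a subset of faces:
even_or_prime = evens | primes
assert faces - even_or_prime == {1, 9, 15}  # i.e. "will not be 1, 9, or 15"

# "will be 20" as a subset:
assert faces - {20} == set(range(1, 20))    # i.e. "will not be 1 through 19"

# One yes/no bit per face -> 2^20 equivalently complex absolute predictions:
print(2 ** len(faces))  # -> 1048576
```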
Since I directly asked you to explain your point, and you still have made no attempt to do so, I'm done here.
Even contrived dice examples point the opposite way, I think. Imagine I roll a trillion-sided die, and I declare success if the result is exactly one trillion, OR if the result is divisible by 3, 7 and 11 and that number appears as a group of digits of pi in some place, or some similar contrived rule. Saying “we got one trillion” seems simpler from the epistemological/information-theoretic point of view, but it’s probably still less likely. Since we have a completely certain model of how the system works, we go straight to counting probabilities, without considering simplicity at all.
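Setting aside the pi condition, the divisibility clause alone already covers billions of outcomes, so the contrived disjunction dwarfs the single outcome "exactly one trillion" (a sketch; the pi-digits filter would shrink the count, but plausibly not down to anywhere near 1):

```python
N = 10 ** 12  # faces on the die

# Outcomes divisible by 3, 7, and 11, i.e. by lcm(3, 7, 11) = 231:
divisible = N // 231

print(divisible)  # -> 4329004329
# Even before the pi-digits filter, the "complicated" rule covers
# ~4.3 billion outcomes versus exactly 1 for "the result is one trillion".
```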
Simplicity is only useful when we have so little information about the underlying model that for any extra assumption or epicycle in our explanation, it becomes very likely that it will clash with some unknown part of the model.
I wrote this in another comment before reading this one, but dinosaur bones are another good example, actually. The simplest explanation is probably something like “they were already there from the start”, but since we already know how the world actually works, the real most likely explanation is a way more complicated one that involves real living creatures with skeletons, the theory of evolution, special kinds of bogs to preserve the remains, etc.
I don't think Scott would disagree with this, nor that he meant to—surely his comment is to be read "all else being equal / no other information being available, the simpler alternative is to be preferred."
>simpler simply *is* more probable, that's how math works.
No it's not. Mathematicians like studying simple objects because mathematicians (like myself) are simple. Mathematicians also like structure, and simple objects tend to have structure.
But: consider e.g. the set of functions from [0,1] to [0,1] (i.e. the unit interval). Such a function takes a number from [0,1] and spits out a number from [0,1]. Let's limit ourselves to the functions that are equal to f(x) = x on the interval [0,0.5]. How does the function continue? What's the most "common" way for it to continue? There's no good answer, and nothing interesting to say in this case, so mathematicians will study more complicated things (like continuations with certain properties, or impose additional structure on the problem) - but the simplest case, a function on which we don't impose any structure, actually has the *most* possibilities.
This doesn't seem very relevant to the question at hand, to me; i.e., this isn't to say anything about probability, except about what mathematicians are more likely to study.
Actually, this might support Scott's point instead: the simpler function having more "possibilities" means that any given instance of the latter is more likely to belong to the simpler function, since possibilities that fit the-function-plus-additional-structure are fewer.
I don't understand what you're trying to say. "Probability" *is* a mathematical abstraction. If you're talking about probability, you need to define what it is you're talking about, and the way to do that is mathematics.
>Actually, this might support Scott's point instead: the simpler function having more "possibilities" means that any given instance of the latter is more likely to belong to the simpler function, since possibilities that fit the-function-plus-additional-structure are fewer.
My point is exactly that the "simpler" functions (in the day-to-day sense) are not more common. I am *refuting* the point you are trying to make. The additional structure is what makes the function *interesting* and *useful*.
"More common" in what sense? As far as I can tell, you mean "more commonly studied"---which isn't relevant at all, so I must be misinterpreting.
The only other interpretation I can come up with (of your initial comment) is the one I explained above: a function "having [more] possibilities" than another means it is more likely any one random draw from the combined pool of possibilities will belong to the former. That is, in the context of making predictions / forming hypotheses about the world: a hypothesis that more possible evidence fits is more likely to be confirmed by evidence; imposing additional structure makes it /less/ likely, all else being equal.
>I am *refuting* the point you are trying to make. The additional structure is what makes the function *interesting* and *useful*.
I know what you're /trying/ to do; the confusion---whether on my part or yours---lies elsewhere (i.e., in whether or not you're actually doing it). I make no comment on "interesting" or "useful".
> so I must be misinterpreting.
Yes. What I mean is exactly the opposite of what you understood.
> I make no comment on "interesting" or "useful".
I'm sorry, let me rephrase - mathematically, the things you and I (or mathematicians) call "simple" are usually the "interesting" or "useful" or "intuitive" objects. If you just pick a random object, the overwhelmingly most common thing is just noise (and I *don't* mean noise in a technical sense).
If you define “probable” as a synonym for “simple in the language I use to describe it” (as the Solomonoff people do) then yes, this is trivial. But if you think that every measure satisfying the Kolmogorov axioms is an equally good candidate for probability (as is Bayesian orthodoxy in many fields) then this just isn’t true.
There are multiple justifications of Occam's razor, some more theoretical, some more practical.
The tautologous one is that if you have a theory with N premises whose individual plausibility is unknown, their conjunction is going to be at least as plausible as that of the theory with N+1 premises. That's not just methodological.
It's also very impractical, because you usually do have some information about how likely your premises are. But the problem with the entirely methodological approach is that you haven't solved the basic question at all: you are considering the simpler hypotheses first because you must, not because they are more plausible.
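The tautologous justification is just monotonicity of conjunction: adding a premise can only shrink the event. A quick Monte Carlo sketch with made-up independent premises (each true with an arbitrary illustrative probability; strictly speaking the inequality holds pathwise, but the sampled frequencies show the same thing):

```python
import random

random.seed(0)
TRIALS = 100_000

def premises(n, p=0.8):
    """n independent premises, each true with probability p (made up)."""
    return [random.random() < p for _ in range(n)]

# Frequency with which ALL premises of each theory hold:
conj_n = sum(all(premises(3)) for _ in range(TRIALS)) / TRIALS   # N = 3
conj_n1 = sum(all(premises(4)) for _ in range(TRIALS)) / TRIALS  # N + 1

print(conj_n > conj_n1)  # the shorter conjunction is more probable
```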
I wish you'd been able to work in a discussion of the Ontological Proof (that the definition of God necessitates his existence) somewhere, since it seems to have a lot of similarities to the Tegmark theory (or at least my vague understanding of it from your post) that mathematical truths must necessarily have existence. You have to carefully steelman the Ontological Proof and appreciate its nuances to get any enjoyment out of it, otherwise it just sounds silly; there are some non-obvious arguments for it that evade the obvious arguments against it. [I write not as someone who believes that "Proof" to be valid, but as one who was favorably impressed by reading nuanced versions of it deployed by smart people who knew what they were doing and weren't being ridiculous.]
I also found Tegmark's claim that if a conscious being possibly exists it is conscious similar to the ontological argument.
When I got to "Consider various superlatives like world's tallest person," I braced myself for it. Then it didn't come.
The ontological argument is similar because it confuses the mathematical or logical space with reality. The proof is really saying - imagine in your mind (or in some logical space) that there is a perfect being. Does it have to exist?
Well non existence is imperfection so of course the imagined perfect being exists. But only in the mind that imagined the perfect being. Not in reality.
Some mathematicians argue that we discover mathematical rules - that the Pythagorean theorem is discovered, that the Mandelbrot set exists independently of us. This might be true, but it's a world away from every mathematical object having to create a physical reality; there seems to be no compulsion there. And a lot of philosophical and mathematical absurdity follows from this belief, and from the belief that we can treat infinity as a number - which Scott got wrong here anyway.
If Tegmark's theory includes universes where for example c (speed of light) takes every value in some real interval like [2*10^5, 5*10^5] km/s then that means that the set of existing universes has a cardinality of at least continuum. In this case you can't order the universes in a sequence with first, second, third element, etc.. (this is proven by Cantor's famous diagonalisation argument). However, I don't think this hurts your case at all as you can still have a non-uniform measure, and this is just a small technicality as far as I can see.
This isn't really right since it equivocates between two technical senses of "sequence" (a function with domain the natural numbers or an initial segment thereof, which you definitely can't do; and a function with domain an ordinal, which you can if your mathematical foundations are ZFC, but not necessarily if they are ZF or something weaker).
Is the second interpretation possible in the context I'm replying to? If we take Scott to be defining f(lambda) = (1/2)^lambda for lambda ordinal, then we need a definition for ordinal exponentiation with rational bases and non-finite ordinals, which I don't know of. Also, because of the definition of a measure, we need the assigned function values to be extended reals. Honestly, even if it works somehow I don't really like this approach: you will need to assume that the universes are well-ordered by simplicity, i.e., that there is a simplest universe in every subset of universes, which, to me, seems like a pretty big metaphysical assumption.
Ah, I see what you were getting at, it was a different point than I thought. The standard answer is that the reals involved in parameterizing the universe need to be computable, or definable without parameters, or otherwise somehow contain finite information, but yeah there probably is some fancier answer that, for example, bases the probability measure on an oracle Turing machine with random oracle, or definitions-with-parameter with a random parameter, or something.
I think this is a lot worse than it looks. What's the complexity of a universe with an irrational natural constant? It doesn't fit easily into the computation-based models of complexity.
This is sort of fine for Solomonoff induction, where you just get a sequence of theories with increasingly precise approximations, but you can't do that for reality.
Our universe's natural constants are not rationals with small numerators/denominators. We'd have no way of knowing if they are rationals with large numerator/denominator, but we have no reason to think that they are.
Are you by any chance mixing up irrationals with uncomputable numbers?
I don't think Schneeaffe is claiming anything about the constants of *our world*.
For what it's worth I don't think you can assign objective numbers to these constants. Values that we assign depend on our arbitrary units.
Dimensionless constants like the fine structure constant have objective values.
Fair enough!
It seems like a decent amount of trouble might be sidestepped by narrowing the Tegmark Universe from "all mathematical objects" to "all mathematical structures that are defined over a finite universe[1]." You'd still have infinitely many such structures, but I *think* it might at least be restricted to a countable infinity. And it doesn't seem like letting physical constants take on any real number (or even any rational number) is especially necessary: changing the speed of light by one part in 3^^^^3, for example, doesn't seem like it would have any observable consequences on any sort of physically reasonable timescale.
On the other hand, it's entirely possible that defining physics over some set that was really large but finite would have weird consequences that aren't immediately obvious to me. Certainly a lot of the formalism of modern mathematics is based in very fundamental ways on infinite sets, and I'm not actually sure how possible it would be to formulate finite equivalents.
[1] This being "universe" in the formal logic sense of the word. A terminology choice which is going to get real confusing, real fast in this particular context.
> Boltzmann brains are a problem for even a single universe
They *can* be. My response is that clearly our understanding of the universe is wrong in some subtle way and there are no Boltzmann brains. Maybe something about the expansion of the universe causes their probability to keep decreasing, so there's only a finite (and very small) probability of being a Boltzmann brain. If they do exist, any basis for understanding our current universe is wrong, so either they don't exist and we have a model that's very accurate except for the far future, or they do exist and we have no idea what universe we're in.
Okay, here's the deal with the Boltzmann brain stuff.
Firstly, typo note: the estimate from wikipedia is 10^(10^50) years, not 10^500. The first number is vastly larger than the second.
Boltzmann brains are a problem not only for one infinitely long-lasting universe, but even for one universe that lasts a finite amount of time (before repeating itself). So long as the amount of time it lasts is exponentially large, we might run into problems. But we could assume that for one reason or another, our universe is one where Boltzmann brains are not possible, maybe due to undiscovered physics.
Okay, what about a Tegmark multiverse of finite universes? Say that to try and get around some of the paradoxes of having an infinite number of observers, we make a rule that universes in Tegmark's mathematical multiverse can only do a finite amount of computation. If we represent them as Turing machines, each machine must halt. Due to the absurdly fast-growing busy-beaver function, the maximum number of observers in a universe of a given complexity grows way faster than the complexity penalty. So we can't just sample from all observers in the multiverse. It's not that most observers are Boltzmann brains, it's that such a sampling process is mathematically undefined.
But, if we first sample a universe at random, then sample an observer from that universe, we can see that Nevin's objection fails to correctly count information. Under any reasonable encoding scheme, all the laws of physics of our universe along with all the "fine tuned" physical constants easily fit in a megabyte. (Each constant is only a few tens of bits.) This is a complexity penalty of 2^(1000000). But this means we only need to make a megabyte of orderly observations before we've got enough evidence to prove that we're not Boltzmann brains.
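The accounting here can be sketched in odds form: a description-length penalty of K bits is a prior factor of 2^-K, and each bit of orderly observation that a Boltzmann brain would only get right by chance contributes a likelihood factor of 2. Under those idealized assumptions (and using the comment's 2^(1,000,000) figure; counting the megabyte as bits vs. bytes shifts K by a factor of 8 but not the shape of the argument), the posterior odds flip as soon as the observed orderly bits exceed K:

```python
K = 1_000_000  # bits of complexity penalty for "lawful universe", per the comment

def log2_posterior_odds(observed_orderly_bits, penalty_bits=K):
    """Idealized log2 odds of 'lawful universe' vs 'Boltzmann brain':
    prior penalty 2^-K for the lawful hypothesis, while each orderly bit
    is certain under it but a coin flip under the Boltzmann hypothesis."""
    return observed_orderly_bits - penalty_bits

print(log2_posterior_odds(K - 1) < 0)  # just short of a megabyte: still favors BB
print(log2_posterior_odds(K + 1) > 0)  # a megabyte and one bit later: flips
```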
It bemuses me how horrified people are by the thought of Boltzmann brains. If I'm a BB and I am hallucinating such an effective and convincing hallucination, then this hallucination is just as "real" as the "reality" of me floating in space. I'm fine with it.
I think the main thing that "horrifies" people about Boltzmann brains is that we clearly are not Boltzmann brains (whose conscious experience would be fairly random and confusing and short), yet our theory about the universe or multiverse implies that most people should be Boltzmann brains, so why aren't we? Seems improbable.
Who's to say our experiences aren't random and confusing and short compared to what they ought to be, but since we have nothing to compare them to, we can't tell…?
I still don't get how that's a problem. It just sounds like saying "our model of the world implies a random person is more likely to be Chinese than any other nationality, so that means I should expect that I personally am most likely Chinese". It doesn't mean that, because "I" isn't a term I use for randomly-selected people, it's a term I use specifically to identify myself, and I already know that I'm neither Chinese nor a Boltzmann Brain.
There's also another fundamental problem here. The calculation that says most observers will probably be Boltzmann Brains is itself conditioned on our not being Boltzmann Brains; otherwise, all the starting assumptions of the calculation would just be random hallucinations, and we'd have no reason to think they hold in reality. So what you're actually calculating isn't the absolute probability that you're a Boltzmann brain, it's the probability that you're a Boltzmann brain *conditional on your not being a Boltzmann brain*. Since this is a logical contradiction, its probability will be 0% no matter what, so the number of Boltzmann Brains existing in the universe or multiverse doesn't affect anything here at all.
How can you tell that you're not a Boltzmann Brain?
I know how to tell (assuming I inhabit the conventional sort of existence) that I'm not Chinese; I observe myself being really bad at making my syllables have tones. But I have no argument to prove that I'm *not* a temporary mind created for a split second with a memory of a lifetime that will vanish in the next second; I just have an argument for why it's not very useful to consider what to do in case I am.
>How can you tell that you're not a Boltzmann Brain?
Oh, it's very simple. A Boltzmann brain has only false memories, whereas I have true memories, so I can't be a Boltzmann brain. (Refer also to Samuel Johnson's proof of material objects, GE Moore's proof of an external world, and Diogenes' rebuttal to Zeno.)
And no, "but a Boltzmann brain would also say it had true memories, so you can't know you're not a Boltzmann brain" is not a valid rebuttal. The premise of the argument is itself conditional on the assumption that I do know I'm not a Boltzmann brain (otherwise it's just some random words I hallucinated and there's no reason to think it's true), so to infer the conclusion is incoherent.
That's a good reason to assume you're not a Boltzmann brain, but not a reason to know it.
It also suggests that it's more likely for us to be Boltzmann brains conditional on our experience even though us not being Boltzmann brains better explains our observations.
>we clearly are not Boltzmann brains
There is no practical difference between our being Boltzmann brains and our not being them, so this is a silly thing to investigate or adopt a positive attitude towards. By your own stipulation of the thought experiment - there is no way to tell whether you're a Boltzmann brain or not - you trip yourself into arguing over something undecidable, as is typical in analytic philosophy.
Generally, it is because the thought of minds appearing at random "many times" and plausibly suffering or feeling completely insane as they get sensory noise is quite unpleasant. Something we'd prefer not to happen.
Of course, if you, at this current moment, are a Boltzmann brain, then you should believe that your hallucination is a lot more in-depth. It has a lot of structure to it, given what you've experienced so far, and thus is effectively real in many ways. That's just the question of whether a simulation is real, and isn't too much of a problem... beyond the fact that people do care about whether things are real. (E.g., a Boltzmann brain running a low-quality, intuition-based simulation might be simpler than one running a high-fidelity simulation of reality in which all the people you interact with are actually people.)
A Boltzmann Brain that only lasts for a few seconds might have something like a near-death experience, and as far as I know, NDEs are very rarely negative or painful.
We can sample from observers (or anything else) rather than universes if we use a universal distribution using machines that simulate a universe and then find/decode the particular thing of interest within that simulation. I'm not quite sure whether this cuts in favor of "we are not Boltzmann brains" or not. Practically speaking the simplicity penalty of finding Boltzmann brains is bounded by the difficulty of describing how to find the simplest-to-recognize form of conscious observer that can exist in the universe they inhabit. This predicts at the very least that if we are not Boltzmann brains we should be about as simple as possible to externally recognize as conscious ...
You might think that you could treat universal Turing machines simulating each other as a kind of Markov process and find a stationary distribution, which would give some notion of a "natural" universal Turing machine, but Mueller showed there isn't one.
https://arxiv.org/pdf/cs/0608095
This is a nice reference, thank you
Wow, I had no idea there could be multiple arXiv papers with the same number, in this case 0608095. https://arxiv.org/search/?query=0608095&searchtype=all pulls up no fewer than nine, made unique only by the combination of number and category. (I think with the new numbering system adopted a few years ago the number alone is now unique per paper, but I wouldn't swear to it.)
Your argument about falsifiability seems over-simplified. The issue with arguments for the existence of god is not that it isn't falsifiable, it's that there isn't any evidence either way. The OJ Simpson example that you bring up just drives this point home. Occam's razor is useful there because there was tons of evidence and we need to find a way to differentiate between possible theories.
What exactly is the evidence for or against the existence of god: that we exist and that the universe is complicated? Seems like pretty weak sauce to me. If you count that as evidence then you can claim that just about any ridiculous claim has an empirical basis. For example, I could argue that when I was a baby, I was briefly teleported to an alien spaceship. How do I know? Because I have hair on my legs and this is obviously evidence that aliens planted it there.
>What exactly is the evidence for or against the existence of god: that we exist and that the universe is complicated?
We have a very large amount of eyewitness testimony to the existence of God: certainly more than we had for the OJ Simpson trial. Leaving out the rest of the world, in the US 45% of adults report having a spiritual experience where they felt a connection with something beyond our world, and 30% of Americans say they have personally encountered a spirit. (https://www.pewresearch.org/religion/2023/12/07/spiritual-experiences/)
Although I agree with your general point, these seem like bad examples. Neither "something beyond our world" nor "a spirit" necessarily imply a deity, let alone a capital-G God.
> We have a very large amount of eyewitness testimony to the existence of God
As opposed to, what, eyewitness testimony to the non-existence of God? There logically can be none. This is a classic Devil's Proof. This statistic alone cannot be grounds for favoring existence of God over non-existence.
"There logically can be none."
What about all the "if God exists, I challenge Him to strike me with lightning now?" things. I agree those are more stunts than anything, but some people do seem to be convinced by them.
I'll happily provide eyewitness testimony to the non-existence of God. In several decades of life I have never once seen God. That's not proof, of course. But it is evidence, in much the same way that opening your fridge and not seeing mayonnaise in it is evidence that there's no mayonnaise in your fridge.
I've never seen Australia, but obviously that's not meaningful evidence that Australia doesn't exist. It just means I'm far away from Australia :)
It's not very good evidence, but it's more than zero evidence.
The word is "evidence," Mark, not "proof."
Of course your not having seen Australia is evidence that Australia doesn't exist. For it to be otherwise, you would need to somehow be MORE likely to not see Australia in worlds where Australia did exist than in worlds where it didn't. That is, the existence of Australia would need to make observing Australia *less* likely. Which would be a weird state of affairs.
It is, however, not strong evidence, nor is it the only evidence on-hand. It's not strong evidence because you've never created a situation where Australia-including models *strongly* predict you ought to see Australia (such as flying to Sydney, or sailing northwest from New Zealand). And it's not the only evidence because we have maps and pictures and books and videos and news articles and first-hand accounts that all maintain a reasonably-consistent, reasonably-independent consensus about the existence and properties of Australia. So the preponderance of evidence quite strongly favors the existence of Australia, despite your real-but-small mass of evidence for the non-Australic model.
Alas, some other facets of existence are not nearly so well-attested as Australia is, and so weaker pieces of evidence must weigh much more heavily in our current models.
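The "weak but nonzero evidence" point can be sketched in Bayesian terms. All the numbers below are made up purely for illustration (they estimate nothing real); the point is only the shape of the update:

```python
# Illustrative Bayes-factor sketch for "not seeing X" as weak evidence.
# Every probability here is an assumed, made-up number chosen to show the
# structure of the argument, not an estimate of anything.

def posterior_odds(prior_odds, p_obs_given_h, p_obs_given_not_h):
    """Update prior odds on hypothesis H by the likelihood ratio of an observation."""
    return prior_odds * (p_obs_given_h / p_obs_given_not_h)

# H = "Australia exists". Observation = "I have personally never seen Australia".
# Even if Australia exists, most people never see it in person; if it doesn't
# exist, nobody ever sees it.
p_never_see_given_exists = 0.95  # assumed
p_never_see_given_absent = 1.00

prior = 1e6  # strong prior odds in favor, from maps/photos/testimony (assumed)
post = posterior_odds(prior, p_never_see_given_exists, p_never_see_given_absent)
print(post < prior)  # True: the odds drop slightly, so it IS evidence, just weak
```

The likelihood ratio of 0.95 barely moves odds of a million to one, which is the sense in which never having seen Australia is real but negligible evidence against it.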
Personal testimony doesn't mean anything - you could at most be confident they weren't trying to deceive you, not that they were accurately reporting what they supposedly experienced.
What if an observer has their own personal testimony? I think the historical evidence argument isn't a bad one, it's just not the strongest. I don't believe the type of entity BB is arguing for has much weight, but I don't think the simplest universe that causes the dovetail is an atheist one. It seems like humans don't reproduce intelligent minds fast enough in those worlds.
I don't think "no evidence" is really an epistemic state you can be in, at least not in the way people like to think. See https://www.astralcodexten.com/p/the-phrase-no-evidence-is-a-red-flag
The Simpson trial is a good example. I watched it with my late father and we went in thinking "yeah he did it" and came out "well there is reasonable doubt". The defence team did a fantastic job, no two ways about that, but the police just made such an utter mess of the case (descriptions of chains of custody being non-existent, the cavalier way the labs treated samples, how they took and preserved samples, questioning, etc.) that you couldn't convict on what was presented to you; a conviction would have had to come from "I think he did it and nothing is going to change my mind". And a trial has to start with the presumption of innocence.
> this is probably a vast underestimate - it’s about the number of humans who have ever lived, so it’s ignoring aliens and future generations
and all the sentient non-human animals
I continue to think that too little attention is being given to the super shady idea of a measure on all these necessary mathematical objects. It's a huge bait and switch. "Oh, look, we know all these mathematical objects necessarily exist!"
[later]
"Oh, and there's an extremely non-necessary, arbitrary measure on them -- ignore that that doesn't make any mathematical sense -- that's rigged to give us the right universe since the other idea definitely wouldn't, and that we, as mere portions of *one* of the objects, somehow have access to."
It's highly disreputable just for that reason, in my opinion. To say nothing of its other problems (such as conflating abstract and concrete).
Yeah. It's exactly the same problem that M theory (commonly called string theory) has. It doesn't uniquely identify any particular theory until you constrain it by a cherry picked set of criteria post hoc to match our observations. And that's just bad pool scientifically.
Yeah, to be honest I highly doubt the theory is true, but that's just my intuition, and it's entertaining to see how people argue for and against it.
My view is that MUH is more of a theory of "how you should interpret reality". You run into forced simplicity measures often in probability theory.
That is, all of those mathematical objects don't necessarily exist, but that this is a good baseline model for overarching reality that doesn't assume much. It doesn't assume only our local universe exists, which is actually quite a big assumption!
I think you run into MUH-like issues when interpreting reality even in a single universe, as "what logic is this matter implementing" is *relative to some language*. We just have such a natural language/interpretation of reality that it is less of an issue. Part of what I think Scott is advocating for is that a "natural" language probably isn't super far from the natural ones we might use.
I don't think this is ignored; Scott is hardly a researcher in the area, and it is a hard philosophical question whether there's a way to 'naturally' resolve it.
Yes, the measure needs to be determined by the MU itself...not an arbitrary "prior".
"If some weird four-dimensional Mandelbrot set somehow encoded a working brain in it somewhere..."
https://xkcd.com/10/
Another angle on the subject: http://www.thecodelesscode.com/case/128
> Imagine trying to pick a random number between one and infinity. If you pick any particular number - let’s say 408,170,037,993,105,667,148,717 - then it will be shockingly low - approximately 100% of all possible numbers are higher than it. It would be much crazier than someone trying to pick a number from one to one billion and choosing “one”. Since this will happen no matter what number you pick, the concept itself must be ill-defined. Reddit commenter elliotglazer has an even cuter version of this paradox:
> » “The contradiction can be made more apparent with the "two draws" paradox. Suppose one could draw a positive integer uniformly at random, and did so twice. What's the probability the second is greater? No matter what the first draw is, you will then have 100% confidence the second is greater, so by conservation of expected evidence, you should already believe with 100% confidence the second is greater. Of course, I could tell you the second draw first to argue that with 100% probability, the first is greater, contradiction.”
--
I think this could be extended to conjecture that it is impossible to *observe* *anything* infinite without minimally collapsing something about that infinite thing. Which would be a useful way to wall off a universe from the rest of a cosmos.
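A finite sketch makes the tension in the "two draws" paradox concrete (my own illustration, using a uniform distribution on 1..N rather than on all positive integers, so everything is well-defined):

```python
# Finite analogue of the "two draws" paradox: two independent uniform
# draws on {1, ..., N}, with exact probabilities (no simulation needed).

def p_second_greater(N):
    """P(second draw > first draw), unconditionally: (N - 1) / (2N) by symmetry."""
    return (N - 1) / (2 * N)

def p_second_greater_given_first(k, N):
    """P(second draw > k), once the first draw is revealed to be k."""
    return (N - k) / N

N = 10**6
print(p_second_greater(N))                       # 0.4999995 -- just under 1/2
print(p_second_greater_given_first(408_170, N))  # 0.59183

# For any FIXED k, growing N pushes the conditional probability toward 1,
# while the unconditional probability stays below 1/2. A genuine uniform
# distribution on ALL positive integers would need the conditional
# probability to be 1 for EVERY k, which is exactly the contradiction
# (via conservation of expected evidence) the paradox exploits.
print(p_second_greater_given_first(408_170, 10**12))  # close to 1
```

In the finite case there is no paradox because the conditional probability depends on k; the contradiction only appears in the (ill-defined) limit where it would have to be 1 uniformly.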
I think you miss the point of the Plato recapitulation and Popperian falsifiability arguments.
Tegmark's multiverse is mostly silent on what constitutes a suitable mathematical object capable of existing. Is specifying it in English enough? ZFC? Do the objects need to be computable? Does ultrafinitism have a say? Does the set containing all sets exist? What about other logical impossibilities?
The most detailed specs we have in math these days reduce to formal logic, but even that relies on some unspecifiable meta-logic. I.e. no formal proof verification system can verify the hardware it's running on.
Even worse, if we assume materialism, then all the math we know is implemented by physical processes in this universe. Do we also accept the existence of objects from completely alien maths impossible in this universe due to physical constraints?
A maximally permissive answer to my questions above forces us to accept that any "system" exists, without the possibility of determining what is or isn't a system, and without even being picky about impossible things. It's not just a fuzzy border issue. Every object is illogical relative to some logic, especially the simplest one! I.e. the logic that permits nothing is arguably the simplest, at least relative to standard set theories like ZF.
Hand waving Kolmogorov or whatever measures onto these things just begs the question by fine tuning the definitions to get whatever result desired. Or said another way, it's a model with tons of tunable parameters and answers only things that are encodable in that free parameter set. I.e. we just created a lookup table.
This is the basic mechanical problem with Platonist metaphysics, IMO. It is simply incoherent, despite first appearances. Popperian falsifiability does point at a utilitarian resolution however. What does Tegmark give us when we play taboo on the concept of real? I.e. can we operationalize what Tegmark's existence claims even mean? Are we dealing with replicable realist things? Non-replicable but consensus reality-like things? Or is it more like how dreams and false memories operate? Etc.
That said, as an intuition and discussion pump, the Tegmark idea is fun, so hedonistically, I'm all for it.
The metalogic is usually just ZFC, or occasionally ZFC+some large cardinal axiom. You can absolutely use ZFC to study itself, as long as you're okay with all your theorems taking the form "if ZFC is consistent then _" or "if ZFC has a transitive model then _" etc.
You can certainly encode ZFC within some models of ZFC, but you're still running on some unformalized hardware. The punchline being that this lets you smuggle in as much complexity as needed to encode the lookup table of answers you want.
We could try sidestepping these issues by declaring that "mathematical object" means anything definable in ZFC or whatever, but then we must contend with Gödelian monsters like BB(745), which are definable but provably have no value in ZFC. We can even construct objects one more level removed that are only expressible in ZFC by defining an object that is itself capable of defining the first object. You get the idea.
On the one hand, permitting these objects to exist means that "mathematical object" now means ZFC plus all the infinite other things we arbitrarily added in, sneaking in infinite parameters for our lookup table. On the other hand, if we don't do that, then we have admitted that there are "mathematical objects" which don't exist.
When reading about metalogic it's important to keep the distinction between definability, provability, and truth in mind. A number can be definable without being provably equal to anything. If a sentence isn't provable or refutable in some logic system, that doesn't mean it's neither true nor false.
ZFC considers BB(745), and all busy beaver numbers, to be a definable object, and it will tell you it's a natural number, it just doesn't know *which* natural number it is. Further, all models of ZFC that agree on what the natural numbers are, will agree on the value of BB(745) and indeed on the truth value of all first-order arithmetic statements. I'm sympathetic to saying that the higher reaches of set theory are not very meaningful, but in order to hold that first-order arithmetic statements don't have well defined truth values you have to deny that there's a privileged standard model of the natural numbers - and in that case it's *you*, not me, who has introduced an infinite array of new objects by fiat.
When you say "the value of BB(745)" you're sneaking in non-ZFC arguments. ZFC, i.e. the stand-in for what we agree to be all of existence, cannot demonstrate that any (arithmetically defined) value equals BB(745) without also demonstrating that ZFC is inconsistent.
That's the state of affairs I gestured at when saying "doesn't have a value". I'm afraid the level of technicality here detracts from the main point.
Which is that Tegmark's multiverse permits this uncomfortable situation whereby almost all objects are neither equal nor disequal to infinitely many others. Resolving this conflict is where all the fine tuning I previously mentioned comes in.
We could attempt to be laissez faire about allowing any definition independent existence regardless of equality with others, but then we also allow things like Russell's set to exist, and the whole philosophy reduces to permitting anything at all to exist, no matter how self-contradictory or not.
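For what it's worth, the BB(745) situation discussed above can be stated compactly. Assuming ZFC is consistent, and using the 745-state Turing machine whose halting is equivalent to ZFC's inconsistency (the construction the BB(745) independence result rests on):

```latex
% ZFC proves that the busy-beaver value exists...
\mathrm{ZFC} \vdash \exists n\,\bigl(\mathrm{BB}(745) = n\bigr)
% ...yet, assuming Con(ZFC), it proves no particular numeral equal to it:
\text{for every numeral } \bar{k}:\qquad \mathrm{ZFC} \nvdash \mathrm{BB}(745) = \bar{k}
```

The reason: if ZFC proved an exact value k, it could verify by a k-step computation that the 745-state machine never halts, which is Con(ZFC), contradicting Gödel's second incompleteness theorem.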
The mathematical platonist view is that ZFC, which is ultimately just a formal system for deriving finite sentences from other finite sentences, is a limited descriptor of a true mathematical reality. As finite beings, neither we nor our logic systems can know every true fact, which should be of no surprise since there are infinitely many platonic objects and infinitely many things to say about them, but this is no problem for the objects themselves, which exist independently of our attempts to understand them.
I think this view encounters some genuine issues with things like the Axiom of Choice or the Continuum Hypothesis, but it seems much more grounded when it comes to the natural numbers and first-order arithmetic.
You seem to be rejecting this in favour of some kind of hyper-formalism where you can only talk about what is provable or refutable, and not what is true or false, and that if ZFC fails to answer a question then there is no meaningful answer. Going this far seems epistemologically unstable, since ZFC doesn't resolve the question of its own consistency, so you have to hold that the question is meaningless or arbitrary. But if it's meaningless to say that ZFC is consistent, why care what it has to say at all?
Regardless, asserting a position isn't making an argument for it, you've really just observed that mathematical platonism requires believing in more than just formal logic systems, which is the basic point of it.
I feel like we're not really connecting here. Though, I'm finding this back and forth really helpful, so thanks for continuing to respond.
My original post outlines the contradictions that arise in Tegmark's Multiverse if we just use an intuitive "know it when you see it" notion of "mathematical object". Really, these just recapitulate the problems with intuitive set theory.
One could object that we can resolve the contradictions by pinning down sufficiently what we mean by "mathematical object", but then you get the fatal Gödelian issues I mentioned.
Whether we're hardline formalists or not, in practice we study mathematical objects via some semirigorous formalism. Maybe that formalism is governed by Platonic ideals, maybe not, but if we take Tegmark seriously and try to use it as Scott does here, we have to contend with the fact that we can tune the formalism to give whatever complexity answer we want, rendering it useless IMHO.
I think actually the Mathematical Universe Hypothesis is exactly what you get when you take normal scientific reasoning and then taboo the concept "real".
There is only one universe. That's what the first three letters of "universe" mean.
The universe includes all of space and all of time. It is meaningless to consider whether something "came before", or "will come after", or "exists outside of" the universe, and hence also whether something "caused the universe to exist" or to have certain properties.
I don't think it's even useful to say that the universe has properties. Properties are useful to distinguish one item of a class from another, but there can never be any sense in "distinguishing one universe from another" because there is, has been, will be, and could be only one universe.
Then substitute the argument to be whether 'universe' means our local universe or some larger encompassing sort. That is a meaningful question.
It then becomes a question of "why assume the one pocket is precisely ours", which lends weight to defaulting to a MUH-style theory.
We need better language to distinguish 'the totality of everything that exists' from 'the thing that started at the Big Bang' in case they are different.
I suggest 'Monad' for the former.
The 'observable universe' is what I hear physicists use for the latter when they want to be precise.
I agree that if language were to be logical, 'universe' should mean 'everything'. Alas and unfortunately for everyone who likes the etymology of words to be relevant, 'multiverse' seems to be the most accepted way to explicitly refer to the former.
>Boltzmann brains are a problem for even a single universe
Only if it's infinite in time or space. A really small universe, that's doubly finite, doesn't have the problem.
A finite universe doesn't suffer from the "non-Boltzmann brain observers have measure 0" problem, but it can still have an extremely low ratio if it's sufficiently large.
Ratio of what to what?
Non-Boltzmann brain observers to Boltzmann brain observers.
If you expand a small universe by adding matter that's capable of evolving life, the ratio stays the same. You can only reduce it by adding matter that's favourable to BBs, e.g. a long period of cold high entropy.
>I find the moral knowledge argument ridiculous, because it posits that morality must have some objective existence beyond the evolutionary history of why humans believe in it, then acts flabbergasted that the version that evolved in humans so closely matches the objectively-existing one.
I'm not sure you're grokking the moral argument, at least the moral argument BB is defending. He's arguing that evolution cannot in principle explain why we could have true moral knowledge (or, as he includes, true mathematical or rational knowledge) while theism can. If our moral sense is purely the result of evolutionary processes then it could only be correct by accident: so it is either not correct (just a nice hallucination that improves our reproductive fitness) or something other than evolutionary processes has intervened to provide us with the ability to ascertain true moral (or mathematical, or logical, etc) knowledge.
BB links to a paper in his subsection on moral knowledge and here is a relevant quote from it:
"Our moral beliefs ultimately depend, in some way, on what philosophers call “moral intuitions.” When we consider certain moral claims, we can just “see” whether they’re true: we can see that (at least absent extenuating circumstances) hatred is bad, virtue is good, killing innocent people is wrong, etc. With these intuitions in place, we can reason and make our moral judgments more accurate. For instance, initially we might not think that factory farming is wrong. However, once we realize that causing suffering without a good enough reason is wrong, that producing cheaper meat is not a good enough reason for producing tremendous suffering, and that factory farming causes tremendous suffering in order to produce cheaper meat, we might change our minds. But this requires that we trust our intuitions: we wouldn’t be able to reason our way to the wrongness of factory farming without intuitively grasping the wrongness of causing suffering and the insufficiency of producing cheap meat as a justification for it. The problem for the naturalist here is that, if naturalism is true, it seems that the faculties responsible for our intuitions were formed through purely natural processes that didn’t aim at producing true beliefs. For instance, it seems plausible that our intuition that you shouldn’t cause pain without a good reason was instilled in us by evolution, since communities of our ancestors who flippantly inflicted harm on each other wouldn’t have lasted. But this might unnerve the naturalist who believes in moral knowledge. After all, it seems that we might have easily had very different moral intuitions. For instance, Charles Darwin suggested that:
"'If . . . men were reared under precisely the same conditions as hivebees . . . our unmarried females would, like the worker-bees, think it a sacred duty to kill their brothers, and mothers would strive to kill their fertile daughters, and no one would think of interfering. ( 1902 , 137)'
"Similarly, the philosopher Mark Linville (2009 , 397) suggests that
"'Wolves in a pack know their place in the social hierarchy. A lower ranked wolf feels compelled to give way to the alpha male. Were he endowed with [moral thoughts], then, presumably, his “moral sense” would tell him that obeisance is his moral duty. He would regard it as a moral fact that alpha interests trump beta or omega interests.'
"In light of this, the philosopher Sharon Street, who has done as much as anyone to draw attention to this issue, suggests that trusting moral faculties that have been formed by such natural processes might seem to be analogous to
"'setting out for Bermuda and letting the course of your boat be determined by the wind and tides: just as the push of the wind and tides on your boat has nothing to do with where you want to go, so the historical push of natural selection on the content of our evaluative judgements has nothing to do with evaluative truth. While this historical push might sometimes give us correct beliefs, this would be “purely a matter of chance” '
"Further, with naturalism, how the moral domain is doesn’t seem to explain why we hold the beliefs we do; even if these beliefs are true, we don’t hold them because of how anything in the moral domain is, but instead because beliefs like that helped our ancestors survive. As we discuss in Section 4.3, even if we couldn’t have easily had very different, and therefore largely incorrect, moral intuitions, this fact might be enough to make it the case that, if our beliefs are true, they are true only by accident in a way that rules out knowledge."
Scott believes that there are no objective, human-independent moral facts, there is merely the evolved stuff, so no explanation is needed for a correspondence.
The presence of "moral facts" that don't cash out in any kind of facts about material reality doesn't provide any more reason for moral action than their absence. What you're saying above is just the contrapositive of the Naturalistic Fallacy.
First off, making the only alternatives "objective facts" and "social construct" is a false dilemma. "Ice cream tastes better than gasoline" is neither.
If there are no moral facts then there's no objective sense in which one "should" follow one's desires either; there is no stance-independent "should", even that one. We follow our desires (to the extent that we do) because we're built that way. Similarly, we follow our consciences (to the extent that we do) because we're built that way. Recognizing that "rocks should not fall on people's heads" is a fact about ourselves rather than a fact about rocks doesn't change anything about what we will do. It also neatly solves the mystery of why rocks make no effort to avoid falling on people's heads.
What if I'm planning to live a long time, and eventually visit that part of Africa, and would prefer malaria to already be gone from it when I do?
He believes that people have values. That these values are not objective moral facts about all creatures, or even necessarily all of humanity.
> For example, giving your own money for malaria nets in Africa as a clear recognition of the fact that there is some moral good in doing so instead of spending that money on hookers and blow for yourself.
(from your other comment)
Yes, that he has empathy which makes him value what other people are experiencing. That he wants to improve their situation!
But, he wouldn't believe there's some moral rule encompassing both humans and an alien artificial intelligence. That's what moral realism would usually postulate. The human and AI could still trade with each other and apply game theory for some degree of cooperation, but it wouldn't be as empathetic as two humans interacting.
Now, you could say "The AI just destroyed a nice earth-like planet for resources, that is bad *relative* to my moral values", just like the AI could say "I think it is bad that the human is not making paperclips out of the metal."
But they both could also say "I understand that by their values, that is the better action, even though it is bad according to mine."
> I am transported back to 1939. Would I fight for the Nazis or the Allies? I “know” I should fight for the Allies. But for a person who thinks morality is entirely subjective, he should be able to decide “I’ve been socialized to think the Allies are good, but nothing really matters, so if it’s more fun to fight for the Nazis, I’m going to do that.”
They would be able to decide that. It just would clash with a lot of their values! You seem to be going "oh, there's no absolute morality, thus anything is permissible and I don't have any reason to pay attention to morals", when... no?
I dislike people being killed. The vast majority of humanity is like that. I like people living healthy happy lives.
> imagine you can steal something from the store with a 0% chance of getting caught. Why not steal it then? Of course you’ve been taught not to, but as a thinking animal who knows morality isn’t a real thing you should steal the thing, because it’s all upside.
Yeah, you keep saying morality isn't a real thing, but you're misunderstanding the core idea. People still have morals/values about situations! Most people know it is bad to steal, because they've been taught that, and they also can form the implication of "person got this item by their own work -> I steal it -> I get a benefit, but they are harmed", and thus fallback to intuitions about cooperating and not betraying others.
----
The morality that Scott espouses does not behave that exotically compared to what many moral realist positions espouse. It is more person-centric, but as a view into their own values.
Another important element is that humans share a lot of values! Most people have altruism! Most people don't like backstabbing someone who helps them! People have fairness intuitions! And that's even before we get into notable socialization-induced values.
Law does change in how it's viewed, becoming less about instituting morality and more about ensuring that a society of people with varying views can cooperate peacefully... but that's not that different from how law has been viewed for centuries in America. (That's why Freedom of Religion exists to varying degrees, so that people with different ideas about morality can cooperate!)
Because you have those moral preferences.
You could then ask the same question of the person who's considering hookers and blow vs. donating to people in Africa. Why doesn't he just opt out of enjoying hookers and blow *and* donating to people in Africa? Then he doesn't have to care about anything, seems much easier!
Well, the reason is that we have fundamental values for that. Hardwired into the brain, via our tastebuds and instincts.
Similarly, altruism to some degree is hardwired into the brain. The evolutionary explanation would be that it is useful, especially in small tribal groups, and encourages cooperation to avoid zero-sum or negative-sum problems.
We have preferences for being consistent and fair, varyingly influenced by socialization. Now, there's arguments to be made about how "valid" social influenced morality is. My view, and I believe Scott's too, is that it is more than 0% but less than 100% in terms of how you should adopt it.
----
Relatedly, I often make a distinction between felt-emotions and ~decision-motivations. Felt-emotions would be "that food was good" or "I'm glad my friend is happy with the present I got them". Felt-emotions often include altruism, just like any other pleasures.
Decision-motivations I think are often morality-centered. Extrapolations from our felt-emotions, because we're thinking creatures. This is where you get more abstract theories of "I should donate to people far away, they're people just like anyone else." or more intricate moral theories like utilitarianism. These may not have strong emotional-components, but they still affect how you make decisions. Think of this like 'duty', there's not quite a strong emotional sense with that, though I think there's some of one, but it still drives decisions to some degree.
You can't just "opt out" of your own moral preferences any more than you can "opt out" of your other preferences, you are not some kind of universal mind-instantiating machine. Furthermore, "moral preferences that benefit only you" is borderline incoherent, because "benefit" means "bring good to" and thus only has meaning under a specific set of moral preferences.
> people in Africa whose lives have nothing whatsoever to do with mine
That's a hell of an assumption, in a world with densely interdependent supply chains and fragile shipping bottlenecks. You ever use consumer electronics with components that require cobalt, or pass through the suez canal? Been scammed by a Nigerian prince? Eat any chocolate?
That's fine (I mean, it's not completely fine, but that's a different subject*), but as BB points out, the same argument applies to mathematical and modal knowledge, which I don't think Scott believes is unreal: for one thing, if mathematical knowledge were not human-independent, then Tegmark's theory doesn't work at all.
*As an addendum explaining why moral anti-realism is not completely fine, I would like to quote C. S. Lewis from his book "Miracles":
"They say to themselves, ‘Ah, yes. Morality’—or ‘bourgeois morality’ or ‘conventional morality’ or ‘traditional morality’ or some such addition—‘Morality is an illusion. But we have found out what modes of behaviour will in fact preserve the human race alive. That is the behaviour we are pressing you to adopt. Pray don’t mistake us for moralists. We are under an entirely new management’…just as if this would help. It would help only if we grant, firstly, that life is better than death and, secondly, that we ought to care for the lives of our descendants as much as, or more than, for our own. And both these are moral judgements which have, like all others, been explained away by Naturalism. Of course, having been conditioned by Nature in a certain way, we do feel thus about life and about posterity. But the Naturalists have cured us of mistaking these feelings for insights into what we once called ‘real value’. Now that I know that my impulse to serve posterity is just the same kind of thing as my fondness for cheese—now that its transcendental pretensions have been exposed for a sham—do you think I shall pay much attention to it? When it happens to be strong (and it has grown considerably weaker since you explained to me its real nature) I suppose I shall obey it. When it is weak, I shall put my money into cheese. There can be no reason for trying to whip up and encourage the one impulse rather than the other. Not now that I know what they both are. The Naturalists must not destroy all my reverence for conscience on Monday and expect to find me still venerating it on Tuesday.”
Given all that, if Scott is a moral anti-realist then why does he keep trying to convince people to give 10% of their income to charity and donate their kidney to a stranger?
I might be misunderstanding him, as it's late here and I'm dumb, but I don't understand what his problem is. He has moral want X, learns that moral want X is explainable without (effectively) the supernatural, and for some reason that lessens his moral want X compared to non-moral want Y. How about not doing that?
>How about not doing that?
Why shouldn't he?
I want to cheat on my wife and I'm pretty sure I can get away with it: but my conscience tells me that would be a betrayal and would hurt her and I shouldn't do that. But now I know that my conscience has no claim of superiority to my desire to bed beauties out of wedlock, which is a real relief. I can safely ignore it. Why shouldn't I?
You can't safely ignore it, because you have a conscience; that is exactly what is being accepted! This is like assuming something and then deriving (with some highly questionable derivation) the opposite of the statement assumed.
Well... yeah, why shouldn't you? Plenty of people cheat on their wives, and some of them don't even regret it. There's nothing actually stopping you other than your own empathy and psychological disgust of the act, the latter of which is a result of that idea being drilled into you since you were a child.
Two reasons.
The first is that strong, deep emotions don't just vanish the instant you realize there's no logical basis for them. If they did, a lot of mental disorders would be a lot easier to treat.
The second is that your "desire to bed beauties out of wedlock" is a result of the exact same evolutionary and social processes that your conscience is. If one is somehow invalidated by its context, so is the other.
His argument only proves that publicly supporting moral anti-realism is bad if you want to control people, not that moral realism is real.
It proves that you can't have your cake and eat it too: you can't act like your moral sense means anything more than your other desires while believing it doesn't point to anything beyond survival instinct. Which I think you would agree with, but most people are not nearly as nihilistic as you are and might recoil at the idea that our moral sense can be ignored.
Anti-realism is a range of positions, including the one Lewis expounds: that morality is useful.
Anti-realism is not nihilism.
What would you say the difference is?
Anti-realists can regard moral claims as some or all of meaningful, useful, and true.
>if Scott is a moral anti-realist then why does he keep trying to convince people to give 10% of their income to charity and donate their kidney to a stranger
Because he thinks that people are similar enough, which includes the mechanism producing moral intuitions. Since his intuitions point towards EA, he thinks that he does the rest of humanity a favor pointing them in the same direction.
>Since his intuitions point towards EA, he thinks that he does the rest of humanity a favor pointing them in the same direction.
Under moral anti-realism, how is that doing anyone a favor? Before he attempts to "point" them they are content. Afterwards they feel guilty: and there is no reason for them to feel guilty, it's just a built in social survival instinct that Scott went ahead and activated when it wasn't previously. Why do that? It would be like going around giving people a sharp whack to their funny bone. Yes, people are similar enough that they almost all have the mechanism necessary to feel nerve pain and lingering discomfort when struck on the ulnar nerve, so what?
>Before he attempts to "point" them they are content.
He thinks they're confused and unsatisfied instead, because their altruistic inclinations don't find a sensible outlet.
>there is no reason for them to feel guilty
Well, not if you accept that this is likely to make them do EA, and that increased amount of people doing EA makes the world better on average.
You're taking this from a point-of-view where moral anti-realism is thus inherently very selfish and only thinks about current values.
People have values about the world, and many people have altruism to varying degrees. Some people avoid it because they don't trust charities, some because of usual procrastination, and some because they're afraid of feeling pushed to 'give it all up'.
The usual view is then that they do have these values already. They may not be feeling aware of their altruistic values at the moment, but it is still a part of them. They do want others to be better off.
He does believe that their values will be better satisfied by helping others, by knowing that they've improved the world.
But I also think there's an important distinction to be made between felt-emotions and values. Humans have values that they don't feel particularly strongly about either way. They also have values that acting on would make them feel bad. Yet they act on them anyway. Why? Is that not just hurting yourself? Why put in that effort?
Because our decision-making is not wholly built upon felt emotions! I value altruism and generalize that to helping people across the globe even though I'll never see them personally.
This is why people say they wouldn't want a dead relative replaced by a facsimile so that they never recognize they are dead. Because they value more than just felt emotions; they have values, and also to some degree a care for the reality of the situation or counterfactuals.
(We could quibble about felt-emotions, you could say that sadness/happiness/guilt are strongly-felt emotions while acting-by-values/duty are more decision-orienting emotions that are less visible, but whatever.)
I think your argumentation style is common, but is a misunderstanding of how people think of morality. You've attached the "cares fundamentally" to moral realism, but other moral theories can have that.
If he is a utilitarian, is he an anti-realist? Utilitarianism could be considered naturalized moral realism.
Is that supposed to be a good counterargument against moral realism by Lewis? It basically reduces to just 'any preferences are acceptable and subject to the whim of the individual'. Which, okay, I can see why you would dislike it, but doesn't feel like a strong rebuttal. It ignores that besides preferences, we live in groups that can push us one way or other. And also, that you can still have *really* strong preferences even if you know they are just that. Humans who have been indoctrinated and internalized some preferences won't be able to get rid of them just because they can rationalize 'well, this is just a preference' (we could call this the Raskolnikov fallacy). And there's always the external, amoral 'society will punish you for doing x' reinforcing those socially valued preferences.
I don't know about Scott, but I'm happily anti-realist about maths. There's just no evidence that it's true beyond "derivable from axioms" and/or useful.
Don’t you have the same issue with logic, and hence derivation?
I suppose so. Is that a problem? Logic isn't even supposed to generate truth; it's supposed to preserve it.
It would feel problematic to me to have to chop out derivability from your criteria and be left with just pragmatic utility, because it feels to me like there's something more objectively right or wrong about statements of mathematics beyond usefulness, which can change over time, with circumstances, and with the person asserting the mathematical claim.
According to Tegmark's theory math *is* reality. There are no two different magisteriums which just so happen to coincide. Everything is just math. Therefore there is no coincidence to explain.
This, of course, is completely unsatisfying, and immediately such questions as: "okay... but how exactly does it work?" come to mind. Though, I suppose if one's curiosity can be satisfied by "God did it", for the sake of consistency, it should also be satisfied by "It's just math".
> Though, I suppose if one's curiosity can be satisfied by "God did it", for the sake of consistency, it should also be satisfied by "It's just math".
God is a lot simpler for me than maths.
Exhaustive infinite sets are simple.
Not for my brain, I don't have the maths brain. So I'm unable to go past the twelve times table, don't even talk to me about calculus and algebra.
I'm in the position of the illiterate peasant having to take on trust that the village priest is telling me the truth when it comes to "but the theory is so simple, here let me explain it, xyz&%*thj, so of course it follows that fifty angles of ghjjj means the apple is green not red!"
Mathematical intuition helps navigate and succeed in the physical world and as for why, from the point of view of the MUH this is obvious. Therefore it can contribute to fitness and be selected for by evolution. And it isn't an absolute thing either, as witness nonhuman sentient animals which have insufficient mathematical intuitions to do axiomatic reasoning.
I can testify that it's possible to reject strong realism about morality and maths (and money, marriage and mortgages). Mathematical multiverse theories aren't the only multiverse theories available. If non-mathematical existence is coherent, then they are arbitrarily small.
Evolutionary processes are quite capable of non-accidentally generating usefulness, so the question is whether anything more than usefulness is needed.
Not sure I understand. Are you suggesting that morality = usefulness, and usefulness is created by evolution and is objective, therefore objective moral facts exist?
If so, then I disagree, and I believe you are committing some version of the naturalistic fallacy, but I don't want to elaborate before you confirm the above is your position.
Almost. My view doesn't imply that any de facto moral system is ideally useful, or that all systems are equally useful.
> Our moral beliefs ultimately depend, in some way, on what philosophers call "moral intuitions."

Most people's beliefs depend on what society tells them. Few people try to derive morality from first principles.
When you were a kid, and some other kid got a thick slice of cake but you got a thin slice of cake, did you get upset because society had taught you to expect fair treatment from others? Or did you intuitively feel it was unfair?
Evolution makes you want as much cake as possible, and society whittles that down to a fair share.
Longer-term cake maximization strategy in the environment of ancestral adaptation might reasonably involve distributing enough benefits to fellow tribe members that they'll willingly contribute to the prep work next time. Modern version of that basic plan is the joint stock corporation, and while such superorganisms have been widely and thoroughly criticized, to call them a failure by Darwinian standards would be nigh-impossible to justify.
Asking people to introspect is very bad methodology to answer psychological questions, especially about distant time periods like someone being a child, and about situations that didn't merit reflection in the moment like an underspecified emotionally charged reaction to some putative injustice. It's also pretty obvious that children are not immune from socialization, so I don't know on what grounds your (mostly rhetorical?) question could succeed in vindicating your position.
"evolution cannot in principle explain why we could have true moral knowledge"
I'm not sure why we should think we have true moral knowledge. The vast amount of literature trying to explain morals would suggest we have only a vague approximation of moral knowledge.
> If our moral sense is purely the result of evolutionary processes then it could only be correct by accident:
Yes, by definition that's how evolutionary processes work: get it right by accident once, notice it's working better than before, do more and more of that right thing thereafter.
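That ratchet ("get it right by accident, then keep doing it") is easy to see in a toy model. A minimal sketch of my own, not anything from the thread: random mutations that are kept only when they don't hurt fitness, here trivially maximizing the number of 1-bits.

```python
import random

# Toy illustration of selection: variation is blind ("accidental"), but
# retention is not -- changes that don't reduce fitness are kept, so the
# population ratchets toward higher fitness without any foresight.

def evolve(length: int = 20, steps: int = 2000, seed: int = 0) -> list:
    rng = random.Random(seed)
    genome = [0] * length
    for _ in range(steps):
        i = rng.randrange(length)          # random "mutation" site
        candidate = genome[:]
        candidate[i] ^= 1                  # flip one bit by accident
        if sum(candidate) >= sum(genome):  # keep accidents that work
            genome = candidate
    return genome

print(sum(evolve()))  # climbs toward the maximum of 20
```

The point of the sketch is only that "correct by accident once, then retained" is a perfectly ordinary way for a blind process to end up reliably tracking something.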
Most humans believe it is wrong to murder children; yet gorillas, lions, dolphins, baboons, prairie dogs, mice, and many other species regularly kill the children of their rivals. There is a clear evolutionary advantage to doing so: in lions, for instance, the lionesses will not be fertile while they are nursing their young, so if a rival lion takes over a pride and kills all the cubs the lionesses will become sexually available, so they can bear his children instead. Would you say that evolutionary processes got it right with us, and wrong with them, or vice versa?
It is clear to me that evolution does not hinge on whether something is morally correct, only whether it increases the odds of successful reproduction and survival. I see no reason to believe that what is moral is also what is best for reproduction.
> Most humans believe it is wrong to murder children
Most humans who've ever lived? Categorically, no circumstantial exceptions?
https://en.wikipedia.org/wiki/Exposure_(infant) There's a huge amount of stuff in the Old Testament and other contemporaneous sources about ritual murder of children for religion-mediated strategic advantage, or killing all the males above a certain (very young) age in some conquered group to ensure cultural assimilation. There are even present-day moral conflicts over "honor killings" and "officer-involved shootings" and such, where one side argues that some child's behavior was such that to let them remain alive would have been more wrong.
> Would you say that evolutionary processes got it right with us, and wrong with them, or vice versa?
Evolutionary processes neither plan ahead nor strive for conceptual consistency across different categories; they only care about fitness under circumstances actually encountered. Human children are a lot more labor-intensive to raise, and many important hereditary traits are more behavioral than genetic, which means the optimal repair / replace threshold is different. Industrialization shifted it further, and morality has been scrambling to recalibrate accordingly.
>evolution cannot in principle explain why we could have true moral knowledge
This isn't a problem, because most moral behavior is a result of socialization. Evolutionary debunking arguments concede too much ground: evolution is only responsible for the barest of moral behavior. It's like saying we need God to explain why we can play baseball if evolution can't do it. Baseball is primarily a result of socialization, not evolution, so the entire discourse about debunking vs defending evolutionary arguments about baseball is defunct.
>Our moral beliefs ultimately depend, in some way, on what philosophers call “moral intuitions.” When we consider certain moral claims, we can just “see” whether they’re true
This is an empirical claim about moral actors' psychology with no empirical evidence to support it. It's also using a metaphorical sense of "see" that likely postulates some sort of moral sense, which philosophy has a long history of doing with divine senses and rational faculties, but there has never been an inch of empirical evidence in favor of one.
>true only by accident in a way that rules out knowledge.
Only if you buy into a parochial conception of knowledge that in the quoted section so far has not been motivated for anyone to accept.
>What we really do when debating hypotheses isn’t wait to see which ones will be falsified, it’s comparing simplicity - Occam’s Razor. Which is more likely - that OJ killed his wife? Or that some other killer developed a deep hatred for OJ’s wife, […]
I believe the first option because I have a model of how the world generally works, in which the first option is more *likely*. The epistemological concept of simplicity doesn’t really come into play there.
I guess that the cloudier your model is, the more this rule starts to correlate with simplicity: if you have a lot of assumptions and epicycles, but also a lot of uncertainty in your model, it's likely that some unknown part of the model will clash with some of your assumptions. But when the uncertainty in the model is relatively low, the opposite is true: the simpler explanation for dinosaur fossils is probably something like "they were already there since the beginning", but knowing how the world actually works, an extremely complicated explanation involving living creatures with skeletons, which were similar to modern animals but a little different because of evolution, special kinds of bogs to preserve their remains, and so on, is actually more likely, and it's the one we choose to believe.
For this reason, I would be very wary of assigning any kind of intrinsic value to simplicity. To me, it mostly looks like a way of saying “the less words in your answer, the less likely that you’re wrong”.
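The dinosaur point above can be made concrete with a toy Bayes calculation. The numbers here are mine and purely illustrative: a "simpler" hypothesis with a higher prior can still lose once the likelihood of the evidence is taken into account.

```python
# Toy Bayes sketch (illustrative numbers only): priors encode simplicity,
# likelihoods encode how well each story predicts the evidence.

def posterior(prior_simple, prior_complex, like_simple, like_complex):
    """Return normalized posteriors for two competing hypotheses."""
    num_s = prior_simple * like_simple
    num_c = prior_complex * like_complex
    z = num_s + num_c
    return num_s / z, num_c / z

# "The fossils were simply always there" is shorter to state (higher prior),
# but makes the detailed fossil record very unlikely; the evolutionary story
# predicts it well.
p_simple, p_complex = posterior(0.9, 0.1, 1e-6, 1e-2)
print(p_complex > p_simple)  # True
```

This is just the familiar observation that simplicity only sets the starting odds; a strong enough likelihood ratio overwhelms it.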
If God is a possible mathematical construct, this theory would require that He exist.
Right, but many gods probably aren't possible mathematical constructs. Zeus is basically no different from an advanced alien, and I see no particular reason why Zeus cannot exist as a mathematical construct. The god of Spinoza/the Stoics is at least close to, if not identical with, the Universe (or Tegmark's mathematical multiverse), which does exist, so perhaps this similar-and-plausibly-the-same object exists too. But going by the Epicurean argument, an omnibenevolent, omniscient and omnipotent god is self-contradictory, and so these gods (probably?) aren't possible mathematical constructs. If Tegmark's mathematical universe is true, then any god with properties such as "is all that exists at the most fundamental level of existence" plus "some property which isn't a property of the set of all mathematical objects" would also result in a contradiction.
But it says nothing about whether He exists in our universe.
The classic Christian view of God as omnipotent/omniscient is inconsistent in general, because he can't affect all the other universes. Now, he could have a local copy of the MUH, because you can nest it arbitrarily, and control that.
> there is no neutral programming language, no neutral compiler, and no neutral architecture.
But Forth comes close.
In general you can have different languages that capture some simplicity (Forth, a simple Lisp, Turing Machines, say even C), and they will each be able to model the other languages tersely.
What makes them more neutral than Chicken?
Does the universe even need to be consistent to exist?
Ask yourself the question: what would happen if a million dollars appeared in your room as you are reading this comment? Well, you would be surprised, then you would reread this comment, then you would... figure out that you don't actually exist, but are just part of the answer to this hypothetical, right? Then you would probably spiral into an existential crisis and think about what this means for the nature of reality, and whether or not you are even conscious or a philosophical zombie...
And it's not like 'realize you're in a story' fourth wall break either, because there doesn't need to be a story written anywhere or simulated on any hardware. The answer to the question "what would you do if a million dollars appeared in your room right now" exists as a hypothetical object, even though the physics of the underlying hypothetical universe are not consistent.
What, Scott, would you do if a million dollars appeared in your room as you are reading this comment? What would you think about the nature of reality? Would you reply to this comment asking me how I did it? Would you tell people that this is evidence that the MUH is true? What would you do?
I would infer I'm in a simulation, not necessarily that I'm in an inconsistent hypothetical.
My view is that a universe doesn't necessarily need to be "consistent" in some models, because those models alter the rules.
My go-to example is pointfree topology. You can have a locale of "all functions that bijectively map the natural numbers to the real numbers" (it can go back and forth, uniquely). There is no function that fits that. However, in pointfree topology, you can get all the arbitrary 'approximations' that get close to it, satisfying rules about how they locally interact (join/intersect each other).
Now this doesn't answer whether it needs to be consistent, but I think it does provide a view that you can have inconsistent-looking setups that have consistent paths-to-the-end.
(But honestly this is just slight rambling)
>Where did all of that come from? It was . . . inherent in the concept of z^2 + c, I guess. Somehow lurking latent in the void. Does the Mandelbrot set “exist” in a Platonic way? Did Iluvatar give it the Secret Fire? Can you run into it on your way to the grocery store? None of these seem like very meaningful questions to me, I don’t know.
You can talk about them, so they are meaningful. They are probably difficult to prove or falsify. But simplicity is a thing as you said. Having said that, the ontological razor, assuming the minimal amount of stuff, and the epistemological razor, making the fewest arbitrary assumptions, pull apart.
Is the puzzle of the Secret Fire (or Reality Juice) epistemic or ontological? Is the problem "why only those things?" or "how does it work?"? MUH and materialism are different theories, and a potential answer to "what is the reality juice that makes only some possible mathematical entities real?" is "matter".
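On the quoted Mandelbrot point: everything in the set really is "inherent in the concept of z^2 + c", in the sense that a few lines of code suffice to interrogate it. A minimal sketch, using the standard radius-2 escape criterion (once |z| > 2 the orbit provably diverges):

```python
# Membership in the Mandelbrot set is fully determined by iterating
# z -> z^2 + c starting from z = 0; nothing else is "put in".

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Heuristic test: did the orbit stay bounded for max_iter steps?"""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:   # escape radius: beyond 2, divergence is guaranteed
            return False
    return True

# c = 0 and c = -1 are in the set (orbits 0, 0, ... and 0, -1, 0, -1, ...);
# c = 1 escapes (0, 1, 2, 5, 26, ...).
print(in_mandelbrot(0j), in_mandelbrot(-1 + 0j), in_mandelbrot(1 + 0j))
```

Whether that makes the set "exist" Platonically is of course exactly the question under dispute; the code only shows how little has to be specified to get all that structure out.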
To me any theory that has "reality juice" is a form of dualism. To be a real monist, you either have to have material facts *just* be facts about math, or mathematical facts *just* be facts about matter. Now as for which, and exactly how, that's the hard part.
Is dualism always bad, though? Or is it only bad because of things like the interaction/epiphenomenalism problems?
Well, there is an interaction problem with math and physical reality. It's usually brought up under the heading "the unreasonable effectiveness of mathematics". It's puzzling that mathematical facts can constrain our expectations of physical reality, and that physically-implemented minds can derive (at least some) mathematical facts, if they are really in completely separate ontological magisteria.
Well, the matter-is-reality-juice theory isn't that kind of dualism. It's more like hylomorphism: everything real is a fusion of matter and form, and you can't have the one without the other.
But that's just epiphenomenalism: if the Form determines all the behavior of the Matter, and the Matter just adds "reality", how is that different from Matter determining the behavior of the underlying p-zombie and Mind just adding "phenomenal reality"?
Tegmark is a technically credentialed thinker, so I assume he has considered with some care the following obvious objection, but there is more wrong with assigning some simplicity distribution to "all mathematical objects" than just the easy problem of putting a finite measure on an infinite set. Whatever this phrase is meant to mean, it must certainly be unsetly large, since there are plenty of unsetly large mathematical objects, for example the category of sets. To my knowledge, there is no standard theory of integration over proper classes, so just saying it's like how an infinite series can take a finite value falls far short of the issue, even as a toy example.
Does Tegmark mean all computable objects? Finite objects? Objects in some specified logic or universe of sets? If so, why these and not others?
Edited to add: it occurs to me that it was a great missed opportunity in mathematical nomenclature to refer to proper classes as unsetly large rather than unsettingly large.
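On the "infinite series can take a finite value" toy example mentioned above: here is the simplest version of how a simplicity-weighted measure keeps total mass finite over infinitely many objects. This is my own sketch, identifying "objects" with finite binary strings and penalizing length; it is not Tegmark's actual formalism, and (per the comment) it says nothing about proper classes.

```python
from itertools import product

# There are 2^L strings of length L, so infinitely many strings in total.
# Give each string of length L weight 2^(-2L); then the mass contributed by
# each length is 2^L * 2^(-2L) = 2^(-L), and the total over all lengths is
# sum_{L>=1} 2^(-L) = 1: a finite measure on an infinite collection.

def mass_up_to(max_len: int) -> float:
    total = 0.0
    for length in range(1, max_len + 1):
        for _bits in product("01", repeat=length):
            total += 2.0 ** (-2 * length)
    return total

print(mass_up_to(20))  # 1 - 2^(-20), i.e. very close to 1
```

The commenter's objection survives this toy intact: the trick works because strings form a set that can be stratified by a complexity parameter, and it is exactly that structure that "all mathematical objects" may lack.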
See the Criticism part of https://en.m.wikipedia.org/wiki/Mathematical_universe_hypothesis
He dramatically shrinks his theory
I guess I'll need to read the book or some of the discourse around it - that section and the one following it make it sound like he is picking in a pretty ad hoc way very restrictive definitions of "all mathematical objects" to suit particular objections at particular moments. I think this obviates a lot of the immediate appeal of the theory, which comes precisely from the "if you can think it, it exists" flavor of the unasterisked phrase.
I actually think it doesn't shrink the theory, because Kolmogorov complexity already assigns 0 measure to uncomputable worlds.
Or there was always more than one theory.
I have to say reading the original post and also this comments post makes the cable I'm plugged into the Matrix with glow red hot.
As a sidenote, I'm reading Douthat's book, and the amount of research he does on the people who disagree with him is... extremely bad. He may be smug toward New Atheists or whatever because it's popular, but it's clear he never really understood many of their arguments/counterpoints. To name just one example: he constantly frames our understanding of the universe (i.e. the human ability to detect patterns and organize our understanding of those patterns into theories) as discovering the order of the universe, sort of presupposing an orderer. I'm not sure how to explain this without cutting/pasting examples, but he literally says at one point that the multiverse explanation demands a "by whom" answer. This is basic creationist argumentation: begging the question (sneaking your conclusion into your premise).
Anyway, there is a pattern of behavior by Douthat of not actually keeping up with the discourse. In his first few chapters he tries to delve into the multiverse/debunk it, and the most recent article he cites is 20 years old! In a revealing offhanded comment he refers to Lawrence Krauss's "A Universe from Nothing" as "recent" when it was published 13 years ago. His work is super shoddy, and borrows from the Christian apologist playbook: quoting scholars who disagree with him as if they agree, not actually portraying the arguments of the people he disagrees with (citing Nagel to debunk Dennett instead of quoting Dennett), and citing scholars pontificating on subjects irrelevant to their expertise (a classics scholar talking about neuroscience/consciousness).
None of this is inherently disqualifying, but anyone who’s a veteran of the Internet Religious Wars of the Bush/Obama years will be frustrated. His book tries to make a neutral case for religious belief but it’s dripping with Catholic assumptions about a mind-universe and quite flippantly dismisses arguments about consciousness/self as illusory, which is pretty common in Buddhism, so odd to say his is the neutral perspective.
I'm not as high-IQ as Scott A or BB, but I try to keep an open mind, and I'm only a third of the way through the Douthat book and it's so frustrating.
At the end of the day, all cosmological worldviews rest on faith in things unobservable*.
Philosophical debates can be fun, all the more so when it's philosophical world-building with math and physics. Falsifiability *may* indeed be inadequate as a vetting mechanism for multiversal cosmological questions, but the boundary where observability ends** and entirely independent, by-definition-unobservable cosmological spaces begin is the boundary between reason and faith.
* More accurately but messes up the shorthand aesthetic: things that can not be forced to become observable.
** Observability within our light cone, plus secondary, tertiary… effects from outside our light cone that have physical effects within ours.
You didn't answer Douthat's critique, which was specifically of the claim he posted
That said, you and Tegmark are probably smarter than Ross and certainly smarter than me.
To me the most likely theistic and atheistic account both involve multiverse. So they don't really add much "cost".
>I think of MUH as claiming that there is no Secret Fire, no difference between possibility and existence.
I mean, sure, but you mentioned the logical necessity of mathematical objects as the answer to the question of "why is there something rather than nothing", and as far as I can tell, MUH doesn't have much to say about that.
"Somebody must go tell the Queen!"
Fortunately, they've stopped burning people for odd assertions since Giordano's time
> [...] but shouldn’t be necessary for it to be obvious). Once you put it like this, I think it makes sense [...]
Sorry, you want me to believe that I'm talking to an equation, and then appeal to "obviousness"? It's *obvious* that Tegmark is a crackpot, but if you want me to suspend my belief in the obvious, you can't continue to rely on it when it suits you.
> Luckily, this infinite number of hypotheses is manageable because most of the probability mass is naturally in the simplest ones (Occam’s Razor).
Sorry, you want me to believe that *everything* exists, even stuff that isn't real, and then appeal to Occam's Razor? (I mean, I'm willing to accede to Occam if you're willing to accept that simplicity is relevant because God values elegant universes…!)
There's more than one razor. The epistemic razor supports everythingism, the ontological razor....doesn't
And how did we go from "draw from an infinite set" to "draw from a set of infinite measure"? I was ready to accept the first statement, but the second one is not obvious at all, especially when talking about drawing from a set of possible conscious beings.
Another point is that in standard metric spaces (like R^n with the Lebesgue measure), a set with infinite measure cannot be compact because it would fail the boundedness condition. However, in certain non-standard topologies, exceptions might exist.
> "Consciousness" is a useful shorthand ..., but I don’t think anything in this discussion hinges on it.
You can't just take consciousness out of the topic of discussing God, since it is implied that he is a being possessing this property of superconsciousness
Decade+ lurker here.
Here's a possible argument for giving simpler universes more probability measure than complex ones: Simpler universes appear often as subuniverses of more complex universes. Simply put, if you throw a (conscious, observing) dart at the multiverse, what will it likely see? It would see the smallest self-contained universe in which it landed. Even if the dart actually hit an incredibly complex universe, it probably landed in a much smaller subuniverse inside that, and wouldn't be able to see past the confines of that subuniverse (which is what makes it a subuniverse) to the bigger universe containing that subuniverse.
I really must read Tegmark's book, I wonder if he mentions this argument.
Doesn't the definition of "universe" (whatever that may be - something like, "everywhere where these particular laws of physics apply", maybe?) preclude subuniverses?
If you're just saying "the dart will likely land in the most common type of sub-universal structure, which must tend towards the simple because complex structures by definition can contain simple structures but not vice-versa" then I'm absolutely with you, of course!
P.S.: Welcome...!
Simple structures can contain complex structures, given the technical sense of "simple" that is at play here. The inherent complexity becomes subsumed in indexical knowledge, that is, in specifying where *within* the simple structure the complex structure is located.
I don't fully understand your explanation here (sorry!) but if you're saying that "simple structures can contain complex substructures whilst retaining less complexity in the overall [simple structure + complex substructure] system than the complex structure would retain alone" I'm afraid I don't agree with you at all.
It's hard to give a specific point of disagreement though, since I don't understand your point. If you're saying it's because the simple structure just contains a pointer to the complex substructure, then sure: a simple structure that contains a pointer to a complex structure is simple, true - but it does not in any sense contain the complex structure itself. If I write "The Second Oxford Edition of the Complete Works of Shakespeare" on a slip of paper, my slip of paper remains simple - but it doesn't actually contain the complete works of Shakespeare*. And if I write-out the Complete Works of Shakespeare in full on my slip of paper then, yes, now my paper does contain a complex substructure - but the inscribed paper as a whole is no longer a simple structure.
* 'Ceci n'est pas une pipe'?
No, it's quite the opposite. The fact that there's *no* pointer is what makes the larger structure simple. Let's use your example of "The Second Oxford Edition of the Complete Works of Shakespeare" as an example of a complex structure. The library that contains books containing all possible sequences of letters is much larger, but simpler to describe, and contains "The Second Oxford Edition of the Complete Works of Shakespeare".
Okay, thanks, I (think!) I understand your point now, and it's very clever and very elegant... and I really don't like it!
1) [Mostly to check that I understand your point] The Library of the Infinite String contains the Complete Works of Shakespeare as a substring, and you could generate the Library String with a super-simple algorithm like "Map every three digits of Pi to the ASCII character set, skipping over digit-triplets 256-999 (which fall outside the ASCII range)".
2) I can't help but think of the old joke: "Michelangelo had it easy: all he had to do was chip away the bits of marble that didn't look like David". I don't think it's accurate to say that a 20-foot block of marble contains all possible sub-20' marble statues. In order to get a marble statue out of the block we would need A) the block, and B) instructions for which bits to chip away.
3) So I think I would say that the block, by itself, contains the *raw materials* for a statue without actually containing the statue: the rules for which bits to chip away are in some sense encoded within the statue, or something, and the marble block alone doesn't have that encoding.
4) If we take "The Library String algorithm plus the appropriate start-digit and end-digit", this *does* seem to contain the Complete Works of Shakespeare: this is your "indexical knowledge", equivalent to my "marble-chip encoding"
5) Fiiiiinne, this must be the Kolmogorov Complexity of the Complete Works of Shakespeare then, looks like we have a really memory-efficient but processing-inefficient compression algorithm for generating the CWoS
5.1) I really don't like Step 5 at all because it means for any nontrivial string, even a perfectly random one, its complexity is entirely unrelated to its length (ugh!) - but I can't see any argument against it?
6) The Library String thus doesn't "contain" the CWoS until the index is added, and the Library String + Index is exactly equal in complexity to the (compressed) Complete Works of Shakespeare. If we want a structure which contains every possible string, that would be the Library String plus the index of every possible substring - far more complex than any individual substring. Thus, the principle "simple structures cannot contain structures more complex than themselves" is preserved (albeit at the cost of disassociating string-length from string-complexity)
7) I don't have much confidence in the above and I would greatly welcome a correction if I've gone wrong somewhere! Absent such a correction, though, I find that even though I really dislike it, I dislike it less than I dislike "simple structures can contain complex ones"...
5.1 - the string length is still related to the string complexity *on average*, because there's a simple way of generating the library that makes sure to hit all the short strings first but there isn't one that hits all the long strings before the short strings, and the index itself has a significant length that depends on how big it is. It's only on average because some long strings are easy to describe directly.
6 - the Library String plus the index of every possible substring is simple again because a very simple algorithm could list off all the possible substring indices.
One thing about very simple structures that contain complex structures is they have to be very large. There has to be enough "space" inside them that finding something complex is almost as hard as creating it from scratch. You could imagine a Museum where every possible sequence of chisel strokes has been applied down to atomic resolution to a nearly-endless number of identical 20-foot marble blocks. Probably the ideal interface to find anything in there would be a virtual chisel and hammer you could use to teleport to the next statue, with backtracking. It would still be as hard to find the David in there as to recreate it, modulo the ability to undo.
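A concrete sketch of that "very simple algorithm" (the function names are invented for illustration): enumerate every string in shortlex order. The library is trivial to generate, but the index needed to locate any given string carries about as many bits as the string itself:

```python
from itertools import count, product

def shortlex_strings(alphabet="ab"):
    """Yield every finite string over the alphabet, shortest first."""
    for n in count(0):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

def index_of(s, alphabet="ab"):
    """Position of s in the shortlex 'library' of all strings."""
    for i, t in enumerate(shortlex_strings(alphabet)):
        if t == s:
            return i

# Over a 2-letter alphabet a length-n string sits at index
# (2**n - 1) + rank, so the index alone encodes ~n bits: the simple
# library defers the complexity to the indexical knowledge.
print(index_of("abba"))  # → 21, i.e. (2**4 - 1) + 0b0110
```

This matches the "on average" caveat: a few long-but-regular strings land at easily described indices, but most don't.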
P.S. Does this get us down to P vs. NP? It would be really hard to figure out the start-index for the Complete Works of Shakespeare within the Infinite Library String - but pretty easy to check the index was correct once we had it...
Actually you need a lot of knowledge about Shakespeare to check the index is correct--as much knowledge as you needed to find it in the first place! So although the effort of the task is much less as you note, the complexity of it is the same.
Similarly, shorter programs appear as prefixes of many longer programs. For every program P that is exactly 100 instructions long, there are many programs that are exactly 200 instructions long, where the first 100 instructions are exactly the same as in P, and the instruction #101 is "HALT".
Or, for any set S of laws of physics, there are many larger sets which include all the laws of S, and describe some new particles that do not interact in any way with the particles from S.
(It is assumed that our universe contains "dark matter" which does not interact with the ordinary matter in any way, except for adding extra mass to galaxies. But this would instead be a "transparent matter" that does not interact with the ordinary matter in any way, not even adding mass. If "dark matter" exists, which I guess is still an open question, I see no reason to assume that "transparent matter" could not. This would drive Popper crazy.)
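The prefix-counting point about programs can be made explicit; the function name and the opcode count `k` are invented for illustration, with instructions treated as freely choosable symbols:

```python
def extensions_of_prefix(k: int, prefix_len: int = 100, total_len: int = 200) -> int:
    """Count programs of total_len instructions over a k-opcode set whose
    first prefix_len instructions equal some fixed program P and whose
    next instruction is HALT; the remaining slots may hold anything."""
    free_slots = total_len - prefix_len - 1  # one slot is pinned to HALT
    return k ** free_slots

# Even a 2-opcode machine gives each 100-instruction program P a huge
# number of 200-instruction extensions that behave exactly like P:
print(extensions_of_prefix(2))  # → 2**99 distinct extensions
```

The padding instructions after HALT play the same role as the non-interacting "transparent matter": they multiply the count of longer programs without changing the behavior.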
"I owe an explanation of how morality can be interesting/compelling/real-enough-to-keep-practicing without being objective; I might write this eventually but it will basically be a riff on the one in the Less Wrong sequences."
Please do this. I've never understood why an atheist would care about doing "moral" things if objective morality isn't real.
Personally, I don't care at all about doing "moral" things. Unsurprisingly, most of what feels right to me aligns with common morality. However, I plan to end my life before my cancer starts to inconvenience me too much. For many people that would be against their morality, but for me it is completely fine.
I care about doing moral things the same way a dog cares about eating: It is a useful (necessary, even) strategy for my continued existence and it provides me with reward.
There's no need for 'hunger realism' to be an objective property of the universe for eating to make sense to the eater.
What exactly is at stake in the question of moral "objectivism"? (I am asking, because I don't know: it seems some people seem to think it is very important, and others not)
People are interested in how they are treated and how they treat each other. It isn't clear to me what atheism or theism has to do with that.
If there's nothing objective, it seems like you'd just do what's best for yourself, which wouldn't generally involve donating money to faraway charities afaict
Well, it depends what function this word ”objective” has in your way of thinking. (Compare this quotation from I think some Buddhist monk ”you are real, but not really real.”) Also on what you mean by ”best for yourself”.
Question: What would someone do without morality?
Well that's what i'm curious about, because i wouldn't think it's donate large portions of their money to charity, and yet that's what many EA atheists do.
No, I mean, what would be left if you took someone and removed their morality?
Well, trying to think like an atheist, first you can't take something away that doesn't exist, but to try to answer the question, I think you'd be left with a bag of coordinated chemical reactions
You'd have a Libertarian.
Because I want to. Why do you care about doing the moral thing if there is an objective morality?
Why do you want to? How do you construct a worldview that involves doing "good" things for little or no personal benefit without believing in an objective concept of good?
Why stop at imagining Boltzmann brains? Why not imagine Boltzmann Brahmas or Boltzmann Yahwehs?
If you believe that Boltzmann brains can (miraculously) form from the chance accretion of matter and then would be able to maintain their existence by (miraculously) parasitically gathering the free energy of the universe, well, it's a slippery slope to then imagining Boltzmann Brahmas — who assemble from the random accretion of matter — and who are able to not only parasitically gather the free energy of the universe but also manipulate the free energy of the universe to the extent that they could create their own universes.
Heck, while we're at it — if we can imagine that *anything* can happen in infinite space and time, no matter how low the probabilities, why couldn't the biological systems of a dead Yeshua ben Yosef spontaneously repair themselves for him to rise from the dead and scare the bejesus out of Mary Magdalene?
Of course, rationalists aren't supposed to believe in miracles — unless they're Boltzmann brains.
Boltzmann brains *can't* maintain their existence, or at least, they can with only vanishingly small probability. The whole thing that makes the idea of Boltzmann brains philosophically interesting is that they don't have any special "powers" and only involve things that are known to be physically possible for agglomerations of ordinary matter.
Boltzmann Brains are one of those things I call magical scientism — along with the MWI, the Singularity (aka "the Rapture of the Nerds"), AGIs that will suddenly become malevolent and bring the end times, and the belief that there are no limits to human knowledge or technology.
If you're a physicalist — and most of the people who inhabit ACX Space seem to be physicalists — you believe that the phenomenon of consciousness is tied to a biological substrate.* The biological substrate that supports consciousness is the result of a path-dependent evolution over billions of years. Boltzmann Brains are impossible, unless you believe that consciousness is not tied to specific physical systems. Boltzmann Brains would be possible from a panpsychist point of view, but panpsychists don't need them because the universe is already a conscious entity.
*Yes, possibly consciousness could exist on an electronic substrate, but that's mostly magical thinking on our part because we don't really understand consciousness or what its requirements are or how to initiate it.
A Boltzmann brain could be a literal *human* brain, or whatever the minimum necessary amount of a human body is to be conscious if you believe in some of the "embodied consciousness" stuff. It could even be a single planet with a moon and sun lasting (just) long enough for an intelligent race to evolve, most of the arguments go through even if you believe consciousness requires *that much* context to exist.
Also I don't know why anyone should be interested that you call things "magical scientism". You calling them that doesn't in any way provide evidence that they are false or uninteresting.
Despite Scott's snark about Popper, science requires falsifiable propositions that we can put to experimental tests.
For instance, we can't devise an experiment to prove or disprove the existence of God. So, we generally don't consider God to be a legitimate object of scientific speculation. Boltzmann Brains were originally used as a philosophical thought experiment to highlight the absurdity that mere random fluctuations in entropy could create the universe we see today. But then some cosmologists, like Andrei Linde, latched on to them as being possible given hypothetical multiverses and infinite time, yadda yadda yadda. And people started talking about them as if they were no longer a thought experiment.
> You calling them that doesn't in any way provide evidence that they are false or uninteresting.
No, I can't prove that BBs are false. But that's why they're not science — they're unfalsifiable. And, yes, they may be interesting in the way that Anselm's Ontological Argument or Aquinas' Five Ways were for Medieval scholastics arguing for the necessary existence of God.
Plenty of people seem to think BBs can and *have been* falsified, myself included! The fact that our best cosmological theories seem to predict something that seems to be false mean either they are wrong or there is something important that we are failing to understand about them or there is something important that we are failing to understand about how we make observations. This is how science is done. Ignoring predictions that "don't add up to normal" because "thinking about them is weird" is how science stagnates.
"For instance, we can't devise an experiment to prove or disprove the existence of God."
Yes we can. We can, and we have. We have disproven the existence of *many* gods. We can go up to Mount Olympus and look around, after all. The trouble is that "the existence of God" is a phrase that ends up standing in for an *alarmingly* large number of quite distinct hypotheses, so whenever one is disproven, somebody is always ready to try to sleight-of-hand another one in its place. After a few millennia of that sort of thing, the Top Tier Apologists have worked their way down to hypotheses that are *quite* hard to test. But less sophisticated believers are frequently calling on Gods whose existence *can* be falsified[1] and very often make wrong predictions as a result. Not that that stops them.
Now. "Boltzmann Brains can exist" is also a very, very difficult hypothesis to test. To do it with an actual experiment, one would need to wait a rather impractically long time or observe a rather impractically large amount of space. Otherwise the closest you can come is working very carefully through the implications of physical laws, checking to see if anything *precludes* their existence. But science makes all sorts of predictions that are hard to test. That doesn't mean they're *nonsense*. That just means they're...untested predictions. It happens. We also don't know (for example) that nuclear pulse propulsion or antimatter rocketry would work the way we expect. We have pretty decent reasons to *expect* they would, but if the ultimate arbiter of reality is to be experiment, we can't claim to have done the experiments yet[2]. That doesn't mean they're nonsense. It just means they're well-supported, but untested. Boltzmann Brains are rather less well-supported, but they're not a nonsense idea. They're just a weird, maybe-wrong-but-also-maybe-not consequence of our current understanding of physics.
By contrast, a Boltzmann Yahweh is a different creature altogether. We *know* brains exist. We *know* roughly what they're made of and how they fit together. We have no clue whatsoever what goes into the making of a Yahweh, or whether it's the type of thing that even *could* exist. If there's some combination of atoms that fits together into a Yahweh, then Boltzmann Yahwehs are every bit as reasonable as Boltzmann Brains (though unless a Yahweh is very small, they will be much less probable).
"why couldn't the biological systems of a dead Yeshua ben Yosef spontaneously repair themselves for him to rise from the dead and scare the bejesus out of Mary Magdalene?"
They could, of course. That's basic thermodynamics. The more standard example is the un-breaking of an egg. It violates no laws of physics for an egg lying broken on the kitchen floor to spontaneously re-assemble itself and jump back onto the counter. We know this because the laws of physics are time-reversible (at least to a good enough approximation for that). But a broken egg is a much higher-entropy state than an unbroken egg, so the reassembly is very, very, VERY improbable. You would have to drop an unimaginably large number of eggs and wait an unimaginably large number of lifetimes to see it happen. And you'd have to re-load the world from the day of the Crucifixion an even larger number of times before you got a spontaneously-reassembling Jesus.
[1] Like the sort that helps your team win a football game.
[2] In fact, we could get really nitpicky and play this game with existing tech. Do we know for sure that the next iPhone to roll off the assembly line won't hit some hitherto unseen physical constraint and totally stop working? Not until we do the experiment!
What is your proof for the non-existence of gods or God? Climbing to the top of Mount Olympus only proves that Zeus and his kin don't live there — not that they don't exist — nor that they may have resided there in the past.
>science requires falsifiable propositions that we can put to experimental tests.
No it doesn't, see https://en.m.wikipedia.org/wiki/Duhem%E2%80%93Quine_thesis
I think you think that the Duhem-Quine thesis says something that it doesn't. D-Q basically claims that scientific hypotheses cannot be tested in isolation because they are always part of a larger web of assumptions and theories. Big deal.
To paraphrase Thomas Kuhn: If Quine’s extreme holism were true, it would be impossible to determine whether one theory is better than another since all theories are interconnected.
And Karl Popper argued that while hypotheses are tested within a web of assumptions, scientists can still subject them to falsification by designing critical experiments that make specific predictions. Although Popper acknowledged that auxiliary hypotheses exist, he insisted that science progresses by deliberately attempting to refute theories and refining them accordingly. For instance, Einstein's Theory of General Relativity presented astronomers with a falsifiable hypothesis. It was tested in 1915, when it successfully explained the discrepancy between Mercury's observed orbit and what Newtonian gravity predicted. This was the first of many confirmations of General Relativity, and GR stands alone as a solid theory, although Quantum Gravity may modify its underpinnings.
For starters, rationalists are supposed to understand how probability works. The more miracles something requires, the *less* likely (exponentially) it is.
Universes with a Boltzmann Brahma that exists for a nanosecond? Sure, they are extremely rare, but why not. Universes where the Boltzmann Brahma exists for *two* nanoseconds? They are extremely rare even among the former, but why not. Etc. The problem with Boltzmann things is that every additional nanosecond of their existence seems to require an additional miracle.
As an alternative, consider a rule-based universe. Yes, its existence is itself a miracle, but... that might be the last miracle you need, if afterwards all it does is follow the rules. One miracle is more likely than zillions of miracles happening in the right sequence.
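A toy version of the "every nanosecond is another miracle" arithmetic, with an assumed per-nanosecond probability p chosen purely for illustration: independent miracles multiply, so persistence is suppressed exponentially.

```python
def survival_probability(p: float, n: int) -> float:
    """Chance of n consecutive independent 'miracles', each of probability p."""
    return p ** n

# Each extra nanosecond multiplies in another miracle, so improbability
# compounds exponentially: twice the lifespan is not twice as rare but
# rarer by a full factor of 1/p.
p = 0.5  # assumed per-nanosecond miracle probability, purely illustrative
print(survival_probability(p, 1), survival_probability(p, 2), survival_probability(p, 10))
```

This is why the one-off miracle of a rule-following universe beats a sustained chain: a single factor of p versus p raised to the length of the chain.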
I agree rationalists are supposed to know how probability works. For instance, the odds that a ball will fall on a particular number on a roulette wheel are 1/38. But the idea that a bunch of atoms will suddenly come together and create a roulette wheel is not within the realm of probability.
"But the idea that a bunch of atoms will suddenly come together and create a roulette wheel is not within the realm of probability. "
Why not? What law of physics is being violated?
First, you've got to deal with the Second Law of Thermodynamics, which states that entropy (disorder) always increases in an isolated system. Life exists on Earth because we're not in an isolated system (we're getting continual energy input from the Sun). Imagining that Boltzmann Brains or functioning roulette wheels will appear out in space from interstellar gas would violate the Second Law.
Second, everything we've been able to observe in our Universe, at least above the scale above quantum mechanics, can be explained by Leibniz's Principle of Sufficient Reason (PSR), which states that nothing happens without a reason or sufficient explanation. Lee Smolin discussed this in his book _The Singular Universe and the Reality of Time_. Basically, you can't get something from nothing — at least as the Universe exists now.
Third: Our universe — at least as it exists now — functions under the Principle of Least Action (PLA), which plays a fundamental role in classical mechanics, Quantum Mechanics, and Relativity. For instance, light follows the path that minimizes the travel time between two points. In Relativity, objects in curved spacetime follow geodesics that extremize the action (from which Einstein derived his field equations). And in Feynman’s interpretation of QM, a particle technically explores all possible paths, but the dominant contribution comes from paths that minimize (actually, that "extremize") the action. So water isn't going to run uphill. And atoms are not going to diverge from their courses through interstellar space to form Boltzmann Brains or roulette wheels (of either the US or European variety).
The Second Law of Thermodynamics is only true statistically, Boltzmann Brains live entirely within the gaps this creates, which is also why they overwhelmingly tend to disappear right away and always disappear eventually. The Principle of Sufficient Reason is not a widely accepted scientific law, Lee Smolin is a bit of an iconoclast. The Principle of Least Action is not violated, the atoms are following their courses not diverging from them.
If you want to believe that Boltzmann Brains are popping in and out of existence, go right ahead. If you want to believe in a creator-deity that launched the Singularity that caused the Big Bang, that's your prerogative. If you want to believe that a creator-deity created the planet Earth in seven days, layered the geological strata with fossils, and created the cosmic microwave background to make the universe seem older than it is and fool unbelievers, you're free to do so.
Just don't call any of those ideas scientific.
Science requires hypotheses that are testable. If you can't gather the data because it's out of the range of your instruments or it's too rare to be observed, then the untested hypothesis is just a hypothesis — i.e., a belief without supporting data. Some beliefs may sound more scientific than others because they invoke mathematics or logic, but it's all speculation and belief until someone can prove them or disprove them with data.
Only the Second Law of Thermodynamics is a law of physics, the other two are loose heuristics at best. And the Second Law does NOT forbid such things. It just points out that they're very, very improbable. A clump of atoms of proper number and kind to form a roulette wheel can be arranged in an utterly staggering number of ways, only a tiny, tiny fraction of which resemble a roulette wheel. That right there is basically the *entirety* of the Second Law: disordered states are more numerous than ordered states, and thus more probable.
This is a quite standard part of a decent physics education: as I said elsewhere, the broken egg reassembling itself is a common illustrative example. If you think the second law *forbids* low entropy states, you have not learned it sufficiently well.
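The "counting states" reading of the Second Law is easy to make quantitative. Here 100 coin flips stand in for the egg's molecular degrees of freedom (an assumption purely for illustration): every microstate is equally likely, but ordered macrostates contain vanishingly few of them.

```python
from math import comb

n = 100               # coin flips standing in for molecular degrees of freedom
total = 2 ** n        # every flip sequence is one equally likely microstate

# "All heads" (the intact egg) is a single microstate...
p_ordered = 1 / total

# ...while "roughly half heads" (the broken egg) is a macrostate covering
# an enormous band of microstates. The ordered state isn't forbidden --
# it's just outvoted by a factor of ~10**30.
near_half = sum(comb(n, k) for k in range(45, 56))  # 45..55 heads
print(p_ordered)          # ≈ 7.9e-31
print(near_half / total)  # ≈ 0.73
```

Nothing in the model forbids the all-heads state; the Second Law here is just the observation that disordered macrostates dwarf ordered ones in microstate count.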
Did some creator deity inscribe the Laws of Thermodynamics on a stone and declare "These are my Laws, everything else is Heuristics"?
Laws of physics and heuristics are similar in that they both serve as guiding principles for understanding and predicting outcomes: the laws of physics predict physical phenomena with high precision, while heuristics offer rules of thumb that guide us to probable outcomes in a framework of less precision. Both have an empirical basis and are derived from observation and experience. Physics laws come from experimentation and rigorous testing, while heuristics develop from practical experience and patterns of what we observe.
And BTW, if you're going to discuss Lagrangians in Physics, the Principle of Least Action and the concept of Lagrangians are fundamentally interconnected in classical mechanics. The Principle of Least Action is the guiding rule that determines the true path of a system by extremizing this action. Seems kinda like a law when you look at it that way. Just sayin'... ;-)
Here's how I see this, as an amateur:
The MUH doesn't "defeat" any actual proofs of God, just like a sloppy sketch of a perpetual motion machine doesn't "defeat" the first law of thermodynamics. To achieve that, you'd have to show that the sketch translates into a working machine, i.e., the hypothesis is true.
Until you do that, at best the MUH provides an alternative to the "proofs of God," one that suffers from the same conceptual gaps. Which is better than nothing, because all of the supposed proofs rest on metaphysical assumptions, and if you showed that even if those held true they still wouldn't prove God specifically but something else as well, that would mean there has to be more than one flaw in one of the approaches.
The glaring fundamental flaw in the MUH that Scott acknowledges above, but glosses over, is that you can't define something into being. The Mandelbrot set is actually a nice example. Sure, it exists in the realm of mathematical possibilities, but in a sense, only when people had access to computers to approximate the math numerically did it take steps from the potential to be instantiated as a fancy poster in a shop for psychedelia. Are there fractals out there in mathland that no living creature has seen? Probably.
Soooo... the "proofs of god" are metaphysical BS, and the MUH is metaphysical BS until someone patches that gap convincingly. And I got to stop now, the guinea pigs are hungry.
On Kolmogorov complexity and compressibility:
· "[...] my impression was that this is a useful hack, but not perfectly objectively true, because there is no neutral programming language, no neutral compiler, and no neutral architecture":
Is this not what a Turing machine is? The simplest theoretically-possible, entirely language-agnostic, entirely universal (if the Church-Turing thesis holds) computing system? I would have said that measuring Kolmogorov complexity using a real-world programming language would indeed be a useful hack, but measuring the Kolmogorov complexity of X on a Turing machine means something akin to "What is the simplest possible rules-based process that can reproduce X in perfect fidelity?"
I think that what real-world programming languages are doing is making the program simpler by moving complexity from the program into the machine. This is frightfully useful for real-world programming where we have to scratch-write each new program but we can copy-paste machines (which are themselves mostly made of existing programs)*, but since transferring complexity in this way absolutely must incur an overhead** it's not useful/relevant for talking about the simplest possible rules-based process for producing any given output, which has to be a Turing machine: anything simpler wouldn't be universal and anything more complex would be reducible to (Turing machine + overhead).
* Which is why real-world programming is done on like a dozen layers of machines built upon machines built upon machines.
**I was far too hungover in my lectures on the subject to be certain, but I'm somewhat confident that this has in fact been proven. I'm afraid all I can offer now is an intuitive not-quite-proof, though: the same program (where "same" means something like "produces an isomorphic output") will always run faster and in less memory in a dedicated hardware arrangement of transistor logic gates than it would in the Assembly language; faster and in less memory in Assembly-on-hardware than in C-on-Assembly-on-hardware; better in C-on-Assembly-on-hardware than C++-on-C-on-Assembly-on-hardware, better in C++-on-C-on-Assembly-on-hardware than in Java-on-C++-on-C-on-Assembly-on-hardware...
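The "useful hack" version of language-dependence is easy to see with off-the-shelf compressors (the choice of zlib and bz2 here is just for illustration): compressed length upper-bounds Kolmogorov complexity as measured in that "language," and different languages disagree, but only by bounded overhead, which is the content of the invariance theorem.

```python
import bz2
import random
import zlib

random.seed(0)
regular = b"ab" * 1000          # highly patterned input
noisy = random.randbytes(2000)  # pseudo-random input of the same length

# Each compressor is a stand-in for a "programming language": the length
# of the compressed form upper-bounds the input's Kolmogorov complexity
# in that language. The two languages disagree on the exact numbers, but
# only by a bounded overhead, and both agree that the patterned string
# is far simpler than the noisy one of identical length.
for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress)]:
    print(name, len(compress(regular)), len(compress(noisy)))
```

A Turing machine plays the same role as these compressors, just stripped down to the theoretical minimum, which is why it's the natural reference "language" for the non-hack version.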
· "Since the universe actually has to implement the solution we come up with, there can’t be any ambiguity. But how can the cosmos make an objective cosmic choice among programming languages":
The "the universe has to.." phrasing is confusing me a bit, here! I thought that these mathematical objects were according to the theory never produced and exist-without-beginning outside of the universe, but that we were interested in hypothetical instructions for producing them nevertheless (on an equally-hypothetical Turing machine that has, amongst other things, infinite memory..) because we're curious about the objects' relative complexities and Kolmogorov showed that "length of simplest-possible instructions for producing an object isomorphic to [mathematical object]" is another way of saying "complexity of [mathematical object]"?
· "You can design a compression scheme where the entirety of the Harry Potter universe is represented by the bit ‘1’ ":
I think the bit-length of PotterZip is actually longer than the bit-length of the uncompressed PotterVerse! In order to know _exactly what_ it is supposed to represent by '1'*, PotterZip must contain within itself a structure isomorphic to the PotterVerse**, and it also.. er, "must" (once again I didn't pay enough attention in university to remember the actual proof of this; sorry!) contain a structural overhead. (And even if we did assume no overhead whatsoever, the minimum bit-length for the PotterZip _system_ (machine + program, compressed data + decompression algorithm, however one wants to think of it) would still be the bit-length of PotterVerse plus the '1'.)
This problem doesn't exist with actual (lossless) compression schemes: they genuinely do contain everything you need within them to losslessly reproduce the full version when given the compressed version as input, and for sufficiently-large uncompressed data the bit-length of 'compression machine + compressed data' genuinely is shorter than the bit-length of the uncompressed data.
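That claim about real lossless compressors is easy to check concretely; here's a quick sketch using Python's standard-library zlib (the repetitive sample text is just a stand-in for any large, redundant input):

```python
import zlib

# Highly repetitive "uncompressed data" stands in for a large redundant corpus.
data = b"All work and no play makes Jack a dull boy. " * 1000

compressed = zlib.compress(data, level=9)

# Genuinely lossless: the original is recovered exactly from the compressed form...
assert zlib.decompress(compressed) == data

# ...and for sufficiently large redundant input, the compressed data (plus a
# fixed-size decompressor) is far shorter than the uncompressed data.
print(len(data), len(compressed))
```

The decompressor itself has constant size, so for large enough inputs the "machine + compressed data" total really does come out shorter, which is exactly the property PotterZip lacks.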
...this doesn't mean that PotterZip is useless, though! If you had _multiple_ PotterVerses to compress, the bit-length of the PotterZip system would be as short as "bit-length of one PotterVerse + ('1' x 'number of uncompressed PotterVerses in the input')", and PotterZip would output for you a handy count of how many PotterVerses you had.
* Or to think of it another way: "In order to know what it is supposed to output when given an input of '1' and set to "Decompress"..."
** Is Harry conscious in there?
Turing machines with a tape infinite in both directions or in one direction? Both have plausible claims to be simpler than the other.
Ooh, interesting question! I think it'd be Turing machines with an infinite tape in both directions, because there would be no overhead in simulating a one-way-infinite tape whereas a one-way-infinite machine would have a slight overhead in simulating a two-way-infinite tape.
(I also think probably you asked the question Socratically, to show me that it's not as straightforward as "Turing machines = simplest for Kolmogorov complexity"? If so - very clever; nice! I do still think that, even if *I* can't define the simplest-possible programmable system, there must nevertheless be some theoretical system that is simpler for K-complexity purposes, and I suspect that asking questions about how much overhead there is in having systems emulate each other might teach us useful things about that theoretical system...)
Consider this computational model:
Your program is some finite sequence of words only containing 0 and 1, for example: 001, 111, 00
The initial state is a word containing only 1s and 0s and a pointer pointing to the first word of the program.
One step of computation is the following: if the leftmost symbol of the current word is 1, then the program word the pointer is on is appended to the end of the current word; if it's 0, then we don't append anything. In both cases we delete the leftmost symbol and move the pointer to the next word of the program (going back to the first word if we were at the last).
Example:
Program is 001, 100, 00
Initial word: 111
111 -> 11001 -> 1001100 -> 00110000 -> 0110000 -> 110000 -> 1000000 -> 000000001 -> .. -> 1 -> 001 -> 01 -> 1 -> 001 -> ...
These systems are called cyclic tag systems and can compute exactly as many things as Turing machines can. Would you say this is more complex than a Turing machine?
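For anyone who wants to play with these, here's a minimal sketch of a cyclic tag system in Python (the function and variable names are my own); the first few steps reproduce the example trace above:

```python
def cyclic_tag(program, word, steps):
    """Run a cyclic tag system, returning the word after each step."""
    trace = [word]
    i = 0  # pointer into the (cyclic) program
    for _ in range(steps):
        if not word:
            break  # nothing left to delete; halt
        head, word = word[0], word[1:]   # delete the leftmost symbol...
        if head == "1":
            word += program[i]           # ...appending the program word if it was 1
        i = (i + 1) % len(program)
        trace.append(word)
    return trace

# The example from the comment above:
print(" -> ".join(cyclic_tag(["001", "100", "00"], "111", 6)))
# 111 -> 11001 -> 1001100 -> 00110000 -> 0110000 -> 110000 -> 1000000
```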
This is really interesting and taught me lots; thanks!
I think I would say (just guessing!) that Turing Machines might still be simpler because A) there would be lots of overhead in having a Cyclic Tag system simulate/implement a Turing Machine but not much overhead in having a Turing Machine simulate/implement a Cyclic Tag system, and B) even were the Cyclic Tag system itself simpler to describe than a Turing Machine (which it might-or-might-not be but let's examine the least-favourable case) it probably isn't *orders-of-magnitude* simpler to describe and so the overall complexity of [system + input] is probably still in the Turing Machine's favour when it comes to simulating/implementing the other system.
I think both A and B are required, here; A alone would mean that the Turing Machine is just more powerful, not necessarily simpler, and B alone would mean that the Machine itself was simpler but that this simplicity resulted in far more complex output.
I'm just guessing here, though! I accept your general point that there actually isn't one obvious system we should consider "the simplest" and use for purposes of K-complexity - but I do still think that probably there is some system that is in some sense consistently in some "sweet spot" when it comes to assessing [system + input] length (even if it's not the simplest-possible system by every metric and even if I personally have no idea what actual system it is)
Turing machines are not the simplest possible nor language-agnostic. The Church-Turing thesis also applies to JavaScript, C, Brainfuck, Haskell etc. A better candidate would be lambda calculus or SKI combinator calculus.
Okay, fair enough, I'm sure you know this stuff better than I do. It does seem as though we both broadly agree that there are candidate systems that meet the necessary criteria for fair determination of K-complexity (eg. "shortest possible complete description of [language/system + program]" as opposed to simply "shortest possible description of [program only] irrespective of language/system"), though!
I have no detailed comments except to say that philosophically, I like the mathematical universe idea. The neatest answer to "why is there something rather than nothing" is that there is not and could not be any difference.
And e.g., running Conway's Game of Life from the same initial conditions twice doesn't feel like making two things happen, it feels like making the same thing happen twice, which I guess means it's just one thing that's already there whether or not I run a program that lets me look at it
(Existentially, were I to think about this too hard it may or may not irk me, since in that case everything that can happen is already happening, good or bad, whatever I do. Though idk maybe there's an anthropic or TDT thing where making good choices means I'm in an omniverse where more probability mass is in universes where good things happen to mes)
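The determinism being appealed to here is easy to make concrete; a minimal Game of Life sketch in Python (representing the board as a set of live-cell coordinates is one encoding choice among many):

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next generation iff it has 3 neighbours,
    # or 2 neighbours and was already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

def run(seed, steps):
    history = [set(seed)]
    for _ in range(steps):
        history.append(step(history[-1]))
    return history

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

# "Running it twice" from the same seed yields the identical history:
assert run(glider, 10) == run(glider, 10)
```

Nothing about the second run adds anything the first didn't already determine, which is the intuition the comment is pointing at.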
> how can the cosmos make an objective cosmic choice among programming languages?
I think there is more hope than you have. The candidate perfect languages we have (e.g. lambda calculus) seem to have better properties than necessary from the theory. For example, I think that when implementing something as lambdas or as a logical formula you would rarely want to first create an interpreter. So we can probably exclude harry-potter-lang.
This may be a yet unknown but learnable thing. For example, we don't even know the precise computational model for our universe. But as we learned physics we learned that the model uses quantum computing, and retroactively we can see that it was mathematically natural all along. So I'd start with solving fundamental physics and maybe at some point we'll get to being sure of what the perfect language is.
I just want to mention that Nevin's a very good philosopher with lots of expertise in both arguments about God and the more technical points of probability theory, so if he has a reply it's probably worth listening to - not many academics have both.
Maybe I'm the "abstract concept of the number 5", care to refute that idea? You can't, because no part of that sentence makes sense - great, it's a "counterexample" to philosophical theory XYZ, let me write a blog post.
I'm a bit surprised "Tegmark's mathematical universe" is actually something taken seriously. I was bothered by the idea that maybe existence was just "mathematical possibility" about 25 years ago as a teenager but even then I relatively quickly realized that actually there are a lot of pretty strong notions that go into what it means to be a "possibility" and stopped taking the thought seriously as any kind of foundation. After that I went on to learn more about mathematics, now I take it even less seriously - mathematically, there are just so many ways to define things and group things as being "equal" or different or define structure etc that the idea seems even more nonsensical now.
Also: "By existing, you are a random draw from the set of possible conscious beings." is so nonsensical (your intuition for randomness comes from how "random" things behave in this universe, wtf even *IS* a "random draw"?). Sure, you can define a measure on infinite spaces but it's far from simple - just defining the usual uniform measure on a line segment is an undergrad lecture in mathematics in itself. Speaking as someone who actually understands mathematics - if you don't actually understand it, stop trying to use it in your philosophical theories. You can't just gesture at mathematics and use it to fill logical holes.
I really like Tegmark's hypothesis, but then again, I am a radical mathematical realist with a really strong intuition towards independence of mathematical truths which probably puts me at odds with most readers of this blog.
> I owe an explanation of how morality can be interesting/compelling/real-enough-to-keep-practicing without being objective; I might write this eventually but it will basically be a riff on the one in the Less Wrong sequences.
Link, please?
https://www.lesswrong.com/s/9bvAELWc8y2gYjRav/p/LhP2zGBWR5AdssrdJ
And follow the links in the article for details
Thanks!
Re: the points about Plato etc., I think you gave a hostage to fortune by writing that Tegmark's thinking was novel. This of course left the gate wide open for the bull to get out of the field and for all us nitpicky contrarians to go "well ackshully" 😀 The theory may well be novel, but it's not like nobody ever before this guy ever thought anything similar.
" But “dinosaurs really existed, it wasn’t just the Devil planting fake fossils” is exactly equally unfalsifiable. It’s a double-edged sword!"
There's going to be a lot of angry atheists angry about this. I get what you're saying but it's not the kind of thing that they like having pointed out.
Thanks for the Mandelbrot set, that brought me back to the 90s.
I used to think the idea of Boltzmann Brains was too weird and, indeed, nuts to even contemplate, but I'm beginning to change my mind. If I *were* something that just popped into existence randomly a few moments before, am currently AI-hallucinating the entire history of the universe, and will pop back out in another few moments - I'm telling you, that would make an *awful* lot of sense regarding my life.
"From the physicist's perspective, both "God did it" and "anthropics did it" communicate the same thing: that investigating why the universe is the way it is, is a waste of time."
Finally, I admit an aesthetic revulsion to the particular way Bentham is using “God” - which is something like “let’s imagine a guy with magic that can do anything, and who really hates loose ends in philosophy, so if we encounter a loose end, we can just assume He solved it, so now there are no loose ends, yay!” It’s bad enough when every open problem goes from an opportunity to match wits against the complexity of the universe, to just another proof of this guy’s existence and greatness."
I agree with both a'yiz on this, but I don't agree that "God created the universe and everything in it" does stifle investigating why the universe is the way it is, and please don't drag Galileo into this. There was room before and after him to work on "why does this thing do this in this way and not that way" without stepping on the toes of the Church, and indeed "God did it, no reason to ask more" was something not well-regarded by thinkers of the time. See, courtesy of the late Michael Flynn who brought this quote to my notice, William of Conches in the 12th century:
https://tofspot.blogspot.com/2012/01/common-misperception.html
"Or more directly, William of Conches in (iirc) the Dragmatikon:
"[They say] 'We do not know how this is, but we know that God can do it.' You poor fools! God can make a cow out of a tree, but has He ever done so? Therefore show some reason why a thing is so, or cease to hold that it is so."
A lot of the disagreement is over those who think of God as coming at the end versus those of us who think He came at the beginning, or indeed is outside of the entire frame of 'beginning' and 'end'. Of course God is perceived as necessarily complex if viewed as arising out of a series of increasingly ordered and complex entities, and then that complexity becomes an argument for 'so God could not have existed at the beginning, because the beginning was the ultimate simplicity' (I think Dawkins used that argument).
Metaphysics is hard, indeed. And when we get out of the empirical, concrete, observable 'kick a stone and stub your toe' universe, into the realm of Forms and philosophy, it gets wild and trackless.
Current status: still believing in God, still happy to read these speculations, still resolutely un-insulted because Scott never does write with intent to insult anybody and does argue all this in good faith.
>"I'm beginning to change my mind."
Sounds like the sort of thing a Boltzmann Brain would say.
The evidence is just piling up!
If anthropics is true , physics is finding out where you are in the multiverse.
Speaking as an angry atheist, I actually agree with you -- answering any question with "God did it by magic" is, philosophically speaking, equally as valid as saying "This happened as the result of natural processes which I can attempt to understand via further investigation". Given that the two answers are equally valid, I choose the one that ultimately enables me to have houses made of stone and warmed by fire, plentiful food, cars, computers, spaceflight, eradication of diseases, and all that other good stuff. I will cheerfully admit that doing so is just a whim of mine.
And yes, you can reframe the latter stance as "This happened as the result of divine processes that follow complex yet non-arbitrary rules which God chooses to strictly enforce at all times, and thus I can attempt to understand via further investigation"; but this is just a word-game that you can play, with no significant impact on anything. For the purposes of building cars and computers and all that other good stuff, it doesn't matter whether the Universe operates by some set of invariant rules that are external to ourselves because that's just how universes are; or whether the Universe operates by some set of invariant rules that are external to ourselves because that's how the eternal unchanging God likes to run things. The answer is the same either way.
>People bring up the fine-tuning argument as a proof of God. If I show that other things can create fine-tuning, then God is no longer proven. This doesn’t mean God definitely doesn’t exist. It just means that we’re still uncertain.
This just seems like playing word games around what counts as "proof". If you want to insist that "proof" refers only to arguments that grant absolute logical certainty, then fine, you can call fine-tuning "compelling evidence" for God instead if you want. But "Tegmark's Mathematical Universe Hypothesis shows God's existence is only supported by compelling evidence, not proven" is not a particularly compelling argument for atheism.
If people are happier believing the answer to "why is there something rather than nothing?" is 'because maths' instead of 'because God', let them. We're all going to die one day and we'll know then which is true.
Well, not if you just die. It's pretty hard to know anything when you're dead.
Here's another reason I think Tegmark's Multiverse is mostly untenable.
Ostensibly we're letting our mathematical objects be defined in symbols and calculating K-complexity over such descriptions. However, who decides how these strings of symbols map to the things that "exist" according to Tegmark?
It's trivially easy to just label everything with a unique symbol in a mathematically consistent way, since Tegmark grants infinite sets physical reality. So are all objects equally simple now? All the information content is in the rules that govern symbol manipulation, but that information is now shared in the interaction between an object and all others. Mathematically, think parent category vs its Yoneda embedding.
In practice the map between symbols and meaning grows naturally to be useful and even changes over time, so it's not at all helpful for thinking about supposed Platonically ideal objects.
Given how you seem to be getting a bit annoyed toward the end of the post, let me write a bit from the silent minority saying how much I enjoyed hearing you defend these arguments on your blog, and give such a cool hypothesis wide coverage.
I remember vividly first reading about the MUH (at first, via fiction in Egan's Permutation City, and shortly after in Tegmark's books and papers). It was like a lightning bolt, supercharging my existing intuive sense of mathematical Platonism into a more concrete sense of *this is the answer*. "The unreasonable effectiveness of mathematics in the natural sciences" had gnawed at me for so long, not because it seemed puzzling, but because I just couldn't imagine any other way for the world to be. And the MUH spells out all the consequences and solves so many problems at the same time.
I also really appreciate your accessible treatment of, and defense of, the subtleties around measures. Both that having one is natural, and that it's weird and unsatisfying to pick a particular one.
I remember reading some SSC post way back where you referred to Tegmark as "the man who figured out why there's something instead of nothing". Ever since then I've had a small secret joy in my heart knowing that you also enjoyed his theory. It's so great to see it making its way to the front page of the blog.
The section on Popper/falsifiability could do with a bit more charity, IMO.
Let me see if I can rephrase: What we really do when debating hypotheses is we *test them.*
Knowledge is meaningful to the extent that it is (or has the potential to be) *prophecy*, that is, to the extent that it can contribute to actual predictions with actionable confidence. Without that, it's "mere explanation."
If Tegmark doesn't actually give us the ability to act on the world any differently than we would without it, even to the degree of developing future models or opening new lines of inquiry, then it's a belief that pays rent by providing emotional satisfaction, no different in principle from believing that the world was created when Odin slayed a big giant.
It's *nice* to have a satisfying explanation for the nature of the world, sure. But it's a bit weird to see ACX of all places discard the prediction criterion in favor of the satisfaction of having a 'right' answer. It's called 'Less Wrong' for a reason!
Knowledge is true to the extent that it tells you what reality is, beyond making predictions. LW/Yudkowsky style rationality isn't instrumentalism, the prediction-only approach, because they care about things like many worlds versus Copenhagen, and atheism versus deism. No Logical Positivist, He.
If there's a fundamental axiom of rationalism, it's "The map is not the territory."
TMU is a description- a map, in other words. And a map that can't be used to navigate is just a pretty picture.
LW rationalism absolutely has a strong instrumentalist bent- for example, one of the reasons for the early focus on atheism was to get people to internalize the fact that there will be no providential guard rails preventing them from wiping themselves out with AI. Big Yud had a specific (and explicit) reason for advancing that discussion.
"LW rationalism absolutely has a strong instrumentalist bent": that's an argument against "LW is not instrumentalist at all", which is not a claim I was making.
"The map is not the territory" is not intended to mean "realism is impossible", which would make scientific realism impossible.
Would you prefer "rationality is winning", then?
"My mental models of phenomena are lossy representations of a more-complex reality, and I'm going to assess them by their utility rather than declaring them to be sacred and unassailable Truth" is not a rejection of scientific realism. It's a practical, measured, and sane response to the uncertainties of inductive learning.
You seem interested in having satisfying explanations for things, whether or not those beliefs pay rent. I don't mean to tell you not to do that; it seems important to you. But let's not kid ourselves. The reality that hosts your brain is necessarily more complex than your brain, and any explanations you provide are nothing more than words, words, words. If those words are neither functional nor useful, then the explanations they provide are merely decorative.
I was hoping to see a response to my comment or a similar one on the original post. Quote is Scott's, response is mine:
> By existing, you are a random draw from the set of possible conscious beings. You can’t make a random draw from an infinite set, but the accepted solution is some kind of measure weighted by simplicity. So even though every possible mathematical object exists, simpler ones exist more. Most conscious beings exist in very simple universes, ones that (like Life) are just a few short rules which produce surprisingly complex behavior.
This feels like a band-aid solution to questions like, "If every possible universe exists, why wouldn’t we expect something bizarre, like gravity suddenly changing in 30 seconds?" A universe where gravity depends on time doesn’t seem infinitely more complex than one where it remains constant.
Yet, for every universe with constant gravity, we can describe infinitely many that, up until this point, had constant gravity, but will shift to conditions unsuitable for life in the next second. So why should we assume we exist in one of the rare, stable ones?
-----
Just to reiterate, all universes described in this comment are the same at the current moment, and will only diverge in the next second.
> A universe where gravity depends on time doesn’t seem infinitely more complex than one where it remains constant.
It's not *infinitely* more complex, just a lot more. The laws of gravity observed thus far can (at least aspirationally) be embodied in a few short equations, but in the alternative hypothesis you'd have to append:
> ...until time T=264862...[a number with ~60 digits; see "age of the universe in Planck times"]...505292, at which point the laws become the following: ...
This hypothesis is heavily penalized for its complexity, so although we can't rule it out with 100% certainty, we can basically ignore it.
At any rate, this isn't a problem with Tegmark's theory specifically. Even if you only think there's one universe, i.e. the one we live in, we are still in a similar state of uncertainty as to whether the laws governing it include that addendum or not.
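A back-of-the-envelope version of that penalty, under a Solomonoff-style prior where a hypothesis needing k extra bits of description is down-weighted by a factor of 2^k (the bit count here covers only naming the switch time, and is an illustrative estimate, not a real Kolmogorov complexity):

```python
import math

# Just the bits needed to write down the hypothetical switch time itself.
switch_time_digits = 60  # roughly the age of the universe in Planck times
extra_bits = math.ceil(switch_time_digits * math.log2(10))

# Under a simplicity prior of the form weight(h) ~ 2**(-description bits),
# those extra bits alone cost the switching hypothesis a factor of 2**200,
# i.e. about 10**60, before we even count the bits for the new laws.
print(f"extra bits: {extra_bits}")
print(f"penalty factor: about 10**{round(extra_bits * math.log10(2))}")
```

So even without ruling the addendum out, anything it predicts gets multiplied by a prior on the order of 10^-60, which is why it can "basically be ignored".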
> This hypothesis is heavily penalized for its complexity, so although we can't rule it out with 100% certainty, we can basically ignore it.
The problem is that, for every single universe with constant gravity, there are infinitely many universes that don't have constant gravity. Just for our universe, there's a universe where at t+1 gravity will double, one where it will triple, one where it will quadruple, etc etc. Not to mention t+1.0000000001, t+1.0000000002, t+1.0000000003.
With so many infinities to choose from, I just can't see there being a higher number of simple universes. The explanation is more mystic than most religions.
Complexity: For the change in gravity to happen to us it would have to also pin down the time we're on Earth for us to observe it, which adds a *lot* of extra complexity.
----
Anthropic: If gravity swung around a bunch since the beginning of time, that makes it notably harder for life to develop, and so we're less likely to exist in that universe.
----
Low Complexity: Another explanation is that gravity is not an isolated thing, all the forces are pinning each other down to a substantial degree. This makes it less of an independent set of a hundred numbers and closer to a dozen numbers. (or something, I don't know how I'd calculate that precisely without a lot of effort)
----
Multiverse: One consequence of a multiverse, which we plausibly have even with just QM, is that there could be a decent chunk of universes collapsing. I look around the world, but this doesn't isolate any single mathematical object, but rather a whole bunch of them. Instead of assuming we exist in one of the rare stable universes, we don't. Rather, we lose some amount of measure every unit of time due to reasons as you describe, the universe changes suddenly and we often simply die out.
This then makes it so that we're on the tracks where things are stabler: constant gravity, or at least approximately constant for now.
I don't think any of these completely defuse the argument you give, but I do think they make it less odd.
I don't think it's true that you can't "make a uniform random draw from a set of infinite measure". Here's a way:
1. Make a uniform random draw from the interval [0,1).
2. If the number you draw has infinite digits, reject it and go back to step 1. (Caveat: the probability of getting to step 3 at any single iteration is zero, but it will happen eventually if you have infinite time.)
3. Read off the number to the right of the decimal point. Congratulations, you have picked a natural number uniformly at random.
The paradox Scott mentions is not a contradiction at all: you just get different probabilities if you have different information, is all. (Just like how the probability that a woman with two children has two boys is different if you know at least one child is a boy and if you know the eldest is a boy.) (Also, remember that with infinite sets a probability of 100% does not imply certainty, though the converse is true.)
But also: Occam's razor is a good epistemic tool to favour hypotheses with fewer postulates, but if you postulate it as an essential ingredient of reality then you incur an additional penalty as per Occam's razor! From a more "fundamental" viewpoint: meaningful measures arise from properties of systems, not the other way around. If every conceivable (mathematically consistent) universe exists, and probability calculations over the space of all universes should use a measure based on simplicity, there has to be a reason why that measure and not another is correct, and this requires either the known parts of this setup conspiring to bring this about or a new postulate at some point in the theory, which must be penalised by Occam's razor.
Your caveat in step 2 is fatal. Measure zero is defined already at the level of sigma algebras; the product of countably many sigma algebras still gives zero measure to the set that has a number with finitely many nonzero digits along *any* of the axes.
The procedure obviously must fail in this way; if it didn't, then you could just put the probability distribution it creates on the natural numbers directly and draw against that. But no such probability distribution exists.
It is not fatal. The procedure is merely an example of how this can be constructed, and it still returns an answer within a finite (albeit very long) time.
Really if I wanted to be extra annoying I could have just pointed out that nobody disputes it is possible to make a random uniform draw from the interval [0,1), which has infinite elements. All I did was reduce the size of the set we're drawing from (by _removing_ uncountably infinite elements) to make it countable – if it was possible before, it has to be possible afterwards. That the set ends up with measure zero just means you'd need a different measure to make calculations with it (like [0,1) having zero area doesn't mean you can't make a random uniform draw from it). In practice, most calculations you'd like to make will yield annoying zeros and ones, but you can still see things like the probability of getting an even number being 1/2, etc.
You can't expect an event of zero probability to happen in a finite time, even a very long one.
Depends on what you mean by "expect" there. Given any finite amount of time, the probability it'll happen is zero. However, if you literally never stop it is certain to happen eventually – and that will be after a finite amount of time.
No, it can (and will) actually just never happen, no matter how many times you try.
>"it still returns an answer within a finite (albeit very long) time."
Are you saying that any injective function N->[0,1) must have a number in its range whose decimal represenation terminates? That's clearly false, but I can't see any other way to interpret your claim here.
*can*, not must.
I'm saying that if
a) There is a set A from which it is possible to make a random uniform draw (i.e., such that every element of A has equal probability of being picked)
and
b) There is a set B such that B is contained in A
Then it follows that it must be possible to make a random uniform draw from B.
What is your precise definition for "it is possible to make a random uniform draw from S"? If it means the same thing as "there exists a uniform probability distribution on S", then your claim here is false.
Such that no element is more likely to be picked than any other.
I appreciate we are talking of events with zero probability, so the probability of individual elements in A is the same even if the probability density distribution exists and is non-uniform. However, we can still compare the likelihood of different zero-probability events. Usually you would do this resorting to the probability density distribution, but this doesn't cease to be possible just because this density doesn't exist. (In fact, just like the probability density distribution allows you to calculate probabilities for sets of events which individually all have zero probability, depending on the topology of B you may be able to define different types of generalised measures which allow you to do the same for other types of [infinite] subsets. I'm arguing in this case you could do something like this to compare the likelihood of getting evens vs odds, etc.)
> But “dinosaurs really existed, it wasn’t just the Devil planting fake fossils” is exactly equally unfalsifiable.
I’m not sure why not, is it the mention of the devil? Or that fossils aren’t enough. Maybe I’m not following any of this.
> Somebody must go tell the Queen!
I was going to suggest that was impossible but I realise the writer supposes that there’s an infinite number of her hanging about somewhere.
I also feel, as a practical man, that not only is it not proven that every possible mathematical object has to correspond to physical reality, it's impossible. Realise that infinity is merely a mathematical abstraction, and that the number of universes we get, no matter how large, and no matter that they are being created exponentially fast, is still finite.
Re: Boltzmann brains, how long is the conscious observer in the bag of conscious observer marbles we're drawing from to develop these probabilistic intuitions about mental entities supposed to exist? A whole life? Do people intuit that they are the same conscious observer now that they were as an infant? If you consider your own status as a conscious observer to exist fleetingly, does this disrupt either the mathematical or aesthetic issues around Boltzmann brains? What other entities go in the bag of conscious observers - is it weird that we aren't cockroaches, when they outnumber us by so many orders of magnitude on planet Earth? What if an observer is the kind of thing you simply can't make countable? Why isn't the bag limited to the kinds of conscious observers that can reflect on Boltzmann brains, which could plausibly exclude them? I guess my intuitions around consciousness, as ill-formed as they are, are so different from the premises here that I'm left just posing questions.
I wasn’t saying this is a retread - I was saying this is original! I was just doing so in a way that some people could put down to the narcissism of small differences.
Anyway, there’s a reason why people said this is Plato or Descartes or Leibniz but no one said it’s Aristotle or Bacon or Hume or Carnap. This is very much a rationalist worldview, and not an empiricist one.
Sometimes the internet rationalists seem to be like empiricists, but this is one of the cases where the connections to historic philosophical rationalists come through.
I think the resemblance to philosophical rationalists is because the MUH implies you can have some sort of a priori knowledge about what is likely and what isn't by just looking at the simplicity distribution. Alas, in practice, this is computationally intractable.
I second the lambda calculus as the natural candidate; it contains nothing that isn't already implied by the existence of universes with the following properties:
- There are things
- There are ways in which things can change
- Sometimes such changes actually happen
That's my own reading of the three rules of the lambda calculus and others might disagree, but it seems fairly reasonable to me.
Arguably there are simpler alternatives like SKI combinators (or just SK, since I is redundant) but in my subjective view, they don't follow naturally from the fundamental properties of universes with time/space/matter the way the lambda calculus does.
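To make the "I is redundant" aside concrete, here's a toy SKI reducer (my own sketch; the term representation and leftmost reduction order are arbitrary choices) showing that S K K behaves exactly like I:

```python
# Toy SKI-combinator reducer. Terms are 'S', 'K', 'I', or ("app", f, x).
def app(f, x):
    return ("app", f, x)

def reduce_step(t):
    """Apply one leftmost reduction; return (new_term, changed)."""
    if isinstance(t, str):
        return t, False
    _, f, x = t
    if f == "I":                       # I y -> y
        return x, True
    if isinstance(f, tuple) and f[1] == "K":   # (K y) z -> y
        return f[2], True
    if (isinstance(f, tuple) and isinstance(f[1], tuple)
            and f[1][1] == "S"):       # ((S y) z) w -> (y w) (z w)
        y, z, w = f[1][2], f[2], x
        return app(app(y, w), app(z, w)), True
    f2, changed = reduce_step(f)       # otherwise reduce inside
    if changed:
        return app(f2, x), True
    x2, changed = reduce_step(x)
    return app(f, x2), changed

def normalize(t, limit=100):
    for _ in range(limit):
        t, changed = reduce_step(t)
        if not changed:
            break
    return t

# I is redundant: (S K K) a reduces to (K a) (K a) and then to a.
ski_i = app(app("S", "K"), "K")
assert normalize(app(ski_i, "K")) == "K"
```

Whether this counts as "simpler" than the lambda calculus is exactly the subjective judgment call the comment above is making; the two systems are interdefinable.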
Just out of curiosity, did ChatGPT get the answer correct here? (https://chatgpt.com/share/67ba1cd0-0760-8001-a069-59a18e6f4504) - in response to my question: "I am having trouble understanding one sentence from this passage: "Argument from comprehensibility: why is the universe so simple that we can understand it? Because in order for the set of all mathematical objects to be well-defined, we need a prior that favors simpler ones; therefore, the average conscious being exists in a universe close to the simplest one possible that can host conscious beings." Specifically, I don't understand why "Because in order for the set of all mathematical objects to be well-defined, we need a prior that favors simpler ones" is necessarily true, or what it means. Can you explain what the author most likely meant by "Because in order for the set of all mathematical objects to be well-defined, we need a prior that favors simpler ones", and why it would be a true statement?"
The ChatGPT link is login-gated; what does it say?
> Because in order for the set of all mathematical objects to be well-defined, we need a prior that favors simpler ones
This would be more clearly stated as:
> Because in order for *a probability distribution over* the set of all mathematical objects to be well-defined, we need a prior that favors simpler ones
The text of ChatGPT's reply:
This sentence is making a claim about how we should conceptualize “the set of all mathematical objects” in a way that allows it to be “well-defined,” and it argues that this requires a prior that favors simpler ones. Let’s break this down step by step.
Step 1: The Context—Mathematical Objects and the Universe
• The passage is discussing why the universe is comprehensible to us.
• The underlying assumption is that our universe is just one mathematical structure among many possible ones.
• If all possible mathematical structures exist in some sense (a view associated with the Mathematical Universe Hypothesis), then we need some way to organize and reason about this vast “set of all mathematical objects.”
Step 2: What Does It Mean for the Set of All Mathematical Objects to Be Well-Defined?
• A “well-defined” set is one where we can meaningfully talk about its members and apply probabilistic reasoning to them.
• The problem is that if we treat all mathematical objects as equally probable, we run into an issue: most mathematical objects (in a broad sense) are hugely complex and incomprehensible.
• There is an infinite space of highly intricate mathematical structures, many of which would be chaotic, uncomputable, or resistant to any form of general understanding.
Step 3: Why Do We Need a Prior That Favors Simpler Ones?
• If we do not favor simpler mathematical objects, then our probabilistic reasoning about which universe we are likely to find ourselves in becomes incoherent.
• The idea of a “prior” comes from Bayesian probability: it is a way of assigning initial likelihoods to different possibilities before seeing evidence.
• A prior that favors simpler mathematical objects means that simpler structures are assigned higher probabilities than more complex ones.
• This aligns with the principle of Occam’s Razor: given multiple possible explanations, the simpler one is more likely.
• Without such a bias toward simplicity, the “set of all mathematical objects” would be dominated by unfathomably complex structures, making any predictions or reasoning impossible.
Step 4: How Does This Connect to the Argument from Comprehensibility?
• The claim is that our ability to understand the universe is not a coincidence—it follows from the fact that conscious beings are more likely to exist in simpler mathematical structures.
• If complex universes were equally likely, we’d expect to find ourselves in an incomprehensible mess rather than a universe governed by elegant physical laws.
• By assuming a prior that favors simpler mathematical objects, we explain why we live in a universe that is mathematically structured in a way that allows comprehension.
Why Would This Be True?
1. Mathematical Objects Tend to Be Hierarchical: Simple structures are often building blocks for more complex ones. If we are drawing from a space of all possible structures, there is a natural ordering where simpler structures appear more frequently and are easier to define.
2. Computability and Predictability: More complex mathematical objects are often uncomputable or chaotic, making them poor candidates for a universe that supports stable laws of physics and conscious observers.
3. The Anthropic Principle: Conscious observers are unlikely to emerge in a highly complex, unstructured universe. A structured, understandable universe is a prerequisite for intelligent life.
Summary
The sentence is asserting that in order for the total “space” of all mathematical objects to be meaningfully organized in a way that allows probabilistic reasoning, we must use a prior that assigns higher probability to simpler structures. This is because if we didn’t, we’d expect to find ourselves in an incomprehensible and chaotic universe, which contradicts our actual experience. Thus, the comprehensibility of the universe is a result of the fact that simpler mathematical structures are favored in the underlying distribution of possible universes.
This confirms my stance that there's nothing here for religious believers to even argue with.
> Just as unicorns don’t exist (but we can say with confidence that they have one horn), so humans don’t have any special existence of the sort that requires Secret Fires (but we can say with confidence that they are conscious).
> Does the Mandelbrot set “exist” in a Platonic way? Did Iluvatar give it the Secret Fire? Can you run into it on your way to the grocery store? None of these seem like very meaningful questions to me, I don’t know.
So you have a lexical entity "existence" with certain semantic and mathematical properties, and those semantic and mathematical properties seem to interface with traditional theological arguments in some way. But this existence is defined in such a way that either unicorns do exist, or (considered another way) humans don't exist. But either way, humans and unicorns have the same degree of existence.
And whether or not something exists has nothing to do with whether you might encounter it. Actually being able to find the thing in the world is irrelevant to this definition of existence. The theory doesn't consider it meaningful.
So that means that this quality "existence" is defined, explicitly, intentionally, in such a way that no living human being could possibly care about it. As the kids these days say, "go off I guess": Go ahead and say God "exists" or doesn't "exist". By definition, it doesn't matter. If you think it does matter, you've mistaken this "existence" for the kind of existence that means you might run into something on the way to the grocery store. It's not that, and nobody is claiming that it is.
Put another way: If somebody finds themselves caring about the results of this theory, they've made a category error; they've forgotten the explicit definition of one of the key terms. This theory can't possibly tell you anything that might impact your life; and if you think it does, it's because you've confused its specialized term "exist" with the common English word "exist". But the two have nothing to do with each other.
You *could* argue that:
- Tegmark expands the circle of existence. Now existence includes unicorns and humans and Boltzmann brains and 4d Mandelbrot demons. Plus everything that traditionally exists also does in Tegmark theory world.
- If God exists given the traditional definition of existence, He should trivially also exist given this expanded definition.
- The traditional proofs of God are rendered moot because the Tegmark theory provides alternate explanations besides God.
- If the proofs for God's existence are rendered moot within a theory that makes it *easier* for stuff to exist than before, then they must also be moot within the old commonplace theory.
(I wouldn't be convinced by this. But maybe something else along these same lines could save the Tegmark theory from complete theological irrelevance?)
Yeah, exactly. On the original Tegmark post, I think I talked some about how you could syncretize Christianity or Hinduism with Tegmark and leave them all pretty much unscathed. Probably fewer epicycles to syncretize it with Hinduism.
Which for me means that they're just talking past each other.
(And I think it's fine that Tegmark-God can't control the multiverse, since the multiverse doesn't exist anyway. Tegmark-God is all-powerful over everything that exists.)
> God (who must exist because everything that’s possible exists in the infinite set)
Unless the relevant definition of "God" contains an inherent contradiction, and thus falls outside of "everything that's possible."
> If somebody finds themselves caring about the results of this theory, they've made a category error; they've forgotten the explicit definition of one of the key terms.
If we're trying to understand MUH on its own terms, it's better to just ignore the concept of "existence" entirely. MUH is not so much an explanation as it is a dissolution, a demonstration that there is nothing to explain.
The problem is that before MUH and its ilk came along, people were already using "exist" in the equivocal way you describe, leading to a sense that there is some mystery about "Why does the universe exist [in the broader sense], when it could easily have not existed [in the narrower sense]?" This question is confused because it's using "exist" in two different senses. But Tegmark et al are forced to adopt this framing because it's what people were already using.
I recall a quote from Scott Aaronson (paraphrasing): "We need good philosophy, if for no other reason than to refute bad philosophy!"
I think we completely agree. Yes, there have been many confused theological proofs. Yes, Tegmark dissolves the question of existence. You probably understand what I'm trying to say just fine, but I'll explain again, because maybe I'll do a better job this time.
Imagine that somebody on the Internet tells me that a safe and effective Covid vaccine doesn't exist. So I tell this person that, actually, according to Professor Tegmark, it's best to ignore the question of existence entirely. And then I set out to demonstrate that everything exists and nothing exists, and unicorns exist equally with rhinoceroses, and the question of whether you might meet something on the way to the grocery store isn't relevant.
I'm obviously failing to address this person's claim. "In some far corner of the abstract algebraic multiverse, a safe and effective Covid vaccine exists," is not a helpful answer.
I'm saying that applying Tegmark to theology fails in the same way. For religious believers, God exists in the same way the Covid vaccine exists (plus also other ways). He's an important normal part of everyday life. So anything that presupposes Tegmark is just talking past us.
(I should give a disclaimer: mathematical theorems are eternal and absolute, and God is also eternal and absolute. For this reason, there's a superficial similarity between Tegmark's theory and theology. The philosophy that merges mathematical theorems with God is called Platonism, which is why we've got a thread full of comments with people going "this is just Plato dude" and annoying Scott. But most religions reject that equivalency as too simplistic in some important way, as far as I can tell.)
In the Tegmark multiverse, is it not possible for unicorns to exist? Maybe not the magical kind, but the 'horses with horns' kind?
"Unicorn" is shorthand for "something that doesn't exist". Perhaps a more philosophically rigorous example would be Nick Fury as portrayed in The Avengers. For Tegmark, Nick Fury and Joe Biden have exactly the same degree of existence. They're both pure abstract mathematical entities. Within the Tegmark theory, it doesn't even make sense to talk about Joe Biden existing and Nick Fury not existing.
Good deconstruction
> Falsifiability doesn’t just break down in weird situations outside the observable universe. It breaks down in every real world problem! It’s true that “there’s no such thing as dinosaurs, the Devil just planted fake fossils” isn’t falsifiable. But “dinosaurs really existed, it wasn’t just the Devil planting fake fossils” is exactly equally unfalsifiable. It’s a double-edged sword!
No, it isn't, because one edge actually cuts reality at its joints, and the other one is dull as a spoon. It is true that you can construct any number of hypotheses, all of which rely on some set of unfalsifiable axioms. And yes, you are free to pick whichever set of axioms you choose (by definition of axioms). But it just so happens that (at least) one set enables you to build predictive models of reality, and most of the others do not. If you assume that a nigh-omnipotent Devil planted dinosaur bones, that's it as far as the bones are concerned. If you assume that there exist natural processes that led to those bones being there, you can predict where other bones might be found, as well as many other things; and those predictions will ultimately enable you to control (some aspect of) reality at will.
The obvious counter-argument is, "All right, but what if I just adopt some version of solipsism? Then I can imagine whatever I want in my head and control reality that way, assuming that what I mean by 'reality' is the simulation in my head". The short answer is that you lack this ability -- if you disbelieve me, try holding your breath as long as you can by an effort of will, while imagining that you do not require oxygen. The long answer is that all of your distant ancestors, all the way down to primitive clumps of primordial cells, picked the sharp edge of the sword; the ones who picked the dull edge all died. And if you choose to retreat into the navel-gazing world of arbitrary abstract constructs, you'll get outcompeted just like they did, and your brilliant-sounding insights will be lost with you.
Even assuming for the sake of argument that a nigh-omnipotent devil planted dinosaur bones in order to mislead us, there's a case to be made that we still ought to thoroughly study them https://www.schlockmercenary.com/2012-02-13 since a lie well-crafted, by someone who knew the truth themselves, often contains clues toward that truth and/or the liar's broader agenda.
Every psychophysical harmony argument defender is doing the “You fell victim to one of the classic blunders” meme in their heads reading about how evolution supposedly debunks it.
"But since the early years are finite and the later years (potentially) infinite, there will be more Boltzmann brains than normal life."
No way. If the later years have enough usable energy to do enough computation to produce a Boltzmann brain, then they have enough usable energy to make a self-replicator. The idea that the density of spontaneously-formed Boltzmann brains in the Multiverse is greater than the density of instances of the spontaneous formation of a self-replicator is utterly, beyond astronomically, absurd. And a timeline containing a single self-replicator surely has an also-beyond-astronomically greater expected number of consciousness-moments of evolved brains than the expected number of Boltzmann-brain consciousness-moments in a timeline without life.
There almost definitely are more Boltzmann self-replicators produced than Boltzmann brains, but they die just as soon as the Boltzmann brains because there isn't enough free energy for them to actually function as self-replicators for long. (Specifically, regardless of how long they've lived so far, they have an overwhelming probability of running out of available external energy and switching from "growing" to "dying" at every subsequent moment).
i still think its pointless and even proofs of it or refuting proofs of it are.
no amount of logical proof for God can make up for lack of experience of Him. this is the catholic fallacy imo, no amount of the priest telling you something can make you into a real believer. its not "i am persuaded" as opposed to the road to Damascus.
the flip side is even with an ironclad mental model of atheism, if you have an experience that would be described as conversion-level, you would either need to call yourself insane/unwell and unfit to perceive reality, or the proof would crumble.
i feel like people who don't get religion think it starts with the theology, the coherent system and dogmas that exist via proof and authority. theology though is people trying to make sense of the founding experiences. even believers forget the "making sense of" part and how tenuous it is. that we cannot define the mind of God.
to approach religion is accepting "mystery of experience" through faith, and often individually. The modern mind wants control, to the point it turns men into animal-machines to provide that ironclad theology. no mystery allowed. thats sort of inimical to religion; no system can ever explain why God makes a bush burn.
(also religious people need to remember it too, they were the ones who overemphasised theology in the first place. Word of Faith people come to mind, where they think confession or the correct language in a prayer compels God to act more: "speak those things that are not as though they are.")
its more pro. faith and experience are the basis of religion, logical proofs aren't really of as much value because thats not how we experience it any more than you start with a mathematical proof a specific person will become your friend.
it can be anti in a sense that believers forget that you cannot eliminate faith and intellectually guarantee it: religion is always seeing through a glass darkly, and i think people have to get used to that.
I feel confused by how rarely "anthropic principle" seems to be said in this discussion, given that I assume everyone involved knows the concept. Basically all the load-bearing parts here are independent of Tegmark's Mathematical Multiverse specifically! They're the logical conclusion of the anthropic principle, given a multiverse's existence! Existence, complexity, and the existence of fine-tuning all follow directly. Comprehensibility follows too - intelligence is based on pattern recognition, and pattern recognition evolved because it's useful, because the world on scales anywhere near human is highly predictable. If you have a sea of chaos, you're not going to get Darwinian evolution, even if at any given moment some sort of life is possible.
That said, I think kzhou7 has a point: the anthropic principle is, in general, surprisingly useless. At an absolute minimum it's insufficient: saying "X is true because it's necessary for humans to exist" is *not a scientific explanation*, and digging into why X is physically true is valuable, in terms of how scientific progress has proceeded. Of course, "God did it" is also not a scientific explanation in the slightest.
Since I went from atheist to Catholic with "moral realism" as a major factor, I wanted to take a swing at this:
"I find the moral knowledge argument ridiculous, because it posits that morality must have some objective existence beyond the evolutionary history of why humans believe in it, then acts flabbergasted that the version that evolved in humans so closely matches the objectively-existing one."
They don't match that well! And it's clear they don't have to. One of (a number) of strong moments for me on this front was a college evopsych class where we learned it's not uncommon for gorillas to court by infanticide. (~a fifth of gorilla infant mortality in some studies: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0078256)
Evolution does not necessarily get off of local optima. It looks for stable, not absolutely optimal. And there's no guarantee anyway that what is right "wins" in evolution's measure.
I had *hoped* that the good and the evolutionarily sustainable might dovetail better, but it's pretty clear there are a lot of ways to organize a society (human or otherwise) that work *well enough* from evolution's point of view and you need a different tool to figure out which align with the Good.
And... why do you think there's a moral optimum to be pursued in the first place? And more importantly, even if it does exist, why do you think the Catholic church has any idea of what that moral optimum is, if humans were never even optimized for morality in the first place? You're just going to take their word that they speak the will of God?
I don't really understand Scott's comment there, because whatever "objective morality" is, it is clear a lot of human behavior or even belief doesn't match it. But unlike gorillas, human beings can develop abstract ethical principles (like the golden rule, human rights), which is analogous to developing abstractions like geometry. If geometry has some objective ultimate truth value, these moral abstractions can be thought of as having similar value. Atheists don't see knowledge of the one to be any more evidence of God's existence than the other.
Ok. We can think of optima, or ideals, that are better than the de facto ethics. That doesn't mean they are pre existing entities in Plato's heaven; and could mean that the process of imagining them is the first step in constructing them.
But aren't all the standard arguments for the existence of God so facially invalid that equally dubious mathematical speculations are unnecessary? "Why is there something rather than nothing?" That's an interesting question to which we don't know the answer. But "Because God chose for there to be something" wouldn't even be an answer, because God is already something. Fine tuning: again, it's a mystery, but thinking it demonstrates the existence of God is classic Sherlock Holmes fallacy. Tegmark's idea is something you can pull out as an example for someone who can't grasp the idea that an observation may have explanations they haven't thought of, but it's not necessary. And so on.
Oddly, I say all this as someone who at least tries to believe in God. With some success, but a great deal of awareness of the philosophical weakness of my position.
> "Why is there something rather than nothing?" That's an interesting question to which we don't know the answer.
At least you admit we don't know. "God is already something" is part of the consideration; the answer is that creation is a gratuitous act of love.
Analogies are the divil to use, because someone always takes the wrong end of the stick, and one should *always* remember "not this, not this, is Thou". But let's take all the people on here arguing over "oh no, if UBI came in/if AI took our jerbs, that would be disastrous".
Why? Because people need meaning in their lives and they need work to have meaning. A lot of people would reply "to hell with that, I hate my job and would quit in the morning if I could live on a basic income and do what I liked". But some people really do need work to exist to be able to experience the fullness of living.
Work in itself is not the end, the experience is. By analogy, creating a universe is not in itself the end, having sapient beings with which to interact in the economy of love is the end.
There's people who don't want there to be a God because they don't want an absolute authority, they want to be their own god and make their own choices and create their own values. This can be phrased very stirringly and with great show of principle, and maybe even principle at the base of it. But also some of it "you're not the boss of me, nobody is". I do wonder if they'd accept a God who shares all their values to the nth degree and makes no demands on them. Possibly not even then, that would still be too reminiscent of having your parents around when you're an adult and feeling they are trying to tell you what to do, even if their advice is meant well.
On Boltzmann brains, I think I just realized a contradiction. In short, I think that brains which wonder if there are Boltzmann brains should overwhelmingly be those which exist in physical universes where the vast majority of brains that have that thought are not Boltzmann brains. So, in the mathematical universe, the set of simplicity-weighted universes should overwhelmingly favor actual minds over Boltzmann brains.
To explain: we first posit that the laws of physics we observe, which are consistent and coherent and imply the possibility of Boltzmann brains, are approximately correct. Then it seems like we assume that the Boltzmann brains which would exist would have randomly hallucinated the same class of physical laws which support Boltzmann brains. Why?
Only an insanely infinitesimal minority of Boltzmann brains should do that! That is, conditioning on Boltzmann brains existing, and having experiences such that they can consider laws of physics that apply to them, I would think that only an infinitesimal subset of those brains should think that Boltzmann brains are possible. (And that's after conditioning on the infinitesimal subset that have thoughts which are coherent at all; the vast majority of random fluctuations giving rise to brain-like states should be incoherent. Mostly jokingly: perhaps you're only likely to be a Boltzmann brain when you're first waking up and can't think coherently at all?)
I think the main problem with MUH is that the actual existence of mathematical objects seems implausible.
You could equally well say that the actual existence of physical objects seems implausible due to Heisenberg's uncertainty principle. Just because something can't be fully known doesn't mean it doesn't exist. (EDIT: the above comment was edited to remove "because of Goedel's Incompleteness Theorem". I don't claim this is an exhaustive answer to all reasons to doubt the actual existence of mathematical objects.)
"Equally well" is doing an enormous amount of work here. It's creaking under the massive weight it is bearing.
The actual existence of mathematical objects is implausible because we have no knowledge or experience of them actually existing. You cannot drop one on your foot!
We experience physical objects all the time; I'm tapping on one at this precise moment.
I wrote a post kind of about this idea here https://open.substack.com/pub/afunctioningcog/p/maybe-quantum-mechanics-is-not-fundamental?r=69fun&utm_medium=ios
Jürgen Schmidhuber’s ideas about computational universes address a lot of your points. Here’s the core framework:
1. The Great Programmer Theory
• All possible programs for all possible Universal Turing Machines exist mathematically.
• Every program corresponds to a universe.
• The fundamental “stuff” is program space, rather than physical matter.
• A program can contain nested subprograms that can act as observers or even other universes.
2. The Algorithmic Probability Distribution
• Each universe (program) is weighted by 2^-L, where L is its length in bits.
• Simpler programs (shorter code) naturally have higher probability.
• This distribution explains “why these laws?”—we observe simple laws because simple programs are most likely.
• Because programs can be taken to be self-delimiting (prefix-free), summing the weights 2^-L over all programs converges (Kraft's inequality), unlike a naive "infinite universes" scenario where probabilities become undefined.
3. Observer Moments
• Conscious experience unfolds within computations.
• An observer’s “moment” is just a particular state within some program’s execution.
• We observe computable universes because only those computations that generate observers can yield observation-states.
• Our observed reality is weighted by the algorithmic probability of the programs that produce it.
4. Speed Prior
• Schmidhuber refines the basic simplicity weighting by also penalizing long runtimes.
• Universes (programs) that take “too long” or produce mostly random output get exponentially down-weighted.
• This resolves concerns about “freak” observers (e.g., Boltzmann Brains): a gargantuan runtime producing them randomly is far less likely than a neat, efficient universe that quickly evolves stable observers. Hence, we should expect our experience to involve relatively simple, lawlike physics running on “fast” code.
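A toy sketch of point 2, assuming a self-delimiting (prefix-free) program encoding, which is what makes the 2^-L weights sum to something finite. The unary length-prefix encoding here is just an illustrative choice on my part, not Schmidhuber's actual construction:

```python
from itertools import product

# Toy prefix-free encoding: a program with L payload bits is written as
# '1'*L + '0' + payload, so no codeword is a prefix of another.
def codeword_length(payload_bits):
    return 2 * len(payload_bits) + 1

def prior_weight(payload_bits):
    # Algorithmic-probability-style weight: 2^(-codeword length)
    return 2.0 ** -codeword_length(payload_bits)

# The Kraft sum over all payloads up to length 10 stays below 1,
# so the measure converges even over the infinite set of programs.
total = 0.0
mass_by_length = {}
for L in range(0, 11):
    for bits in product("01", repeat=L):
        w = prior_weight(bits)
        total += w
        mass_by_length[L] = mass_by_length.get(L, 0.0) + w

assert total < 1.0
# Shorter programs carry exponentially more of the total mass:
assert mass_by_length[1] > mass_by_length[5] > mass_by_length[10]
```

Each length class L contributes 2^L payloads at weight 2^-(2L+1), i.e. total mass 2^-(L+1), which is why "simpler programs are most likely" falls straight out of the weighting.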
The way you describe time when talking about brains in the 'past' or 'future' also strikes me as wrong. Time is only a concept inside some of the computations. Outside of it, every program already exists and has already 'run'.
This is, by far, the most likely candidate for the fundamental nature of reality.
> But also, you can design a perverse programming language where complex concepts are simple, and simple concepts are complex. You can design a compression scheme where the entirety of the Harry Potter universe is represented by the bit ‘1’. Now the Harry Potter universe is the simplest thing in existence and we should expect most observers to live there.
Note that on a scale from zero to infinity, Harry Potter is still closer to zero than to infinity. So even this perverse language *on average* prefers simple universes over complex ones.
Wrote up half a response but realized chatgpt can do it better than me:
The compiler (or any description language used for encoding a program) is inherently included in Kolmogorov complexity.
Kolmogorov complexity of a string x, denoted K(x), is the length of the shortest program that produces x when run on a fixed universal Turing machine U. However, the complexity measure depends on the choice of the universal machine, such as a specific programming language or compiler.
The Role of the Compiler:
1. Dependence on the Universal Turing Machine: The Kolmogorov complexity is defined relative to a specific universal machine. If a different universal machine (e.g., a different compiler or interpreter) is used, the complexity might differ by at most a constant c, which accounts for the complexity of translating between machines.
2. Compiler as a Transformation: If we write a program in a high-level language and then compile it into machine code, the compiler serves as a transformation step. The compiled program is an encoded version of the original source, and the complexity of the compiled version would be at most the sum of the original source complexity plus the complexity of the compiler.
3. Invariant Up to an Additive Constant: The choice of a different universal machine (such as a different compiler) only changes Kolmogorov complexity by a constant c, since any two universal Turing machines can simulate each other with at most a fixed additional overhead.
Conclusion:
Yes, the compiler is part of Kolmogorov complexity indirectly through the choice of the universal machine. However, since Kolmogorov complexity is machine-dependent only up to an additive constant, this does not significantly affect asymptotic complexity analysis.
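To make the invariance point concrete, here's a toy sketch. True Kolmogorov complexity is uncomputable, so this uses two off-the-shelf compressors as stand-in description "languages"; the specific strings are just illustrative:

```python
import zlib
import bz2

def desc_len(data: bytes, compressor) -> int:
    """Length of one computable 'description' of data -- an upper
    bound on (and crude stand-in for) its Kolmogorov complexity."""
    return len(compressor(data))

s = b"ab" * 5000            # a highly regular 10,000-byte string
t = bytes(range(256)) * 40  # a different regular string

for data in (s, t):
    k_zlib = desc_len(data, lambda d: zlib.compress(d, 9))
    k_bz2 = desc_len(data, lambda d: bz2.compress(d, 9))
    # The two 'languages' disagree on the exact number, but both
    # assign far fewer bits than the raw length -- echoing how the
    # choice of universal machine only shifts K(x) by a constant.
    print(len(data), k_zlib, k_bz2)
```

The two compressors play the role of two universal machines: each gives a different length, but neither can disagree with the other by more than a bounded translation overhead.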
I wanted to answer to @Melvin but can't reach the send button.
I like his stance that Scott's discussion with Bentham's Bulldog, even if akin to a discussion about angels, at least focuses on the interesting stuff about angels instead of on how many fit on a pinhead. But I think it's bad to argue with God-provers on their point at all.
Because their point isn't just wrong. Either their whole proof is nothing but them lying to your face (they don't believe it themselves), or it is them being terribly confused ontologically.
And both can have dire consequences for everybody else.
I think one should not argue with them about how else the universe could exist if it wasn't created, but argue that it cannot have a cause for existing at all.
Consequently, they cannot claim that whatever they think is right or wrong is backed by divine authority. There are authorities, but no divine ones. The dictator -- or mightiest being of the universe, itself a part of it -- might share your values, but still everyone has to find their own.
I definitely want an answer to Oliver's question.
> But we know that it’s one-in-ten-billion and not one-in-two, because in fact you possess zero of the ten superlatives we mentioned earlier, and that would be a 1-in-1000 coincidence if you had a 50-50 chance of having each. So it seems like the universe must be finite rather than infinite in this particular way.
I don't think this argument works out. Consider the "world's tallest person" property, which you almost certainly don't possess. This is not the only property related to height; we might also mention the properties "world's second tallest person", "world's third tallest person" ... "world's shortest person". If there are 10 billion people in the world, then clearly there are also 10 billion such properties for the measure "height", and every person in the world will uniquely possess exactly one of them. Since the properties are mutually exclusive and equally (un)likely, I don't think you can privilege the "tallest person" property specifically, and use *that* as an argument.
As I've gotten to be an old man, I've come to the realization that it's useful to just remain agnostic about some questions. To just say I don't know the truth about this. And then the important question becomes is this a useful idea (rather than a true one.) In this class I would put, "Does God exist?" and "Do photons exist?" IDK, but I find both concepts useful.
“We live in a possible world. How come we have real conscious experiences? Because the schematic of Possible World #13348 says that the beings in it have real conscious experiences“
Isn’t this the ontological argument for God’s existence, in which one of the stipulations of the concept of God is that said concept exists? (One possible rebuttal to it as a proof being that at least intra-universe one can’t magic unicorns into existence by tacking “and they exist!” onto the definition.)
The tegmark multiverse is a big place. Somewhere out there is a godlike being. Somewhere out there are unicorns. Somewhere out there is a giant flying spaghetti monster. The question is whether or not they exist in our universe.
Well, the quantum/inflationary multiverse is still pretty big. So it's more about whether they exist in our reality branch and past light cone. Which they probably don't.
Will they exist in our future light cone? That depends on what any future humans/AIs decide to create.
As for the discussion of psychophysical harmony, either I'm missing something or I think you are. It's a topic I find intensely fascinating so I'd be very glad for any correction!
The counterfactual of having conflicting psychic and physical events being strange is exactly the point! If you believe in epiphenomenalism, where mental events like qualia have zero causal influence, a misalignment like that couldn't have any effect on evolutionary pressures.
If you're not an epiphenomenalist, e.g, you think qualia do influence your physical actions, then it's trivial to see why evolutionary pressures would select for them, and I would be very surprised if that was the argument. (I don't think I understand your read of the argument, based on your example of the person expressing bewilderment at their inability to cast their hand into fire. The conflict wouldn't allow them to express or even think that thought, any more than it would allow them to repeatedly cast their hand into fire!)
"Lucian Lavoie writes: I think the biggest flaw with Tegmark's argument is that consciousness just doesn't exist."
Unfortunately for Lucian, consciousness clearly does exist. We don't completely understand it, but we experience it; it does exist.
I agree that Tegmark's hypothesis can be reframed to not depend on consciousness.
"... consciousness is the primary challenge to any materialist conception of the universe and that I don’t understand it."
Assuming what you don't understand is consciousness, then that (reasonable) lack of understanding is the very reason that consciousness does not challenge materialist conceptions.
All conceptions of the universe (materialist or otherwise) are incomplete. That's why we do science: to fill in some of that incompleteness. If consciousness happens to fall into the realm of "not completely understood currently" then that is just one of many things still to be worked out. It does not challenge materialism especially.
"What we really do when debating hypotheses isn’t wait to see which ones will be falsified, it’s comparing simplicity - Occam’s Razor. Which is more likely - that OJ killed his wife? Or that some other [explanation is true]?"
Okay; is "simplicity" a euphemism for "more likely"?
To me, it looks like this Mathematical Universe Theory is what happens when a Materialist bites the Dualist’s bullet.
In the typical “hard problem of consciousness” debate the Dualist will say that physical matter can only correlate with consciousness, but it can’t fully describe consciousness itself.
If a particular arrangement of matter results in the thought “that is a red ball”
Then a computer (or LLM) which read the details of the arrangement would *know* what it’s like to have the thought “that is a red ball.”
If it didn’t, then that means the details of the matter-arrangement do not fully describe consciousness, since the computer has read all the details.
So either there is something else to consciousness, that is not encapsulated by the physical, or the computer, by just reading this description, has now gained the ability to be conscious itself since it can now fully “know what it is like to see a red ball” in the fullest sense.
So it sounds like, if you bite this bullet, you accept that, yes, the computer has now gained consciousness. And similarly, if you write down the description on paper, the paper has gained consciousness… since the arrangement of physical matter ***is*** the thought, any sufficiently detailed description of the arrangement is also a perfect description of the thought.
And since, as the dualist will say, it’s impossible to fully describe a conscious thought to a non-conscious being, that means that any sufficiently detailed description of the arrangement is itself consciousness.
And once you go that far, you don’t even need the matter or the computer or the paper to begin with.
The description of the arrangement exists as a mathematical object. And that’s where consciousness comes from.
"Simplicity weighting" might come from the nature of substructures. Each mathematical object appears not once, but many times in the landscape, because they show up as substructures inside larger objects.
Most mathematical objects are going to be kind of like random noise. Inside a large random noise object, you'll find smaller ordered objects at random, just as if you have an infinite random sequence of 0s and 1s, you can find finite sequences at random within it. By "most" reasonable ways of ordering the multiverse, simpler substructures will be denser in the landscape than more complex ones.
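A rough sketch of why shorter patterns are denser in noise (the noise string and patterns here are arbitrary; the underlying fact is that a fixed k-bit pattern appears at any given position with probability 2^-k):

```python
import random

random.seed(0)  # deterministic noise for reproducibility
bits = "".join(str(random.randint(0, 1)) for _ in range(200_000))

def count(pattern: str) -> int:
    """Count overlapping occurrences of pattern in the noise."""
    return sum(bits.startswith(pattern, i)
               for i in range(len(bits) - len(pattern) + 1))

# A k-bit pattern should appear roughly len(bits) / 2**k times,
# so each extra bit of 'complexity' halves the density.
for pattern in ("0101", "01010101", "010101010101"):
    print(len(pattern), count(pattern), len(bits) // 2**len(pattern))
```

Each time the pattern gets four bits longer its frequency drops by roughly a factor of sixteen, which is the sense in which simpler substructures are "denser" in a random landscape.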
For finding the probability of finding yourself in one world vs. another, you don't need an absolute complexity measure, only a relative one. So maybe it doesn't matter what programming language you use?
Finitely complex substructures infinitely dominate over infinitely complex ones, so you don't need to find the measure of weird sets within the landscape, either, as your probability of finding yourself there is zero.
The simpler the universe, the more often it appears, but more conscious observer-moments gives you more chances of finding yourself there. So there's a tradeoff: adding bits of information to your universe is worth it if those bits more than double the amount of experience (whatever that means). Tegmark's hypothesis seems to make two retrodictions:
1. That the universe will have simple laws. This is likely satisfied if the trend toward simpler laws continues in physics, and if our universe has a simple initial condition (an orderly Big Bang). Tegmark even suggested that our universe might contain almost no information.
2. That these simple laws will produce a large amount of experience (large number of observer-moments). This is satisfied if you take the Many Worlds interpretation.
I observe my own existence, and find myself to be an observer-moment within a sequence of observer-moments, which are collaborating with each other across time to form this thought, and we are in turn embedded within a simple structure with high measure on the landscape that also produces a large amount of such experience, and that is in turn a substructure of a large and likely noisy mathematical object.
But wait! There's no distribution over the whole landscape, so there being a specific top-level world leads to paradoxes of infinity. You can't observe it, either; each conscious substructure appears many times throughout the landscape, and there is self-locating uncertainty about which specific one you are. Need there be an answer to the question, "which, of these many identical copies of my experience, is the one that's me right now?" If we can't select a top-level world, perhaps there is no answer to which top-level world we're actually in. Perhaps we simply are the experience, not a specific instance of it?
The notion of an exact surrounding world is another possible sacred fire that could potentially be banished. You don't find yourself to be a world, but an experience, and if there are many possible mathematical objects that contain that exact experience, why must one of them be right and the others wrong? Think of the world around you as being in a fuzzy state, with a probability distribution over the possibilities. Your current experience exists within different worlds to different degrees.
It's also probably inaccurate to talk of "moments" of experience, because consciousness requires time to compute and isn't instantaneous. I'm not sure how to reconcile this with the intuition that I find myself located in time and not spread out. I envision a sort of branching path-identity through these fuzzy probability distributions over worlds, with different probabilities of going down different futures. Even just with quantum branching in one universe, there are worlds where cosmic rays hit you in just the right way to change your experience into almost any other possible mind, so in that sense path-identity makes you of one being with everyone else, just in a distant and unlikely way. The branching paths of conscious experience through the multiverse would be a single fully connected tree, like Yggdrasil.
You could draw a path such that the most likely world that surrounds it at the start is different from the most likely world that surrounds it at the end; something like the world-hopping in Permutation City. However, you don't go down a single path, but all of them, so it's not really meaningful to draw one line.
This is a great conversation that I find fascinating as a graduate student in theology who has studied the classical arguments for God and the counter-arguments to some degree. The Mathematical Universe Hypothesis (MUH) was new to me — although I recognize some of the underlying ideas.
Summing up, it seems that these are the strongest rebuttals to this line of reasoning:
1) Tegmark’s Mathematical Universe Hypothesis isn’t really an argument: It’s a tautology. He assumes: All mathematical structures exist. Our universe is a mathematical structure. Therefore, our universe exists because it is a mathematical structure. But this is circular reasoning. Tegmark isn’t proving we live in a mathematical structure; he’s defining reality in a way that makes his conclusion inevitable.
2) More fundamentally, Tegmark never justifies why all mathematical objects must exist physically. He treats mathematical structures as inherently real without evidence. There’s zero experimental support for this claim.
3) The reliance on cellular automata (Game of Life) as an explanation also falls apart upon closer scrutiny. Game of Life still requires a designer to set its rules. Who set the rules of physics? Plus, no one has ever observed a real-world system starting without a cause. Also, Game of Life never creates biological complexity or self-replicating organisms.
Again, we face the problem that the MUH doesn’t explain why mathematical laws exist, why our universe follows them, or why consciousness emerges. It simply assumes everything exists somewhere and calls that an explanation. Which leads to ...
4) The MUH suffers from the same problem as “God did it.” It stops at a brute fact rather than seeking deeper explanations. Instead of saying, “The universe exists because God willed it,” the MUH just says, “The universe exists because math.” This could be dubbed a "Math of the Gaps" fallacy.
Theists are often accused of using "God of the Gaps" reasoning; i.e., invoking God to fill in gaps in scientific knowledge. The MUH does the same thing with mathematics, treating it as an ultimate explanation, even though it doesn’t actually explain anything.
Commenters have also identified some other problems for the MUH worth recapping here:
5) One major problem is the Boltzmann Brains issue. If all possible mathematical universes exist, most observers should be Boltzmann Brains, or randomly assembled conscious minds floating in chaos. A fine-tuned universe like ours is far less likely than a universe filled with chaotic, fluctuating observers. Since we do not experience chaotic, incoherent consciousness, this is evidence against the MUH and in favor of theism.
One proposed counter to this is claiming fine-tuned universes should still outnumber Boltzmann Brain scenarios. But this is based on made-up probability numbers. Even if this were right, theism doesn’t have this problem at all. A designed universe would naturally contain structured, embodied consciousness.
6) Another problem aptly raised is the issue of induction. If every possible universe exists, then there are infinitely many where the laws of physics change tomorrow. Why should we trust that tomorrow will resemble today? Theism offers an orderly, stable universe because it assumes a rational designer. The MUH does not. The attempt to solve this by introducing a simplicity-weighting system is an ad hoc assumption with no real justification. On that note...
7) Tegmark claims simpler universes should be more common. But what determines which universes are “simpler”? What enforces this weighting? If all mathematical objects exist, why should they follow a probability distribution at all? Tegmark adds this assumption only to prevent his theory from collapsing, but this assumption is just as unproven as the rest of the MUH.
Ultimately, if the question is which is the stronger explanation—theism or the MUH—then I claim theism "wins."
Theism assumes that an intelligent creator exists and established the universe. The MUH assumes that all mathematical objects exist and generate all possible realities. But while theism at least explains order, laws, and consciousness, the MUH assumes universes "just exist" without any justification. In other words, the MUH assumes more things than theism does while explaining less. If you prefer fewer assumptions, theism is the stronger argument.
In summary, it's important to recognize that the MUH isn’t a scientific theory. It’s a speculative philosophical model that assumes what it needs to prove.
Which makes me wonder: Would it be fair to sum up this debate as one in which theists assert divine aseity, Tegmark and his supporters assert mathematical aseity, and everyone just reasons from there?
Based on his comments in this follow up, I think Scott's claim is intentionally slightly weaker than what you're arguing against (I won't comment on what Tegmark believes.)
You seem to think Scott/Tegmark is arguing from first principles, assuming things such as mathematical realism and simplicity-weighted probability.
I think instead he's just trying to break the theist's modus ponens which says: if the conditions of the universe are well-tuned for life, they must be set by a creator god.
He's just identifying a plausible alternative that *could* result in well-tuned conditions, thus a creator god does not necessarily follow from the presence of well-tuned conditions.
I think that's fine as far as it goes, but in order to make this "slightly weaker" claim, Scott also has to weaken the arguments. That is to say, these classical theistic arguments aren't well represented by the "modus ponens" formulations presented. At least: Those are not the most rigorous forms of the arguments.
Take the one you have raised (fine-tuning). Theists don’t argue that fine-tuning logically necessitates a designer. Instead, they argue that the best explanation of fine-tuning is design, meaning it's an inference to the best explanation (IBE) rather than a strict logical deduction.
By treating the theistic argument as if it were a rigid modus ponens rather than an abductive argument, one misrepresents it and makes it easier to "defeat." It's not quite straw-manning, but it is approaching that sort of argumentation.
N.B. To be clearer: An IBE doesn't claim logical necessity (i.e., no other possibility) but rather explanatory superiority (i.e., the best available explanation). Scott’s argument, by merely offering an alternative, doesn’t refute theism. It only suggests another possibility, which isn't the same thing. If that is all he was attempting, then that's fine. But then I'm not sure why he would claim: "Tegmark’s Mathematical Universe Defeats Most Arguments For God’s Existence." Maybe that was just click-bait? (It got me to read it!)
I don't think you need to say that mathematical objects "exist physically" for it to work.
You just need to say that the sub-mathematical objects that experience consciousness (us) perceive the entire mathematical set that they are a part of (the universe) as though it existed physically.
But that doesn’t seem to be Tegmark’s position (AFAICT). Tegmark asserts that mathematical existence IS physical existence, meaning that all mathematical structures that can host what you call 'conscious sub-mathematical objects' are real in the same way our universe is real. There is no "as though" about it; everything that can be described mathematically exists in the same ontological sense. Mathematical objects that describe physical laws are not just useful abstractions; they are identical to the physical world.
Your view, by contrast, seems to suggest that physicality is just a phenomenological illusion, something we merely *perceive* rather than something that *is.* But this shifts the argument toward antirealism (denying the independent existence of reality) or even solipsism (doubting whether anything outside one’s mind exists at all). Are you comfortable with those implications?
Also, what do you mean by "for it to work"? As a defeater for these classical arguments for God’s existence? If so, I’d be interested to see you develop that counter-argument more fully. Keep in mind that the classical arguments are, again, inferences to the best explanation (IBE), which are a form of abductive reasoning. To defeat them, you’d need to show that phenomenological illusion or some expression of solipsism is a better (simpler, more coherent, more explanatory) hypothesis. That seems like a hard case to make.
Right. All the conscious sub-mathematical objects would exist in the same sense that the universe exists in.
Namely the mathematical sense.
Like, I don't know if this is exactly what Tegmark holds, but I could imagine the position that nothing exists in any sense other than the mathematical.
What we call "physical existence" is really just mathematical existence.
So unlike in The Matrix where you can take the "red pill" and wake up in a flesh-and-blood world ... it turns out there is no flesh-and-blood world? The Matrix is all there is, and the universe is, at base, something like that streaming code?
I think it's not hard to show that it's simpler.
Classically people need to assert the existence of both the physical world and the mathematical structure of the physical world.
Saying that only the mathematical structure exists is self evidently simpler since it assumes the existence of one less thing.
The only question is can it explain the evidence, namely the evidence of consciousness. Like can mere mathematical objects be conscious?
Sure that does seem surprising. But as a dualist myself it doesn't seem that much more surprising than the claim that physical matter could be conscious.
But this assumes that mathematical structures are ontologically independent rather than merely descriptive. Sure, IF you assume that, then it's parsimonious to apply Occam’s Razor and conclude that a purely mathematical universe is "simpler." But, again, you have to assume that, which means you're begging the question against realism rather than proving the assumption itself.
Also, just a reminder that simplicity is only one criterion in an IBE. Explanatory power and coherence matter just as much. So even if we grant that ontologically independent mathematical structures are simpler on your view, we still have to answer crucial questions (like the one you yourself raised about consciousness) in a coherent and satisfying way. And that, it seems, isn't quite so ... simple.
N.B. I should also point out that if you accept dualism (mind and matter as distinct), then your view is no longer Tegmarkian, since the MUH is explicitly monist (holding that only mathematical structures exist). But I think you have already said you aren't defending Tegmark's view but rather a modified version of your own.
Tegmark's theory is very interesting and remarkable in itself, way more so than its use as ammunition on suddenly-revived internet God arguments.
My recommendation for today is leave the God fights aside, read this great explainer on Tegmark's cosmology, and make up your own mind: https://alwaysasking.com/why-does-anything-exist/
Okay, because I was initially pretty sceptical of Tegmark's theory and commented in line with that, I feel it's my responsibility to note that now I believe it's much more likely to be true. Here's an outline of what changed my mind (not arguments, just actions): reading the wikipedia article on the topic, thinking about the notion of "exist" a lot, thinking about what mathematics beings in a cellular automata might come up with, and reading the comments here (FeaturelessPoint's were particularly helpful). I might write a more detailed comment on my thoughts in the future, but now I don't have time, good luck to everyone!
Yes, please do write a more detailed comment. Or just explain what your insight is regarding the notion of "exist", I'm curious.
I'm not the best at explaining things or at English in general, so it's unclear how much value you will find in my explanation, but I can't say no to such a direct ask, so I will at least attempt to explain the "exist" part.
So, first off: I'm assuming all the usual sensible things: no god, no human-independent, objective moral facts, no dualist consciousness. If you believe in these, I don't think you will find much value in the following, so you should look at other sources first to deconfuse yourself. (Also, some theory of computation knowledge is probably needed, in particular you need to be able to intuitively understand how general a concept computation is)
Let's start by assuming that only this reality exists (the usual stance).
Let's simulate, on an ordinary computer (but a more powerful one than any actually available right now, imo), an appropriate cellular automaton (for simplicity) with intelligent life. They talk with each other, fight, think about consciousness, etc. So far there is nothing special here. If you accepted the stuff in the previous paragraph, then you agree that they do actually exist (it doesn't matter whether you are implemented with carbon or with silicon).
You pause the running program. (This step is probably not necessary, but with it, it's simpler to explain.) Have they stopped existing? You can get another program which, by reading the memory of the simulator program, shows you their cities. They are simply frozen, but clearly still exist.
Here comes the trick: Notice that it is simply your interpretation (and the interpreter program's too) that the bit string in memory is cities, beings, etc.. You could invert every bit in memory and write a new program which when showing you the cities interprets the memory inverted. Or another program which before interpreting flips the first bit in memory and then, if you flip the first bit in memory, that program too will still have the same output.
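The trick can be made concrete with a toy sketch (the 16-bit "world" here is an arbitrary stand-in for the automaton's memory, not a real simulation state):

```python
# A frozen 'world' is just a bit string plus an interpretation.
world = 0b1011001110001011  # pretend this encodes cities, beings, ...
WIDTH = 16
MASK = 2**WIDTH - 1

def interpret(mem: int) -> str:
    """Render the memory as a 'picture' (here: its bit pattern)."""
    return format(mem, f"0{WIDTH}b")

def interpret_inverted(mem: int) -> str:
    """An interpreter that expects every bit to be stored inverted."""
    return format(mem ^ MASK, f"0{WIDTH}b")

# Invert every bit in memory AND swap in the matching interpreter:
inverted_world = world ^ MASK

# The rendered 'cities' are identical -- nothing about the memory
# itself privileges one encoding over the other.
assert interpret(world) == interpret_inverted(inverted_world)
print(interpret(world))
```

Since the transform can be any bijection on the naturals, not just bitwise inversion, the same memory contents can be read as any other automaton state, given the right interpreter.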
Reality has no way to ensure that a given bit string in memory should be interpreted in a specific way. Okay, but then you could even interpret the number in memory as any other number; you have a completely general N->N map. But that means that the freeze-frame picture of their cities could easily be something completely different, even a freeze frame from a completely different automaton, or the previous step of the same automaton.
As we accepted that one of them exists, and there is no objective way to differentiate them, all of them exist: every possible step of every automaton.
Furthermore, the electrical charges in the memory of the computer also don't have a sign put on them by reality saying that they are numbers; only we interpreted them as a big natural number! We could easily interpret a carbon atom as that specific big number 'N'; it has exactly as much objective connection to 'N' as the electric charges do.
So this means that every possible Turing machine's computation exists if we just have something, absolutely anything, in our "starter" universe, because we can interpret that as any possible natural number.
Now, assuming our universe is also computable, we too could currently be in one of these simulated universes instead of being the "starter" universe; we have no way to tell. So we've got (CUH + at least one featureless point) as our conception of existing stuff. I think the point can be removed, and that is achieved by inspecting what happens in your brain when you think 'exist' from a third-person view, but I really should sleep now, so that's homework.
EDIT: Ah, this isn't correct unfortunately. My plan was that while assuming the previous existence concept, I transform the reader's view of reality to a state where it's easier to see the other existence concept, unfortunately I did not realise that I used the new existence concept in the derivation, so it fails. (From the view of the old existence concept I incorrectly assumed the existence of the incorrect map of a being existing in my map) A wholly new method is needed to illustrate why the new existence concept is desirable, if I ever come up with a linearization of my intuition about it, I will write a new comment.
Thank you!
I reply now to thank you but I will have to reread your message a couple of times before I can claim I understood. I mainly wanted you to be aware that some people read your comments.
I share your assumptions, for what it's worth (I don't think you meant that comma between "human-independent" and "objective moral facts").
I'm glad to hear my comments here were helpful to someone, whenever I comment here it's very easy to feel like my comments aren't communicating successfully and the only people who get what I'm intending to say are people who already know it, which is tiring. This comment makes commenting feel worthwhile, at least on this post.
His mention made me reread all your comments on this post! I genuinely admire your (and others) dedication. I usually start writing a comment, find it very tedious to explain all I need to explain, and quickly give up, thinking that nobody will read it anyway. I find it very difficult to change anybody's mind and hardly worth the cost. But I'm totally wrong on this. This entire discussion is evidence of it.
This whole conversation seems about as productive as humanities scholars arguing about whether a plane on a treadmill can take off, without bothering to learn the physics behind powered flight. Without the right mathematical grounding, armchair philosophy on a deeply mathematical proposition is going to lead to some wacky and incorrect conclusions (and not saying this post necessarily is wrong, but the *danger* of being wrong is exceptionally high). I'm reminded also of endless early 2000s internet threads on whether 0.999... = 1.
> If you pick any particular number - let’s say 408,170,037,993,105,667,148,717 - then it will be shockingly low - approximately 100% of all possible numbers are higher than it. It would be much crazier than someone trying to pick a number from one to one billion and choosing “one”. Since this will happen no matter what number you pick, the concept itself must be ill-defined.
There's a similar argument that you can't pick a real number uniformly at random from an interval (if you pick a random number between 0 and 1, any number you could pick will be less likely than 1 in a billion) - why is exactly one of these arguments persuasive?
Let's go back from the beginning, because I think people are not quite grasping the profundity of this Tegmarkian view. The basic postulates of this cosmological view are:
- self-existing mathematical structures are *already* encoding every possible computation, including the one that simulates this very universe's physical laws, as well as every other possible computable universe. (the "self-existing" part is an axiom of the theory; this universe being computable is a common hypothesis of physics; the rest is just a basic theorem of computation)
- this is enough for the biological organisms in these 'simulated' universes to be conscious of themselves and their surrounding universe, as long as they have the (simulated) physical feedback loops necessary to sustain individual sentience.
- this 'simulation' operates in the manner of a Universal Dovetailer. The Universal Dovetailer ("UD" from now on) is a computational structure (a "program" for simplicity) that enumerates and executes every possible program, step by step, in an interleaved way. See https://alwaysasking.com/why-does-anything-exist/#The_Universal_Dovetailer for details.
Now, this is not a claim that there is a big computer at the bottom of everything (in which case you'd rightly wonder who wrote the program, and why it's coded in perl, etc.). Rather, the claim is that the UD is a self-instantiating logical fixed point. Since the UD simulates every possible computation, it also simulates itself, and every possible variation of itself. But you can easily prove that, within the UD's sense of time (an ongoing computation creates a structure of time within itself), longer programs will be simulated exponentially slowly, simply because there are exponentially more of them, and they share the UD's finite computational capabilities. You simply cannot write a Universal Dovetailer in any programming language or mathematical representation that fails to simulate longer programs exponentially slower than shorter ones. (Also, the UD's finiteness need only be local; it can possibly be globally infinite and do a spatially infinite amount of calculations at each step, while still doing those interleaved steps one by one.)
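A minimal sketch of a dovetailer's scheduling, assuming the classic "stage n runs one more step of each of the first n enumerated programs" schedule (the 'programs' are just binary strings; nothing is actually executed). Shorter programs strictly dominate the step budget; in a real UD run forever, the 2^k-fold crowding at each length class is what produces the exponential slowdown described above:

```python
from itertools import product

# Enumerate 'programs' as binary strings in length-lexicographic order.
programs = [p for k in range(1, 9)
            for p in ("".join(b) for b in product("01", repeat=k))]

steps = {p: 0 for p in programs}

# Dovetail: at stage n, give one step to each of the first n programs.
for n in range(1, len(programs) + 1):
    for p in programs[:n]:
        steps[p] += 1

# Average steps received by an individual program of each length.
# There are 2**k programs of length k competing for attention, so
# a single long program's share keeps shrinking.
for k in range(1, 9):
    group = [steps[p] for p in programs if len(p) == k]
    print(k, sum(group) / len(group))
```

This is only the interleaving skeleton; a real UD would also need to decode each string as an actual program and run it, but the scheduling bias toward short programs is already visible here.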
So the simplest possible UD becomes a logical fixed point, because it's the one that gets simulated the most in *every* UD, including non-simple ones with useless epicycles. This is enough to ground the structural unicity of the simplest UD. If any logical structure has the capacity to be self-instantiating and infinitely generative, the UD is it, and due to any UD's bias towards the simplest instructions, the simplest UD wins out. From here on "UD" means this simplest UD.
This gives you a good basis to ground (accursed) anthropics. The outermost, most objective reference point you can find is not conscious observers within universes, or classes of them, or instants of observer-consciousness within worlds, but instants where the UD is simulating a conscious being. The steps where the UD is simulating inert matter like a stone obviously don't count for anthropics, because the stone (presumably) doesn't have the physical feedback loops that would allow it to sustain an individual subjectivity, let alone wonder what kind of world it finds itself in. And since the UD is (here, simplistically) modeled as a single-threaded computation, you can do stats on those instants, and ground anthropics on them.
So if you find yourself as a conscious observer within this Tegmarkian multiverse, the question becomes, what kind of world can you expect to observe?
The answer is simple: the mathematically simplest one that is still complex enough to give rise to a rich physics that can sustain evolutionary feedback loops. That's where the vast majority of computation steps instantiating consciousness are found in a UD.
And that happens to be a great match for the world we find ourselves in! Including discoverable physics based on relatively simple mathematical laws, with apparently arbitrary constants fine-tuned to make complex life possible. Everything that your theistic hypothesis explains, this explains too.
So let's compare this kind of theory to the philosophical theism that BB defends. We can define the theistic hypothesis as postulating an entity ("God") which is 1) self-sufficient, 2) generative, and 3) good. As BB has explained in detail, this predicts many of the features we find in the world, except for the bit where we'd (naively?) expect the world to be *more* good than it appears to be.
Now, self-sufficiency and generativeness are needed features of *any* possible cosmological foundation. You can't have a foundational theory without them. In this Tegmarkian view, the UD is also self-sufficient and generative. But in the theistic view, God is self-sufficient and generative *by fiat*. Here we actually have some structural justification for the UD to be self-sufficient (it's a logical fixed point), and generative (it's literally specified as a maximally generative program).
Note that God's goodness is not only problematic (see the epicycles of theodicy), but also an extremely onerous hypothesis. The sense of 'good' is one of the most complex things in human culture. Attempts to formalize it simply are also full of problems, pace Bentham and his defenders, human and canine alike. Like Scott hinted in one of his replies, the sense of "good" is an evolutionary product, the accumulated result of millions of years of biological evolution, plus ~100k years of much faster cultural evolution, of trying out and integrating strategies of collaboration/competition. To put *goodness*, of all things, at the cosmic level, as a fundamental self-existing element of reality is the most onerous hypothesis I can think of. Literally anything else would be simpler, including a tiger jumping at you right now out of nowhere.
And that's why the Tegmarkian view wins. It can do the whole job, including predicting a fine-tuned, comprehensible universe within which complex conscious beings develop a sense of the 'good'. And it only requires some maximally general postulates - pretty much the bare minimum to have anything at all.
Finally, I'll argue that the Tegmarkian view is *optionally* compatible with a spiritual, near-theistic view too. But that's optional; it's not a necessary part of the original theory.
For that, we start with the fact that biological life is compositional. You have life within life many layers deep — organelles within cells within organisms within superorganisms, and possibly higher (entire ecosystems are also not quite unlike organisms). Since bio life is basically feedback loops within feedback loops, and so is consciousness at the informational level, it's reasonable to think that consciousness may also be compositional, so there's consciousness within consciousness. All it requires, in both cases, is the combination of strong feedback loops with a clear enough demarcation between the (super) organism and the rest of the universe. So maybe individual cells are (very weakly) conscious; maybe brain hemispheres are; maybe an entire culture has an associated common super-consciousness; maybe even a tightly bound couple has one; maybe the entire Earth as an ecosystem does (that's the Gaia hypothesis). On the other hand, unnatural combinations like "my neighbor Peter and Donald Trump together" are not a conscious entity, because there's no strong natural demarcation that would put them together without including a lot of other stuff too.
If we assume something like this, then on a Tegmarkian view, there's no reason why the UD itself couldn't be seen as fulfilling the conditions for a global consciousness of its own. After all, the logical loops which make the simplest possible UD stand out within every possible UD's simulation of every other possible UD, are a bit like atemporal feedback loops too. But the UD is all about manifesting all possibilities without excluding anything, so beyond a sense of simplicity, it would be a completely neutral consciousness, quite unlike what we know as bio organisms.
And within that almost unimaginable whole, each computed universe stands quite separate from any other, being causally closed, so maybe each separate universe could have something like a cosmic consciousness to it too.
This would match pretty well with the reported experiences of spiritualists and mystics of all kinds. Not the part where they claim to break the laws of physics, that would still be impossible here, but the part where they report *feeling at one* or somehow entering into experiential contact with higher planes of consciousness, or God or gods. After all, "I" am a literal part of all levels of consciousness above me, from the smallest coherent human group I'm a part of, all the way up to this universe, and to the entire UD too. It's the stuff we're made of; in Indian terms, "atman = brahman". It's only a matter of tuning in, a question of attention. Hence the weird depths of meditation, which are hard to explain on a purely materialistic evolutionary basis.
Even more interestingly, this would match pretty well with the *variety* of types of spiritual experience, which individual religions usually have trouble accommodating. Shamanistic, Earth-based spirituality would be more like being in touch with Gaia; the historical God of the Tetragrammaton could be the spirit of the Jewish tribes at the time. And mystics of all religions have reported both personal and impersonal experiences. For personal experiences of the higher type, the relevant god appears as a great entity with a flow of its own and possibly advice or orders for you; that could mean you made contact with a larger collective consciousness, anything from the Earth all the way up to this entire universe. And pure impersonal, neutral experiences of pure spacious openness would mean you made contact with the UD-consciousness itself.
Again, to clarify, I like this and personally think something like this is true, but it's not a necessary part of the Tegmark view.
skaladom
I read your comment twice so I could minimize my misunderstanding. I know that does not guarantee success, but I made the effort.
"The basic postulates of [Tegmark's] cosmological view are: ... self-existing mathematical structures are *already* encoding every possible computation, including the one that simulates this very universe's physical laws, as well as every other possible computable universe. (the "self-existing" part is an axiom of the theory; this universe being computable is a common hypothesis of physics; the rest is just a basic theorem of computation)"
Declaring "self-existence" an axiom does not make it true or real; it just attempts to put it beyond question. Such axioms MUST be questioned because the elaborate conceptual structures built on them are meaningless if the axioms are untrue. Axioms serve legit purposes--we cannot know everything--but Tegmark's axiom seems too carefully built to be taken on faith.
There is ZERO evidence that Tegmark's postulate (above) describes anything real. It is indistinguishable from any other fantasy.
Nothing real can be grounded on fantasy ideas. Not a "Universal Dovetailer" (UD), not anthropics, nothing.
Tegmark's view cannot defeat the various arguments for God because the proponents of those arguments (theists) generally believe their deity is real and active in the world. Their arguments cannot be defeated by fantasies.
"But in the theistic view, God is self-sufficient and generative *by fiat*."
Huh. As is the UD.
Fortunately, theistic arguments can be defeated without resorting to unmoored views like Tegmark's.
You know, I don't really disagree with that. It's also my attitude most Tuesday afternoons. You'll find comments from me elsewhere in these threads making exactly this argument, that sometimes the sane thing is to say "I don't know" rather than choosing among a bunch of speculative options.
It's just a question of how far you're willing to speculate. If you want the safest attitude, you can stay up to date on the consensus view in physics (the Standard Model, General Relativity, the Big Bang with inflation, black holes, etc.) at whatever level of detail is appropriate for you, and call it a day. Anything beyond that, we just don't know. Why is there anything at all? We don't know. Why do we have consciousness? Same answer. Is math discovered or created? Same. I'm sure you're aware that there is a pretty big list of important questions that we don't have definitive answers for.
But if everyone took this view and stopped asking questions beyond the edge of knowledge, then knowledge would never advance! So people speculate. They make up possible theories that cannot be tested, or are not even very precise in their formulation, but sound neat and have some explanatory power. Some are paid to do that as scientists or philosophers or whatever, and they do it at a high level. People even make up meta-theories about how we come to know anything, and then argue about them. The frontier between proper research material and purely speculative ideas is not water-tight. Many, if not most successful and accepted theories started their life as speculative ideas with weak support. Over time, some ideas gain traction, others don't and are forgotten.
And many of us on forums like this enjoy watching from the edges, reading stuff to keep up to date with the latest ideas and how they are faring, and discussing such things here. You get to hone your understanding of complex ideas, and test it against live opponents, what's not to like?
So yeah, I 100% agree that you don't need to bring the full heaviness of Tegmark in, to say you don't believe in God. It's perfectly reasonable to just look at the idea that the world was made by a conscious, moral agent and say, "no, I don't buy that".
But I also find Tegmark's thing interesting in itself. Can't say at this point that I 100% believe in it, but it's the only speculative, vaguely scientific theory that goes all the way down to a self-supporting foundation, and that makes *some* sense to me. So I welcome the opportunity to spend an afternoon re-reading about it and getting up to date with some people's objections, and making up my mind as if it mattered. And if Scott brings Tegmark up to reply to BB's litany of arguments, I'm happy to chime in in support.
I am untroubled by speculation; as you say: it's necessary as well as useful.
But I am troubled by unacknowledged speculation. In these threads on Tegmark's ideas, the purely speculative nature of these ideas seems unacknowledged, ESPECIALLY since these threads began under the title "Tegmark's Mathematical Universe **Defeats Most Proofs Of God's Existence**". (**emphasis** added.)
Those proofs of God's existence are rarely regarded by their proponents as speculative; they generally believe their God is quite real. To offer pure speculation to "defeat" non-speculative claims is foolish. It cannot be done. So the very title implies that these comments are not speculation.
This is why I come to the belief that many on these threads don't think they are engaging in speculation. You might be, but it seems others are not.
So, speculate away! I only seek clarity: speculation should be expressly acknowledged, otherwise one can reasonably regard these comments as non-speculative. And defective thereby.
Given mathematical omniscience, seems to me a conceptually straightforward (albeit inconvenient for us mere mortals to check) way to prioritize simplicity would be to
1) work out the smallest finite state machine which produces an unambiguous description of the mathematical object in question as its output, and nothing else, then halts,
2) check how many states that machine contains, and finally
3) give the mathematical object in question probabilistic weight inversely proportional to the Busy Beaver number for machines of that size.
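Steps 1-3 can be sketched numerically, assuming the known maximum step counts S(n) for n-state, 2-symbol Turing machines (S(5) was only settled in 2024, and S(6) is unknown, so any real implementation stalls almost immediately, which is the "inconvenient for us mere mortals" part):

```python
# Known/proven maximum step counts S(n) for n-state, 2-symbol
# halting Turing machines, from the Busy Beaver literature.
BUSY_BEAVER_STEPS = {1: 1, 2: 6, 3: 21, 4: 107, 5: 47_176_870}

def simplicity_weights(sizes):
    """Step 3: weight each machine size inversely to its Busy
    Beaver number, then normalize into a probability distribution."""
    raw = {n: 1 / BUSY_BEAVER_STEPS[n] for n in sizes}
    total = sum(raw.values())
    return {n: w / total for n, w in raw.items()}

weights = simplicity_weights([1, 2, 3, 4, 5])
# Smaller (simpler) machines dominate the weight, and the drop-off
# is brutal: size 5 gets a nearly negligible share.
```

Note how fast the prior concentrates: because Busy Beaver numbers grow faster than any computable function, this scheme punishes complexity far more aggressively than an ordinary 2^-length prior would.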
This could conceivably be implemented by a jumble of individually primitive parts with no hierarchical, or even coherently causal, organization. Consider a FSM built like a ribosome and string of RNA, protein being the output tape. Within some higher-dimensional manifold those ribosome-analogs end up loosely sorted by relative size, perhaps by a stochastic mechanism similar to what gives beaches and other sediment beds locally consistent fineness of sand, silt, gravel, or clay, so they're mostly competing with same-size peers for input materials. Such materials would be fragments of solidified lower-dimensional spacetime, selected and installed based on compatibility with the relevant ribosome-analog's local rules, sorta like trying to solve a 4-dimensional jigsaw puzzle by grabbing blindly from the set of all possible pieces and checking them against a https://en.wikipedia.org/wiki/Go/no-go_gauge
Turing machines that don't halt are statistically irrelevant from an anthropic standpoint, since they only ever extrude one universe each. Busy Beavers, on the other hand, gobble up all the locally available copies of some specific type of event, while requiring relatively few end-cap pieces. If that event is, say, a proton decaying (or some zigzagging decay / reform cycle), nearby alternate universes with only stable or near-stable protons are mostly unaffected, but if it's something like "lone photon continues unperturbed," any cosmology within diffusion range involving lots of wide-open spaces between light sources might be hindered by the resultant shortage of that critical piconutrient.
Tegmark's mathematical universe feels pretty natural, and I have had thoughts along this line previously when shown some existence proofs for God, but it feels like when I try to formulate it carefully things go a bit wrong. If I have some finite alphabet to describe physical laws and weight the universes to favor lower complexity, and "who I am" is chosen between some set of possible conscious beings in the universes, it seems like with probability 1 I will be in a universe with infinitely many conscious beings to choose from, if any such universe is possible. (Additional complexity should be penalized by a finite multiple to probability given a finite alphabet, so that our probability measure sums to 1.) My possibly wrong understanding is that our universe appears to be finite (say in terms of how much energy was present in the big bang). So my observing this universe appears to be a probability 0 event. Does anyone have good ways to resolve this?
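The parenthetical about finite penalties can be made concrete. With an alphabet of size k there are k^n descriptions of length n, so penalizing each extra symbol by any factor below 1/k makes the total mass a convergent geometric series that can be renormalized into a proper probability measure (a toy calculation, not anyone's official proposal):

```python
def total_mass(alphabet_size, max_len, penalty):
    """Sum the weight penalty**n over all alphabet_size**n
    descriptions of each length n up to max_len."""
    return sum((alphabet_size * penalty) ** n for n in range(1, max_len + 1))

# With penalty = 1/(2k), each length n contributes (1/2)**n in total,
# so the overall mass converges to 1 as max_len grows.
k = 4
total = total_mass(k, max_len=50, penalty=1 / (2 * k))
```

The commenter's worry survives this fix: the measure over *universes* is fine, but conditioning on being a particular *observer* inside a universe with infinitely many of them is where the probability-0 trouble comes back in.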
Some possible attempts at resolution:
1) there are no possible universes with infinitely many observers and a finite description. This seems obviously wrong: if Conway's Game of Life is Turing complete, then it is likely possible to create an observer in it. Now just have the rule that the initial conditions are periodic on an infinite grid and you have infinite observers.
2) there could be other big bangs far away but in the "same universe" following the same physical laws, such that this universe is actually infinite and so is the number of observers I am drawing from in this universe. (For instance our universe could have periodic initial conditions with a period far longer than the lightcone we can see.) Seems possible, but if our application of Occam's razor gives different answers depending on whether there are "duplicate big bangs" far enough away that we can never observe them, this seems quite bad.
3) Something something multiverse interpretation of quantum mechanics gives infinite Everett branches in this universe with infinite observers total. This seems like it might be technically true in some sense, but I don't think it resolves the problem. Would need to think about what counts as a different observer to draw from. Seems like you should naturally be integrating over the Everett branches not summing in which case you are still 0 probability compared to a universe with infinitely many observers in each of a collection of Everett branches.
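Attempt 1's construction can be sketched directly: Life on a toroidal grid, where the wrap-around stands in for tiling the infinite plane with one repeating pattern, so every tile evolves identically forever. This shows the periodicity mechanism only; it is not a claim that observers actually arise in Life.

```python
def life_step(grid):
    """One step of Conway's Game of Life with periodic (toroidal)
    boundary conditions: indices wrap around via the modulo."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    return [[1 if live_neighbors(r, c) == 3
             or (live_neighbors(r, c) == 2 and grid[r][c]) else 0
             for c in range(cols)]
            for r in range(rows)]

# A blinker on a 5x5 torus: period 2, so the pattern recurs forever.
# Tiling the infinite plane with this 5x5 block gives infinitely many
# identical copies of whatever structure the block contains.
blinker = [[0] * 5 for _ in range(5)]
blinker[2][1] = blinker[2][2] = blinker[2][3] = 1
```

If the block contained an observer-implementing structure instead of a blinker, the same tiling argument would give infinitely many observers from a finite description, which is exactly the point of attempt 1.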
Tegmark's mathematical universe seems unnatural because there is no evidence that it is more than imaginary. Many imaginary worlds have been conceived (LOTR, Star Trek and Wars, Hogwarts, the Matrix, etc) for which there are no reasons to believe in their actual existence.
Our universe is likely to be finite, and the probability of any one of us existing was very small, and yet, here we are. It's not zero. Obviously.
If every possible universe exists, most will be infinite (adding a place to stop adds some complexity).
Almost all states of an infinite universe are at almost maximum entropy (there will still be an infinite number of low-entropy states; entropy will sometimes decrease a lot).
And there are infinitely more beings in an infinite universe than in a finite one.
So there is still a Boltzmann brain problem, but I agree it is a problem even without assuming every possible universe exists.
In fact, it is just like the MWI of quantum physics, but at a more metaphysical level:
If we assume Solomonoff induction is the correct way to do induction, then every problematic conclusion we get from treating a universe's prior weight as its probability of existence, we also get if we instead treat that weight as a "measure" of how much the universe exists.
Also, we can't really know whether we are Boltzmann brains or not. If we could, it would not be a problem: we would just see, a posteriori, that we are not, even if a priori it is much more probable.
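The Solomonoff prior being invoked here is uncomputable, but a toy version shows how description-length weighting concentrates mass on simple hypotheses, and why the "probability of existence" and "measure of existence" readings share the same arithmetic:

```python
def toy_prior(programs):
    """Weight each binary program by 2**-length. Over a prefix-free
    set (no program is a prefix of another), Kraft's inequality
    guarantees the weights sum to at most 1."""
    return {p: 2.0 ** -len(p) for p in programs}

# A complete prefix-free code, so here the weights sum to exactly 1.
prior = toy_prior(["0", "10", "110", "111"])
```

Read `prior[p]` either as the probability that universe p exists or as the measure of how much it exists: every downstream conclusion, Boltzmann brains included, comes out the same under both readings.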
>But I would also add that we should be used to dealing with infinity in this particular way - it’s what we do for hypotheses. There are an infinite number of hypotheses explaining any given observation. Why is there a pen on my desk right now? Could be because I put it there. Could be because the Devil put it there. Could be because it formed out of spontaneous vacuum fluctuations a moment ago. Could be there is no pen and I’m hallucinating because I took drugs and then took another anti-memory drug to forget about the first drugs. Luckily, this infinite number of hypotheses is manageable because most of the probability mass is naturally in the simplest ones (Occam’s Razor).
What is your evidence that people actually reason this way, and what would it mean in principle for there to be a correct hypothesis about this? People reason the way they do because they adopt social behaviors that others deem worthy enough to accomplish their goals. Metaphysics is not part of the ordinary life of reasoning, and I see no epistemic reason whatsoever to accept whatever you deem worthy of calling a hypothesis that is bound by Occam's Razor vs a maximally inflationist ontology that accomplishes the same goals - because metaphysics never played a role in our reasoning processes in the first place.
I don't understand why you write this
>What we really do when debating hypotheses isn’t wait to see which ones will be falsified, it’s comparing simplicity - Occam’s Razor. Which is more likely - that OJ killed his wife? Or that some other killer developed a deep hatred for OJ’s wife, faked OJ’s appearance, faked his DNA, then vanished into thin air? Does this depend on the police having some piece of evidence left in reserve which they haven’t told the theory-crafters, that they can bring out at a dramatic moment to “falsify” the latter theory? No. Perhaps OJ’s defense team formulated the second-killer theory so that none of the evidence presented at the trial could falsify it. Rejecting it requires us to determine that it deserves a complexity penalty relative to the simple theory that OJ was the killer and everything is straightforwardly as it seems.
when you just wrote this
>Every observation has an infinite number of possible explanatory hypotheses. Some of these could be falsifiable - but in practice you’re not going to falsify all infinity of them.
In practice, juries aren't instructed to deliberate with respect to Occam's Razor. Neither the lawyers nor the police operate with it in mind. Philosophers of science/epistemologists aren't brought in for expert testimony where they try to preach to the jury about theoretical virtues and falsifiability and simplicity and other competing models of explanation/scientific theories. The jurors simply act in accordance with the human practice of behaving in the various social contexts that encompass the hearing. While these post-hoc appeals to simplicity and falsifiability can influence someone's reasoning or aesthetic preferences in reasoning, it's not clear at all that they are involved in, let alone essential to, deliberation. (Note that there is no universally accepted account of simplicity or falsifiability, so even applying those principles to e.g. a jury deliberation is going to display huge variability in what the subjects permit to encompass those concepts and what they rule out. It's even possible [if not probable] that the subjects will stick to the judgments they issued earlier and operationalize the principles so as to defend their prior commitments without using the principles as tools to update their reasoning.)
Why even give credence to Tegmark here by arguing about it?
Math exists WITHIN the universe, the end. The fact that many things inside what WE perceive as the universe contain math is not what defines it. The fact that you watch a movie on a screen does not mean what you just saw on the screen is real, NOR DEFINES all the screen can show you.
Moreover, if the universe were merely mathematical, we could then code God, which would make Tegmark into some sort of cyberpunk Christian at best. So.much.nonsense.
I know this is an "old" post by this point, and that my gripe was probably debated in the comments already, but I think your notion of "falsifiable" is wrong, or at least not what people (among which are the relevant philosophers who coined and used the term) mean when they say falsifiable.
You give examples of things you call unfalsifiable in the sense that it's probably very hard to prove them (a conspirator doing the OJ killings, inferring things about an archeology question where all the pots and whatnot have been dug up already). That's not unfalsifiable; unfalsifiable means there isn't even a theoretical way, given whatever resources and compute you want, to answer the question (or more accurately, to differentiate IN ANY POSSIBLE WAY between the world in which the hypothesis is true and one in which it is false).
A world in which the devil magically put dinosaur bones in the ground is completely falsifiable from a world where there is no devil and dinos existed. In the former world there's a devil: you can presumably talk to him, and he can show you how he creates bones out of nothing and puts them in all sorts of places. In the latter there isn't. There are experiments you can hypothetically hold, or at least happenstances that can hypothetically occur, that prove to you that you're in world A and not B, or vice versa. "Unfalsifiable" would mean there aren't such experiments or happenstances, even theoretically.
If a theory is unfalsifiable, it means it's tautological (logically the two terms mean the same thing), and as far as I know you (Scott) don't like tautological arguments.
The idea of falsifiability was designed to combat badly defined questions that mean nothing. If a person claims animals "don't have a soul, but humans do", and you keep asking them what it means to have a soul to begin with, then if they don't have a well-defined model it can be shown that there's no hypothetical way to differentiate between a universe where humans have this thing called a soul and one where they don't. It's a sanity test you can run on a hypothesis to check that it's not a nonsensical tautology.
Souls are an easy example, but another could be highschool level debates on determinism vs free choice. In advanced philosophical debates there might be some sort of way to define what that even means, but when you're arguing about it in middle school or high school, nobody could answer the question "wait, what possible experiments can you hold that will tell you if you are in a universe with free choice or not? assume you have every resource you want. In fact, what could even happen that will prove to you one or the other?".
Realizing this has no answer (when you're the average nerdy school student) should then inform you the debate itself is ill defined, even though you have a strong intuition its a real and important question.
Falsifiability is, in fact, a must for any theory. It doesn't break down in every day life.
The commenters stating that this entire debate isn't falsifiable are also wrong, since you need to ask the question "falsifiable from what?". Is Tegmark's theory falsifiable from the existence of God as a magical dude you can talk to? Definitely, in a similar way to the devil example from above. Is it falsifiable from "the way we understand the universe now"? Maybe; I'm not sure there's a canonical baseline theory that addresses the concerns Tegmark's theory does.
“I admit that in rejecting this, I owe an explanation of how morality can be interesting/compelling/real-enough-to-keep-practicing without being objective; I might write this eventually but it will basically be a riff on the one in the Less Wrong sequences.”
Fwiw, I would be very interested in reading this!