[Original thread here: Tegmark’s Mathematical Universe Defeats Most Arguments For God’s Existence.]
1: Comments On Specific Technical Points
2: Comments From Bentham’s Bulldog’s Response
3: Comments On Philosophical Points, And Getting In Fights
Comments On Specific Technical Points
Nevin Climenhaga writes:
Tegmark's Mathematical Universe theory faces similar problems to more standard physical multiverse hypotheses as a response to the fine-tuning argument. First, it predicts that most observers would be "Boltzmann Brains".
It's not right that, as the post suggests, "a conscious observer inevitably finds themselves inside a mathematical object capable of hosting life." Although most mathematically possible universes have parameters that don't allow for complex life to evolve in the way we think it did in our universe, that doesn't mean there are no observers at all in those universes. Even in a universe at a state of thermal equilibrium (maximum entropy), there should be very infrequent chance fluctuations that lead to Boltzmann Brains: particles that have organized themselves into a functioning brain in a sea of chaos surrounding them. And while these fluctuations are very infrequent, since a fine-tuned universe is so unlikely, in the space of all possible universes, there are still vastly more Boltzmann Brain observers, most of whose experiences are a jumbled mess, than there are observers with highly ordered experiences as of a fine-tuned universe.
So if we are random observers in the space of all possible universes, it's vastly more likely that our experiences would be a jumbled mess than that they would be of the ordered kind we actually have. (How much more likely will depend on how we sort out the simplicity weighting, but I don't think any principled weighting will avoid this conclusion.)
On the plausible assumption that it's more likely that our experiences would be ordered if the universe was created by God, our experiences are then evidence for God over all possible universes existing.
Boltzmann brains are a problem even for a single universe - the classical “Boltzmann brain” paradox assumes the universe will have some amount of normal life in the “early years”, while stars and galaxies are still forming, and then only (spectacularly rare) Boltzmann brains in the later years after all matter has decayed. But since the early years are finite and the later years (potentially) infinite, there will be more Boltzmann brains than normal life.
I think of this as one of many paradoxes of infinity. But I don’t think there’s an additional paradox around fine-tuning or the multiverse. Among universes still in their “early” phase of having matter and stars, Boltzmann brain observers are less common than ordinary observers in universes that got the fine-tuning right.
I’m having trouble finding any “official” calculation of the exact likelihood of Boltzmann brains, but Wikipedia cites an unsourced calculation that our universe should get one every 10^500 years. Since our universe is about 10^10 years old, that means a 1 / 10^490 chance of a Boltzmann brain during our universe’s history so far.
Suppose there are about 10^10 observers per “real” inhabited universe-lifetime (this is probably a vast underestimate - it’s about the number of humans who have ever lived, so it’s ignoring aliens and future generations). This suggests you need 10^500 universe-lifetimes to create enough conscious observers (via Boltzmann brain) to equal one “real” universe.
But the most-cited estimate for the fine-tunedness of the universe is 10^229, so observers in fine-tuned universes should still be something like 10^271 times more common than Boltzmann brains.
Both of these numbers are extremely made up, but this is the calculation you’d have to do if you wanted to argue that Boltzmann brains were counterevidence to the multiverse. In the absence of someone doing this calculation convincingly and showing it comes out against the multiverse, I don’t think the counterargument really stands.
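For anyone who wants the arithmetic laid out, here is a minimal sketch of that back-of-the-envelope comparison, working in powers of ten (all the inputs are the admittedly made-up figures from above, not real physics):

```python
# Back-of-the-envelope comparison in powers of ten (log10), since numbers
# like 10^-500 underflow ordinary floats. All inputs are the made-up
# order-of-magnitude figures quoted above.
bb_rate_exp = -500       # one Boltzmann brain per 10^500 years per universe (unsourced figure)
lifetime_exp = 10        # our universe is ~10^10 years old
observers_exp = 10       # ~10^10 observers per "real" inhabited universe-lifetime
fine_tuning_exp = 229    # most-cited fine-tuning estimate: 1 in 10^229

bb_per_lifetime_exp = bb_rate_exp + lifetime_exp            # -490
lifetimes_needed_exp = observers_exp - bb_per_lifetime_exp  # 500
ratio_exp = lifetimes_needed_exp - fine_tuning_exp          # 271

print(f"Boltzmann brains per universe-lifetime: ~10^{bb_per_lifetime_exp}")
print(f"Universe-lifetimes needed to match one real universe: ~10^{lifetimes_needed_exp}")
print(f"Fine-tuned observers outnumber Boltzmann brains by: ~10^{ratio_exp}")
```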
I think people think it’s devastating because they’re confusing it with an older argument, from back before Big Bang theory, when people thought maybe the entire universe arose as a Boltzmann fluctuation. Here people objected that it’s more likely for a single brain to arise as a fluctuation than for the whole universe to do so. But Tegmark’s theory doesn’t claim that universes arise as Boltzmann fluctuations, so it’s possible for universes to be more likely than Boltzmann brains.
Another commenter, Gabriel, links a paper questioning whether Boltzmann brains are possible - though remember that if we’re positing a multiverse then the borders of “possible” have to expand beyond our current laws of physics.
Xpym writes:
I think you're conflating two things - mathematical objects are logically necessary in the abstract game we play within our minds, where initial axioms and rules of inference are accepted by fiat. But MUH posits that math "exists" independently of our minds, which is far from uncontroversial, let alone logically necessary.
I agree this is a strong attack on MUH, but I also think you can sort of just . . . sidestep it?
Tolkien has a prologue where all of the archangels sing of the universe, and then God decides He likes it and gives it the Secret Fire that transforms it from mere possibility into existence.
I think of MUH as claiming that there is no Secret Fire, no difference between possibility and existence. We live in a possible world. How come we have real conscious experiences? Because the schematic of Possible World #13348 says that the beings in it have real conscious experiences. Just as unicorns don’t exist (but we can say with confidence that they have one horn), so humans don’t have any special existence of the sort that requires Secret Fires (but we can say with confidence that they are conscious).
Isn’t this crazy? I think of the Mandelbrot set as a useful intuition pump. A refresher: the Mandelbrot set comes from an extremely simple rule - for each point c in the complex plane, iterate z → z^2 + c starting from zero and check whether the result stays bounded or diverges. Make some artistic design decisions, and the graph looks like this:
Where did all of that come from? It was . . . inherent in the concept of z^2 + c, I guess. Somehow lurking latent in the void. Does the Mandelbrot set “exist” in a Platonic way? Did Iluvatar give it the Secret Fire? Can you run into it on your way to the grocery store? None of these seem like very meaningful questions to me, I don’t know.
If some weird four-dimensional Mandelbrot set somehow encoded a working brain in it somewhere, is there something that it would be like to be that brain, looking out from its perch on one of the spirals and gazing into the blue depths beyond?
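If you want to see just how little machinery the intuition pump requires, here is a minimal escape-time sketch (a standard way of rendering the set; the resolution and iteration count are arbitrary toy choices):

```python
# Minimal ASCII rendering of the Mandelbrot set: for each point c in the
# complex plane, iterate z -> z^2 + c from zero and check whether it escapes.
MAX_ITER = 50

def escapes(c: complex) -> bool:
    z = 0
    for _ in range(MAX_ITER):
        z = z * z + c
        if abs(z) > 2:        # once |z| > 2 the orbit is guaranteed to diverge
            return True
    return False

for im in range(-12, 13):
    row = ""
    for re in range(-40, 21):
        c = complex(re / 20, im / 12)
        row += " " if escapes(c) else "#"   # "#" marks points that appear to be in the set
    print(row)
```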
Lucian Lavoie writes:
I think the biggest flaw with Tegmark's argument is that consciousness just doesn't exist.
That's not a fatal flaw though; it's easy enough to just say any given object must exist within a universe complex enough to allow for its generation and subsistence. No experience necessary, and including it only muddles the conversation.
Lots of people had opinions about consciousness here, but I used it only as a shorthand. I think you can reframe this theory without talking about consciousness at all. Imagine a world where some bizarre process produced intelligent robots without any consciousness. These robots might have been imbued by the random process that created them with some specific goal (like creating even better robots), and in service of that goal, they might exchange messages with each other to communicate their insights about the universe (without “understanding” these messages in a deep way, but they could still integrate them into their future plans). These messages might include things like:
“It seems like our universe is sufficiently fine-tuned that robots can come to exist in it.”
“We find ourselves on planet Robonica VII, rather than as Boltzmann brains floating in the void. It seems like it’s not wildly impossibly uncommon for beings to exist in this way.”
“Consciousness” is a useful shorthand for discussing these insights so that we don’t have to talk about planets full of robots every time we want to have a philosophical discussion, but I don’t think anything in this discussion hinges on it.
dsteffee writes:
Why can't you make a random draw from an infinite set?
I messed up my terminology here, although luckily most people figured out what I meant. The correct terminology (thanks /r/slatestarcodex commenters) is that you can’t make a uniform random draw from a set of infinite measure.
Imagine trying to pick a random number between one and infinity. If you pick any particular number - let’s say 408,170,037,993,105,667,148,717 - then it will be shockingly low - approximately 100% of all possible numbers are higher than it. It would be much crazier than someone trying to pick a number from one to one billion and choosing “one”. Since this will happen no matter what number you pick, the concept itself must be ill-defined. Reddit commenter elliotglazer has an even cuter version of this paradox:
The contradiction can be made more apparent with the "two draws" paradox. Suppose one could draw a positive integer uniformly at random, and did so twice. What's the probability the second is greater? No matter what the first draw is, you will then have 100% confidence the second is greater, so by conservation of expected evidence, you should already believe with 100% confidence the second is greater. Of course, I could tell you the second draw first to argue that with 100% probability, the first is greater, contradiction.
When I said you could do this with some sort of simplicity-weighted measure, I meant something like how 1/2 + 1/4 + 1/8 + … = 1. Here, even though you are adding an infinite number of terms, the sum is a finite number. So if you can put universes in some order, let’s say from simplest to most complex, you could assign the first universe measure 1/2, the second universe measure 1/4, the third universe measure 1/8, and so on, and the sum of their measure would be 1. Then you just draw a random number between 0 and 1 and see which universe it corresponds to (eg if you got 0.641, then since this is between 1/2 and 1/2+1/4, it corresponds to universe #2).
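Here is a minimal sketch of that procedure (the 1/2, 1/4, 1/8… weights and the universe numbering are just the toy values from the paragraph above):

```python
import random

def sample_universe() -> int:
    """Draw a universe index using the simplicity weighting 1/2, 1/4, 1/8, ...

    Universe k (1-indexed, ordered from simplest to most complex) gets
    measure 1/2^k; the measures sum to 1, so a uniform draw from [0, 1)
    picks exactly one universe.
    """
    u = random.random()          # uniform draw from [0, 1)
    cumulative = 0.0
    k = 1
    while True:
        cumulative += 0.5 ** k   # add universe k's measure
        if u < cumulative:
            return k
        k += 1

# e.g. a draw of 0.641 falls between 1/2 and 1/2 + 1/4, so it returns universe 2
counts = {}
for _ in range(10_000):
    k = sample_universe()
    counts[k] = counts.get(k, 0) + 1
print(dict(sorted(counts.items())))   # roughly 5000, 2500, 1250, ...
```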
EigenCat writes:
But there are objective measures of simplicity! They come from information theory. It's the information content of the rules and initial conditions in bits, or else their Kolmogorov complexity (how many bits you need for a program that generates these rules and initial conditions). Of course there's still the question of which *exact* measure we use, but that's very different from saying we don't have an objective simplicity metric at all. (And yes, God has much more complexity based on this metric, because you'd need to fully specify God's being - basically fully specify a mind, in sufficient detail to be able to predict how that mind would react to *any* situation, and that's way more complex than a few rules on a chalkboard.) Anyway, the bigger question for me is WHY does it need to be weighted specifically by simplicity (of all possible criteria) in the first place : )
I am really out of my depth talking about information theory, but my impression was that this is a useful hack, but not perfectly objectively true, because there is no neutral programming language, no neutral compiler, and no neutral architecture.
Kolmogorov complexity is sometimes regarded as effectively language-independent, because there’s a bound on how much the choice of language can matter: switching languages changes the complexity by at most a fixed additive constant. But even this practically small constant is philosophically confusing: since the universe actually has to implement the solution we come up with, there can’t be any ambiguity. But how can the cosmos make an objective cosmic choice among programming languages? This is weird enough that it takes away from the otherwise-impressive elegance of the theory.
But also, you can design a perverse programming language where complex concepts are simple, and simple concepts are complex. You can design a compression scheme where the entirety of the Harry Potter universe is represented by the bit ‘1’. Now the Harry Potter universe is the simplest thing in existence and we should expect most observers to live there. This is obviously a ridiculous thing to do, but why? Maybe because now the compiler is complex and unnatural, so we should penalize the complexity of language+compiler scheme? But without knowing what the system architecture is, it’s hard to talk about the size of the compiler - and in this case, we’re trying to pretend that we’re running this whole thing on the void itself, and there is no system architecture!
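To make the perverse-language point concrete, here is a toy sketch; the hard-coded string and the decoder names are purely hypothetical, and the point is only that “description length” depends on which decoder you fix in advance:

```python
# Toy illustration: description length depends on the decoder you fix in advance.
# (The "perverse" decoder smuggles all the complexity into itself.)

HARRY_POTTER = "<imagine the complete specification of the Harry Potter universe here>"

def decode_generic(description: str) -> str:
    """A plain decoder: the description just is the text itself."""
    return description

def decode_perverse(description: str) -> str:
    """A perverse decoder that hard-codes the big string under the symbol '1'."""
    return HARRY_POTTER if description == "1" else description

# Under the perverse decoder, the "complexity" of the Harry Potter universe is one symbol...
assert decode_perverse("1") == HARRY_POTTER
# ...but only because the decoder itself is now enormous, which is why any honest
# accounting has to charge for the language-plus-decoder, not just the description.
print(len("1"), "vs", len(HARRY_POTTER))
```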
All of this makes me think that although Kolmogorov complexity gestures at a solution, and makes it seem like there should be a solution, nobody has exactly solved this one yet.
kzhou7 writes:
Though nobody can disprove this hypothesis, there's a reason a lot of physicists dislike it: if it were actually seriously believed, at any previous point in the history of physics, it would have stopped scientific progress.
1650: why does the Earth orbit the Sun the way it does? Of course, because it's a mathematically consistent possibility, ellipses are nice, and we'd be dead if it didn't! What more is there to say? But actually it was Newton's law of gravity.
1875: why has the Sun been able to burn for billions of years, when gravitational energy would only power it for millions? It must be because otherwise, we wouldn't have had time to evolve! But actually it was nuclear energy.
1930: why is the neutron so similar in mass to the proton? Obviously, it is because otherwise complex nuclei wouldn't be stable, so you couldn't have chemistry and we wouldn't exist. But actually it was because they're both made of three light up/down quarks.
1970: why don't protons decay? You dummy, it's because otherwise the Earth would have disintegrated by now! But actually it was because baryon number conservation is enforced by the structure of the Standard Model.
From the physicist's perspective, both "God did it" and "anthropics did it" communicate the same thing: that investigating why the universe is the way it is, is a waste of time.
I think this is false. Tegmark's version of the anthropic principle says things should be as simple as possible, preferably simple enough to fit on a chalkboard. If you tried to put "Earth orbits the Sun in an ellipse" or something like that on a chalkboard, you'd run into trouble defining "Earth" and "Sun", and if you tried to do it rigorously you would end up with something like gravity. Or even if you didn't, explaining orbits and tides with the same thing would be simpler than using a separate equation for each of them.
The anthropic principle weakly suggests that somewhere there might be things that can't be fully explained in terms of other things, but the alternative (everything can be explained in an infinite regress, so that for each level there's always a lower one) is absurd.
Comments From Bentham’s Bulldog’s Response
Bentham’s Bulldog wrote a response, Contra Scott Alexander On Whether Tegmark’s View Defeats Most Theistic Arguments.
He starts by listing some proofs of God that MUH doesn’t even pretend to counter. I agree I was sloppy in saying MUH defeated “most” proofs of God’s existence, since proofs (like universes) are hard to enumerate and weigh precisely. I think it defeats a majority of the mentions of proofs that I hear (that is, each proof weighed by the amount it comes up in regular discourse), but that could be a function of the discourse more than of the state of apologetics.
Bulldog mentions consciousness, psychophysical harmony, and moral knowledge as proofs he especially likes which MUH doesn’t even begin to respond to. I agree consciousness is the primary challenge to any materialist conception of the universe and that I don’t understand it. I find the moral knowledge argument ridiculous, because it posits that morality must have some objective existence beyond the evolutionary history of why humans believe in it, then acts flabbergasted that the version that evolved in humans so closely matches the objectively-existing one. I admit that in rejecting this, I owe an explanation of how morality can be interesting/compelling/real-enough-to-keep-practicing without being objective; I might write this eventually but it will basically be a riff on the one in the Less Wrong sequences.
Psychophysical harmony is in the in-between zone where it’s interesting. The paper Bulldog links uses pain as its primary example - isn’t it convenient that pain both is bad (ie signals bodily damage, and evolutionarily represents things we’re supposed to try to avoid) and also feels bad? While agreeing that qualia are mysterious, I think it’s helpful to try to imagine the incoherence of any other option. Imagine that pain was negatively reinforcing, but felt good. Someone asks “Why did you move your hand away from that fire?” and you have to say something like “I don’t know! Having my hand in that fire felt great, it was the best time of my life, but for some reason I can’t bring myself to do this incredibly fun thing anymore.” And it wouldn’t just be one hand in one fire one time - every single thing you did, forever, would be the exact opposite of what you wanted to do.
It sounds prima facie reasonable to say qualia aren’t necessarily correlated with the material universe. But when you think about this more clearly, it requires a total breakdown of any relationship between the experiencing self, the verbally reporting self, and the decision-making self. This would be an absurd way for an organism to evolve (Robert Trivers’ work on self-deception helps formalize this, but shouldn’t be necessary for it to be obvious). Once you put it like this, I think it makes sense that whatever qualia are, evolution naturally had to connect the “negative reinforcement” wire to the “unpleasant qualia” button.
(why think about this in terms of evolutionarily-controlled wires at all? Consider people with genetic pain asymbolia. “What, did the hand then of the Potter shake?”)
But aside from these, he also had some objections to Tegmark in particular:
One thing that Scott did not mention but could have is that the Tegmark view explains the anthropic data. On the Tegmark view, the number of people that exist would be the biggest number of people there could be! That gives you enough people to explain the fact that you exist (if, as I suggest, you’re likelier to exist if more people exist, and should thus think the number that exists is the most that it could be, the Tegmark view accommodates that). But I think the Tegmark view has various problems and cannot explain most of the evidence favoring theism.
The biggest problem for the view is that it collapses induction (a while ago Scott and I had a lengthy back and forth about this). On the Tegmark view, there are unsetly many people with every property: because there are infinite mathematically describable worlds that are like ours up until one second from now but that then turn to jello or a pile of beans. But there’s no reason to think we’re not in such a world. There are infinite in each case.
Now, the reply given by proponents of the Tegmark view is that the simpler worlds exist in great numbers (I’m about to plagiarize myself FYI—I’m funky like that!). The problem is that it doesn’t make much sense to talk about greater numbers of worlds unless one is a bigger cardinality than the other. The way infinities are measured is by their cardinality—that’s determined by whether you could put the members of the infinite set in one to one correspondence. If you have five apples, and I have five bananas, they’re sets of the same size, because you can pair them 1:1.
Often, infinities can be the same cardinality even if one seems bigger than the other. For instance, the set of all prime numbers is equal in size to the set of all natural numbers, because you can pair them one to one: you can pair 1 with the first prime, 2 with the second prime, 3 with the third prime, and so on.
Crucially, even if deceived people are rarer and non-deceived people are common, the number (measured by cardinality) of deceived people will be the same as the number of non-deceived people. To see this, suppose that there are infinite galaxies. Each galaxy has 10 billion people who are not deceived and just one person who is deceived. Intuitively you’d think that there are more non-deceived people than deceived people.
This is wrong! There are the same number. Suppose the galaxies are arranged from left to right, with a leftmost galaxy but no rightmost galaxy. Imagine having the deceived people from the first 100 trillion galaxies move to the first galaxy (which contains 10 billion non-deceived people). Next, imagine having the deceived people from the next 100 trillion galaxies move to the second galaxy. Assuming you keep doing this for all the people, just by moving the people around, you can make each galaxy have 100 trillion people who are deceived and only 10 billion who aren’t deceived. So long as the number of deceived people is not a function of where the people are located, it’s impossible to hold that there are more deceived people than non-deceived people based on the fact that deceived people are rarer than non-deceived people. How rare deceived people are can be changed just by moving people around.
That is, suppose that there are one billion real people for every Boltzmann brain. If there are infinite universes, then the ratio becomes one-billion-times-infinity to infinity. But one billion times infinity is just infinity. So the ratio is one-to-one. So you should always be pretty suspicious that you’re a Boltzmann brain. The only way you can ever be pretty sure you’re not a Boltzmann brain is if nobody is a Boltzmann brain, presumably because God would not permit such an abomination to exist.
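(The rearrangement move in that argument is easy to see in miniature. Here is a toy sketch with nine non-deceived people per galaxy instead of ten billion: the same infinite population, enumerated in two different orders, shows different deceived-to-non-deceived frequencies in every finite prefix.)

```python
from itertools import count, islice

# Toy model: galaxy g contains one deceived person and several non-deceived
# people. Both enumerations below list the same infinite population; only the
# order differs, yet the apparent frequency of deceived people in any finite
# prefix depends on that order.

NOT_DECEIVED_PER_GALAXY = 9  # stand-in for 10 billion, to keep the demo small

def natural_order():
    """Galaxy by galaxy: the lone deceived person, then that galaxy's non-deceived people."""
    for g in count():
        yield (g, 0, True)                                  # the deceived person
        for i in range(NOT_DECEIVED_PER_GALAXY):
            yield (g, i + 1, False)                         # the non-deceived people

def front_loaded_order():
    """Alternate one deceived person with one non-deceived person, forever."""
    deceived = ((g, 0, True) for g in count())
    honest = ((g, i + 1, False) for g in count() for i in range(NOT_DECEIVED_PER_GALAXY))
    while True:
        yield next(deceived)
        yield next(honest)

def deceived_fraction(order, n=100_000):
    prefix = list(islice(order, n))
    return sum(1 for *_, d in prefix if d) / n

print(deceived_fraction(natural_order()))       # ~0.1 (1 in 10)
print(deceived_fraction(front_loaded_order()))  # ~0.5 (1 in 2)
```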
I’ve talked about this with Bulldog before, and we never quite seem to connect, and I worry I’m missing something because this is much more his area of expertise than mine - but I’ll give my argument again here and we can see what happens.
Consider various superlatives like “world’s tallest person”, “world’s ugliest person”, “world’s richest person”, etc. In fact, consider ten categories like these.
If there are a finite number of worlds, and the average world has ten billion people, then your chance of being the world’s richest person is one-in-ten-billion.
But if there are an infinite number of worlds, then your chance is either undefined or one-in-two, as per the argument above.
But we know that it’s one-in-ten-billion and not one-in-two, because in fact you possess zero of the ten superlatives we mentioned earlier, and that would be a 1-in-1000 coincidence if you had a 50-50 chance of having each. So it seems like the universe must be finite rather than infinite in this particular way.
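For concreteness, a quick sketch of that arithmetic (ten superlatives and ten billion people per world are just the stand-in numbers from above):

```python
# The 1-in-1000 coincidence, spelled out: if each of the ten superlatives were
# a 50-50 coin flip (as the naive infinite-worlds reasoning suggests), the
# chance of holding none of them is small; under the finite picture it's ~1.

superlatives = 10
people_per_world = 10_000_000_000  # ten billion

p_none_if_fifty_fifty = 0.5 ** superlatives                     # ~1/1024
p_none_if_finite = (1 - 1 / people_per_world) ** superlatives   # ~0.999999999

print(f"P(no superlatives | 50-50 reading):  {p_none_if_fifty_fifty:.4f}")
print(f"P(no superlatives | finite worlds):  {p_none_if_finite:.10f}")
```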
But both Bulldog and I think infinite universes make more sense than finite ones. So how can this be?
We saw the answer above: there must be some non-uniform way to put a measure on the set of universes, equivalent to (for example) 1/2 + 1/4 + 1/8 + … Now there’s a finite total amount of measure and you can do probability with it again.
This isn’t just necessary for Tegmark’s theory. Any theory that posits an infinite number of universes, or an infinite number of observers, needs to do something like this, or else we get paradoxical results like that you should expect 50-50 chance of being the tallest person in the world.
So when Bentham says:
The simplest version of the Tegmark view would hold simply that all mathematical structures exist. But this implies that you’d probably be in a complex universe, because there are more of them than simple universes. To get around this, Tegmark has to add that the simpler universes exist in greater numbers. I’ll explain why this doesn’t work in section 3, but it’s clearly an epicycle! It’s an extra ad hoc assumption that adds to the cost of the theory.
… I disagree! Not only is it not an epicycle artificially added to the Tegmark theory, but Bulldog’s own theory of infinite universes falls apart if he refuses to do this! The fact that everything with Tegmark works out beautifully as soon as you do this thing (which you’re already required to do for other reasons) is a point in its favor.
But I would also add that we should be used to dealing with infinity in this particular way - it’s what we do for hypotheses. There are an infinite number of hypotheses explaining any given observation. Why is there a pen on my desk right now? Could be because I put it there. Could be because the Devil put it there. Could be because it formed out of spontaneous vacuum fluctuations a moment ago. Could be there is no pen and I’m hallucinating because I took drugs and then took another anti-memory drug to forget about the first drugs. Luckily, this infinite number of hypotheses is manageable because most of the probability mass is naturally in the simplest ones (Occam’s Razor). When we do the same thing to the infinity of possible universes, we should think of it as calling upon an old friend, rather than as some exotic last-ditch solution.
Finally, I admit an aesthetic revulsion to the particular way Bentham is using “God” - which is something like “let’s imagine a guy with magic that can do anything, and who really hates loose ends in philosophy, so if we encounter a loose end, we can just assume He solved it, so now there are no loose ends, yay!” It’s bad enough when every open problem goes from an opportunity to match wits against the complexity of the universe, to just another proof of this guy’s existence and greatness. But it’s even worse when you start hallucinating loose ends that don’t really exist so that you can bring Him in to solve even more things (eg psychophysical harmony, moral knowledge). If there is a God, I would like to think He has handled things more elegantly than this, so that we only need to bring Him in to solve one or two humongous problems, rather than whining for His help every time there’s a new paradox on a shelf too high to reach unassisted.
Comments On Philosophical Points, And Getting In Fights
Adrian writes:
I don't get it. What's the point of this? Is any of that even remotely falsifiable? Does this hypothesis make any predictions that can ever be observed? If not, it's not a theory, merely intellectual navel-gazing, and it cannot tell us anything about the nature of our reality.
Joshua Greene writes:
Are there any falsifiable predictions from this approach? I'm not talking about meta-level ("no theist will be convinced.")
People need to stop using Popper as a crutch, and genuinely think about how knowledge works.
Falsifiability doesn’t just break down in weird situations outside the observable universe. It breaks down in every real world problem! It’s true that “there’s no such thing as dinosaurs, the Devil just planted fake fossils” isn’t falsifiable. But “dinosaurs really existed, it wasn’t just the Devil planting fake fossils” is exactly equally unfalsifiable. It’s a double-edged sword! The reason you believe in dinosaurs and not devils is because you have lots of great tools other than falsifiability, and in fact you never really use the falsifiability tool at all. I write a bunch more about this here and here.
Every observation has an infinite number of possible explanatory hypotheses. Some of these could be falsifiable - but in practice you’re not going to falsify all infinity of them. Others aren’t falsifiable even in principle - for example, you may be dealing with a historical event where archaeologists have already dug up all the relevant pottery shards and all other evidence has been lost to time.
What we really do when debating hypotheses isn’t wait to see which ones will be falsified, it’s comparing simplicity - Occam’s Razor. Which is more likely - that OJ killed his wife? Or that some other killer developed a deep hatred for OJ’s wife, faked OJ’s appearance, faked his DNA, then vanished into thin air? Does this depend on the police having some piece of evidence left in reserve which they haven’t told the theory-crafters, that they can bring out at a dramatic moment to “falsify” the latter theory? No. Perhaps OJ’s defense team formulated the second-killer theory so that none of the evidence presented at the trial could falsify it. Rejecting it requires us to determine that it deserves a complexity penalty relative to the simple theory that OJ was the killer and everything is straightforwardly as it seems.
Falsifiability can sometimes be a useful hack for cutting through debates about simplicity. If the police had held some evidence in reserve, then asking OJ’s defense team to predict it using the second-killer theory might strain their resources (or it might not - see the garage dragon parable). But when we can’t use the hack, we can just hold the debate normally.
Tup99 writes:
There's one very important point of clarification that is missing, which has thrown me off from understanding the point of this post.
The title suggests that Tegmark has defeated most proofs of God. But AFAICT, it's actually more like: "If Tegmark's hypothesis is true, then it defeats most proofs of God." And doesn't mention any evidence for this hypothesis (that existing in possibility-space is enough for a being to in fact be experiencing consciousness) being true.
You can defeat a proof with a possibility claim. For example, if you claim to have proven that all triangles are greeblic, and I point out that you only demonstrated this for equilateral triangles, but forgot to demonstrate it for isosceles triangles, then your proof fails. I don’t have to prove that isosceles triangles aren’t greeblic for your proof to stop working.
People bring up the fine-tuning argument as a proof of God. If I show that other things can create fine-tuning, then God is no longer proven. This doesn’t mean God definitely doesn’t exist. It just means that we’re still uncertain.
(and your exact probability should depend on which solution to the fine-tuning problem etc you find more plausible)
Ross Douthat writes:
Okay, but earlier this month, Ross published an article, My Favorite Argument For The Existence Of God, where he talked about how the multiverse objection to the fine-tuning argument failed because it didn’t explain why physical law was so comprehensible. But Tegmark’s mathematical universe hypothesis does explain why physical law is comprehensible. In the original post, I described this as:
Argument from comprehensibility: why is the universe so simple that we can understand it? Because in order for the set of all mathematical objects to be well-defined, we need a prior that favors simpler ones; therefore, the average conscious being exists in a universe close to the simplest one possible that can host conscious beings.
I don’t understand how someone writes an article saying that the multiverse can’t answer the comprehensibility objection, reads someone else explain how a version of the multiverse answers the comprehensibility objection, and then gets salty because they’ve already heard of the multiverse theory. If you already understood Tegmark’s theory, why did you write an article saying you didn’t know of good answers to the question which it was designed to answer?
I’m not even claiming to be novel! I don’t even know if Max Tegmark claims to be novel! Mock us all you want for being boring and stale and unfashionable, just actually respond to our boring/stale/unfashionable points instead of continuing to act like they don’t exist!
Shankar Sivarajan writes:
Yeah, this is basically Plato.
Michael L Roe writes:
2010? I’ve recently been asking DeepSeek about René Descartes and Gottfried Leibniz. Someone could have said most of that in 1710… “Why is there something rather than nothing?” is straight out of Leibniz’s Principles of Nature and Grace, which we can now read as being about Artificial Intelligence.
Oliver writes:
Should we refuse to eat Beans?
I find this kind of thing annoying too, sorry. “Oh, this new idea is basically just reinventing Plato. And also Descartes and Leibniz. And Pythagoras. All of whom were just reinventing each other, or whatever.”
If anything to do with the Ideal reminds you of Plato, and anything to do with the Real reminds you of Aristotle, then you can dismiss any idea as either “just reinventing Plato” or “just reinventing Aristotle”. This is the intellectual equivalent of those journalists who would write articles on Uber saying “These Silicon Valley geniuses don’t realize that they’ve just reinvented the taxi!”
Kenny Easwaran writes:
It’s a lot like David Lewis’s modal realism (from his 1986 book On the Plurality of Worlds) and has something in common with Mark Balaguer’s plenitudinous platonism (from his 1998 book Platonism and Anti-Platonism in Mathematics) but it’s a bit different from either. I suspect some of the medievals and ancients had some related idea. But until the development of 20th century logic there wasn’t a clear conception of what “every consistent mathematical theory” means, and it would likely take an analytic philosopher to endorse such a blunt view that this is everything that exists.
Whatever, I give this one a pass, at least he picked someone other than Plato and Aristotle.
Rob writes:
Love your blog, love the content, only superficially considered the arguments, but I agree with commenters saying there are pretty odd assertions in here.
My goodness! Odd assertions? In an ACX post? What a disaster! Somebody must go tell the Queen!