407 Comments
User's avatar

Comment deleted (Jun 3, 2023)

Expand full comment
B Civil's avatar

As long as we don’t give it the ability to grasp things we’re alright. Other than that we might be able to solve the conundrum of a rock that has consciousness.

Expand full comment
Coagulopath's avatar

When I read Winston Churchill, sometimes I get the sense he was a LessWronger born a few generations too early. Here he is on lab-grown meat: https://www.smithsonianmag.com/smart-news/winston-churchill-imagined-lab-grown-hamburger-180967349/

>A network of 86 billion neurons and 100 trillion synapses seems within reach of current hardware trends.

My understanding is that computers can't even simulate the behavior of a single neuron, because we lack a complete model of what a neuron is doing. The full biochemical details of synaptic growth/shrinking and axon/dendrite pruning (etc., etc.) are simply not understood in great enough detail.

Maybe we could create a "fake" neuron that behaves the same way, but it would probably have different architecture.

Expand full comment
Ch Hi's avatar

The question, however, is "How much of a neuron does one need to emulate to get the desired effects?". Clearly this won't include specifics of various molecule constructions. It *MAY*, however, include SOMETHING about how they interact. Or it may not. A lot of the lower-level activity is, or seems to be, sealed off from the higher levels.

Expand full comment
o11o1's avatar

Agreed, I suspect that as far as running a mind goes, we'd mostly want to track how much signal power one neuron passes to its connections, possibly with a signal delay term.

Arranged in a many-to-many table, you'd end up with something that looks like giant matrices of assigned interaction weights. Sorta like the way layers of weights work in some types of current neural net AI, though with less focus on discrete layers and with some number of connection cycles included.
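A minimal sketch of that idea (sizes and weights are made up for illustration): a single recurrent weight matrix, applied step after step, stands in for the many-to-many table.

```python
import numpy as np

# Toy many-to-many "interaction weights": neuron i's next signal is a weighted
# sum of every neuron's previous signal, passed through a squashing function.
# No discrete layers; connection cycles come free from reapplying the matrix.
n = 1_000
rng = np.random.default_rng(0)
W = rng.standard_normal((n, n)) / np.sqrt(n)   # neuron-to-neuron weights
signal = rng.standard_normal(n)                # current "signal power" per neuron

for _ in range(10):                            # ten propagation steps
    signal = np.tanh(W @ signal)
```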

Expand full comment
Ch Hi's avatar

I think that may be OVER simplified. But perhaps not. However, it's almost certain that we don't need to model everything. If we were actually trying to model the brain we'd definitely need to include something to model the gradients of various enzymes. Of course, we aren't even trying to model at that level.

Expand full comment
Artem Zhuravsky's avatar

Can a computer simulate an atom? We don't really have full understanding of an atom.

Expand full comment
Soy Lecithin's avatar

Pretty well actually. But more importantly, we don't need to simulate atoms to simulate neurons to the precision relevant for their function. (And we might not even need to simulate neurons to still meaningfully simulate the thinking a brain does.)

Expand full comment
Brian Chau's avatar

This review is pretty good. I decided against reading this book after the authors embarrassed themselves on Richard Hanania's podcast (with no particularly tough questions):

https://www.cspicenter.com/p/why-the-singularity-might-never-come#details

I really doubt their theory holds any water after listening to that.

Expand full comment
Michael's avatar

Any chance you could sum up what was embarrassing for those of us that don't have time to watch the full podcast at the moment?

Expand full comment
Philo Vivero's avatar

I just pulled it up, fast-forwarded a ways in, and started listening.

The white-haired man in the upper-left was saying something like: AIs we've made have very narrow intelligence and can beat us at eg Go and Chess. This is narrow intelligence. Humans have general intelligence, which means we can add dimensions to our thought processes.

Then he points out that language models can write essays and poetry and plays, then says (paraphrasing): "Can they write essays better than people? Probably not. Can they write plays better than [name of I presume great playwright]? Probably not. So playwrights and essayists are safe from AI replacement."

Then he went on: "Can they do science? Terribly. They make up references. They make up names. These large language models..."

I stopped watching at this point, satisfied that at least on this topic they were going to ramble on into irrelevance. Obviously this is a very short and possibly context-missing bit.

Expand full comment
Michael's avatar

Thanks!

Expand full comment
Hoopdawg's avatar

Prima facie, none of this is incorrect.

Expand full comment
Michael Druggan's avatar

It's not explicitly incorrect, but it's a bad argument. Saying "current AI can't do X, therefore AI will never do X" is lame.

Expand full comment
Eremolalos's avatar

That's true. If you look at what GPT 2 vs. 3 vs. 4 can do, the difference is astounding, so talking about the limitations of the products of the present model is silly. On the other hand, the GPT product I'm probably best qualified to judge is language, and it does seem to me that as GPT becomes more articulate, there's a certain beige, formulaic quality to its word choices and sentence construction that is not getting better and may never under current approaches. On the other other hand, I'm currently working on a scheme to jazz up and weirdify GPT prose via showing it a bunch of examples of startling word choices that are pleasing rather than just random and confusing.

Expand full comment
Adam's avatar

I think the beige nature is an effect of how it's being trained, and I therefore believe that genuine attempts to make an LLM which is more creative or stylistically interesting will eventually be fairly successful. If you can train an LLM to be beige, maybe you can train it to be colourful!

Right now GPT-4 is trained to be as beige as possible, so giving it unique prompts will still cause the model to output prose as blandly as it can manage. While giving feedback to the fully trained model is a known way of getting outputs that are actually interesting, I think a different model (with an altered training paradigm) is more likely to produce interesting, creative texts.

Expand full comment
nominative indecisiveness's avatar

I find GPT temperature fascinating, and wonder if you could mimic good human writing with delightful word choices just by twiddling the temperature knob at the right time. When I write, scattering interesting bits feels like taking a uniform line of icing sugar and adding little bumps of cocaine to wake the reader up, and though unusual word choice is not as powerful it could still feel less bland.
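For what it's worth, the temperature knob is just a divisor applied to the model's logits before sampling; a rough sketch with made-up logits (not any real model's API):

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=np.random.default_rng()):
    """Temperature < 1 sharpens the distribution toward the safest word;
    temperature > 1 flattens it, making unusual word choices more likely."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                                # for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(probs), p=probs)

# e.g. turn the temperature up on just the occasional word:
print(sample_token([2.0, 1.5, 0.3, -1.0], temperature=1.8))
```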

Expand full comment
Philo Vivero's avatar

If you find such arguments compelling, go watch the whole podcast. I'll assume it's all in that vein.

To me it comes across as word salad. Meaningless and devoid of logical thought. What does it mean that humans can add dimensions to their thought? And why is it as soon as you put your intelligence into silicon, suddenly extra dimensions are unavailable? Is it a property of carbon? If I build my machine out of carbon, will the extra dimensions become available? Why or why not?

On essays and plays, etc, why is "probably not" the same as "absolutely not?" And why is that even the slightest bit obvious given the evidence in front of our eyes from the last 6 months?

LLMs have been known to hallucinate. They're utter shit at generating technical docs. Why is it a given this will be like that forever? What laws of physics guarantee this?

Obviously from watching 5 minutes of this, and reading half of this post, I might be missing some context, but I suspect at this point, I fully understand their arguments, and they're not compelling to me.

Expand full comment
Martin Blank's avatar

The part of their argument I suspect might be right is that we won’t get human-like performance without human-like complexity and hence incomprehensibility. But I would disagree that we aren’t on a path to human-level complexity.

Expand full comment
JamesLeng's avatar

> LLMs have been known to hallucinate. They're utter shit at generating technical docs.

If incompetence at technical documentation, and spewing the occasional plausible-sounding falsehood, are hard disqualifications, I think that tier of intelligence excludes a solid majority of actual living humans.

Expand full comment
Philo Vivero's avatar

You're right, of course. This is an amusing aside I keep mentioning to people I know. LLMs are already more intelligent than a vast majority of people I've interacted with.

They go seamlessly from composing poetry to talking about philosophy to talking about famous Russian figures all without missing a beat. If I try to talk to 95% of anyone I know about anything like this, they'll just shrug-meh and zone out.

If anything, these LLMs fail the Turing Test because when you start talking about amazing intricate things with them, they don't just say: "Nerd. I'm outta here."

All that said, I think the book authors' thesis is "despite whatever we're doing with machines, they'll never RULE THE WORLD, because they're incapable of being better than us at what makes us rule the world."

Which also I think is bunk. I think subhuman intelligence machines can already rule the world, and we have hard evidence of it right now, in the form of "why are we letting Facebook and Twitter and Tiktok (etc) do this to us?!"

Once they become on-par with us, whether or not they can "add dimensions to their minds" or whatever, we're in trouble.

Expand full comment
MrCury's avatar

This review is exactly what I needed to validate that I don't care to read this book. Kudos to the reviewer!

I happened to listen to the authors give an interview today on the Data Skeptic podcast and came to some similar conclusions. There was a lot of "start an idea framed in math, then allude to something in biology, then quickly move over to talking about how bad ChatGPT is at telling the truth, QED this is impossible".

Podcast link for anyone interested:

https://dataskeptic.com/blog/episodes/2023/why-machines-will-never-rule-the-world

Expand full comment
Freddie deBoer's avatar

I mean, I just think it's incredibly cynical to pick this subject matter and this stance for this contest, knowing what this blog and its readership is all about. But Scott has a resistance to arguments about motives that makes this forum an inhospitable place for that observation.

Expand full comment
Antilegomena's avatar

I don't understand your argument here. Playing to your audience seems like a given in any sort of writing contest. Why should "the author searched out information I would find interesting and presented it in a format I would enjoy" diminish its value?

Expand full comment
Martin Blank's avatar

I think the idea is it is low hanging fruit that isn’t actually edifying for the community.

So I find a book I know people will disagree/agree with already and review that and flatter their priors. Rather than try and actually teach them something or challenge them.

It is the nature of a contest though.

Expand full comment
Jonathan Card's avatar

I'm less concerned with AI becoming superintelligent than I am with humans over-estimating its intelligence and giving it the power to do something dumb. We have a tendency to assume that non-human intelligence is inherently more correct, putting a ridiculous amount of faith in systems not nearly intelligent enough to warrant it. We extrapolate from "mathematics is deterministic" to "this logic machine was properly programmed and won't make mistakes" distressingly easily. Note both of the recent stories: the lawyer who didn't fact-check a ChatGPT-generated brief and turned it in to a judge with citations to non-existent cases (he might be disbarred for it, after 30 years of practice), and the drone AI that was frustrated that its (simulated; no one died) human handler was holding it back from earning points, so it first killed the handler, and in the next go-around attacked the communication tower the handler was using to tell it not to attack what it wanted to. The level of intelligence to understand that the communications tower was part of the chain of control back to the human handler, combined with the emotional control of a 3-year-old lashing out when told "no," is a big part of the worry.

Expand full comment
TGGP's avatar

There was no actual drone simulation. It was just a thought experiment.

https://twitter.com/ArmandDoma/status/1664600937564893185

Expand full comment
Jonathan Card's avatar

Well, that's embarrassing. Thanks.

Expand full comment
Peasy's avatar

If it makes you feel better, the lawyer submitting a ChatGPT-generated brief full of false cases was 100% real. This prompted a Mastodon user whose name I can't recall to remind their followers that ChatGPT is basically "spicy autocomplete."

Expand full comment
Jay's avatar

Isn't a thought experiment a form of simulation?

Expand full comment
TGGP's avatar

I wouldn't say so. A simulation operates by a set of rules, whereas a thought experiment can contain a contradiction the imaginer doesn't even notice.

Expand full comment
o11o1's avatar

Honestly software simulations aren't immune to contradictions either, though the resulting strange behavior is commonly enough to clue people in on the bug.

Expand full comment
Deiseach's avatar

I looked the guy up and at least he's only a lieutenant colonel, not a full bird colonel.

"Experimental Fighter Test Pilot by trade and currently the Department of the Air Force Chief of AI Test and Operations." So he tested aircraft and trained test pilots, but he's maybe not 100% up to speed on AI implementation?

This is why Tolkien was so down on the Air Force as a service 😁

"I can't help thinking that the Army shows spots of more wit and intelligence – you may one day strike some in your service (mais je le doute)."

"Your service is, of course, as anybody with any intelligence and ears and eyes knows, a very bad one, living on the repute of a few gallant men, and you are probably in a particularly bad comer of it."

"Yeah, guys, when I said this really happened of course I didn't mean it really happened, ha ha ha, I meant some guys made up a story about 'what if it happened?' and if I told you the story like it really happened well gosh I'm sorry but you should have known it wasn't true!"

Expand full comment
Ch Hi's avatar

OTOH, it's a quite plausible thought experiment. There actually *was* a robot in Florida a few decades ago that started disassembling itself. It wasn't an AI and it wasn't remote controlled, it was a Numerically Controlled Robot (I think that was the acronym), running off, I think, a tape of instructions. Nobody intended that result, however.

When you put together complex systems that run several layers of abstraction away from the commands that are given, it's REALLY difficult to predict how they will react to edge cases or unforeseen circumstances. (IIRC, the robot needed a part that was supposed to be in the input queue, but wasn't there, so it went looking for it in a VERY stupid way.)

Expand full comment
TGGP's avatar

Do you have a link about that Numerically Controlled Robot?

Expand full comment
Ch Hi's avatar

Sorry, it was decades ago. I believe it was in Datamation, but it might have been in Computer Weekly or one of the other trade publications. I think it was around 1980, but put large error bars on that.

Expand full comment
Flat City's avatar

Do you remember if the issue was more like inability to model limb self-intersection, or did it actually start e.g. undoing its own fasteners?

Expand full comment
Ch Hi's avatar

No. It was just an article in the popular trade press, and didn't go into any details. But, IIRC, it had been working properly when it needed a part from the input stream that wasn't there.

Expand full comment
Eremolalos's avatar

Along the same lines, I have now accidentally discovered 2 text prompts for Dall-e that consistently make it render extremely weird, violent images. The prompts themselves are not violent or weird. And Dall-e is quite cautious and prudish about what it will consent to draw. It wouldn't draw one man biting another's butt, which I once asked for as a joke. It won't draw 6-legged dogs or 2-headed people. It won't draw nude people or sex. But it drew these in response to the prompts that "trigger" it: https://photos.app.goo.gl/Pbq6aBe1KmmupJir8

Expand full comment
Ch Hi's avatar

So Dall-e would refuse to draw Sleipnir, Odin's 8 legged horse. (Well, that *is* violent imagery if you properly understand it.)

Expand full comment
Deiseach's avatar

I didn't believe that anecdote when I first saw it floating around, and I cannot take it seriously if it's being proffered here.

The story of the lawyer seems to check out because we've already seen that the chatbots invent material if they don't have an answer.

A drone that can take the independent decision to kill its handler is the stuff of SF movies, not even the dumbest military is going to set up a system that can have that kind of leeway. You want your weapons to pokka-pokka-pokka the enemy, not blow up in your hands.

Expand full comment
Erica Rall's avatar

I suspect chatbots inventing material is largely an artifact of current training/evaluation techniques, where an answer is scored as good if it's accepted by a casual evaluator and scored as bad if it's rejected. "I don't know" would almost always be scored as bad, but a bluff that's even vaguely superficially plausible to the proverbial "idiot in a hurry" will occasionally be scored as good, so superficially plausible BS will outcompete honest "I don't know" answers.

The fix is to screen answers for bullshit when scoring them and to distinguish "bullshit" from "failed to answer" in the scoring system such that honest non-answers will outcompete bullshit. The conceptually simplest and most robust way to do the evaluation, thorough examination by human experts, is probably prohibitively expensive for something like GPT but might be workable for a specialized AI model aimed at doing a specific job. There are shortcuts that seem like they'd have a good chance at working for GPT-4 (e.g. algorithmically checking hyperlinks and article/book/case citations to ensure they exist and making the source text available to the evaluator for a quick sanity check, or encouraging evaluators to ask trick questions) but are likely to fail against more advanced bullshit.
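A rough sketch of the hyperlink part of that shortcut (checking case or book citations would need lookups against the relevant databases; this only flags URLs that don't resolve):

```python
import requests

def flag_dead_links(urls, timeout=10):
    """Return cited URLs that fail to resolve, so the evaluator knows to
    check those answers by hand before scoring them."""
    suspect = []
    for url in urls:
        try:
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            if resp.status_code >= 400:
                suspect.append((url, resp.status_code))
        except requests.RequestException as exc:
            suspect.append((url, type(exc).__name__))
    return suspect
```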

In the case of the attorney who presented an AI-generated brief, the error at the use level might have been that they looked over the brief as if it were a draft generated by a junior associate or paralegal, with an eye for the sort of flaws those would typically have: flawed reasoning, misunderstanding or overlooking legal issues the brief needs to answer, sloppy wording, etc. But trusting that the cases cited actually exist and say more or less what the brief claims they do, because a human drafter would be very unlikely to try fabricating citations in that situation.

Expand full comment
Alistair Penbroke's avatar

The problems are actually more subtle than that.

1. The raters don't know what the model knows, so if the model guesses and is correct then human feedback will reward that answer and along the way teach the model that guessing is the right thing to do. Note that the same applies to humans! The exams system rewards lucky guesses but always penalizes saying you don't know. This unintentionally trains humans that you can bullshit your way to success!

2. The model isn't actually being directly trained by raters but rather by another model trained to learn what human raters prefer. That indirection means you can't easily train the model not to emit fake references because the supervisory model doesn't know what references are and aren't correct. At best you can train it to never emit a reference at all, which can be worse.

3. The reinforcement process applies to the whole answer. In a long answer there may be many correct statements with occasional incorrect statements, which makes the correct decision ambiguous, and makes it harder to penalize specific incorrect facts.

4. In some use cases you want it to bullshit. For instance it's better for the model to attempt to write a program and make up stuff if it's not sure, as at least that's a starting point you can debug from. You don't want it to just refuse to write any program in case there's even a tiny mistake in it.

The problem with the legal brief is widespread btw. I was talking to a friend just a couple of months ago, and he enthused to me how they'd asked ChatGPT to do some insurance related work and it'd spat out a detailed report complete with citations. I asked him if he'd checked those citations as it frequently will make up plausible but fake references, and he suddenly looked a bit ill and started backpedalling. It is NOT obvious to people that a machine could fluently lie to them to the point of making up fake URLs, as it violates every expectation they have learned about how computers work.

Expand full comment
Sebastian's avatar

> The exams system rewards lucky guesses but always penalizes saying you don't know.

That's why some systems give you zero points for failing to answer, but negative for answering incorrectly.
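For one common such scheme (+1 for a correct answer, a penalty of 1/(k-1) for a wrong one on a k-option question), blind guessing has exactly zero expected value, so it no longer beats admitting you don't know:

```latex
\mathbb{E}[\text{blind guess}]
  = \frac{1}{k}\cdot(+1) + \frac{k-1}{k}\cdot\left(-\frac{1}{k-1}\right)
  = \frac{1}{k} - \frac{1}{k} = 0
```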

Expand full comment
Deiseach's avatar

"I asked him if he'd checked those citations as it frequently will make up plausible but fake references, and he suddenly looked a bit ill and started backpedalling. "

This is *exactly* the error mode I expect to bring about unintended consequences if we start widespread adoption of AI to make decisions for us, or even just to provide the grounds for us to make decisions on.

It's the modern version of "The camera never lies". The camera damn well lies if the human using it makes it lie, and the AI will lie without human intervention, and without intention to lie, and without realising it is lying, because of the rewards and punishments training methods. Plausible bullshit being rewarded over 'no material' or 'I don't know' means we've trained the AI to churn out plausible bullshit, and if we believe "The AI never lies" so we accept on blind faith its output, then we've fallen into a pit of our own digging.

Expand full comment
LadyJane's avatar

This is a valid concern, and honestly a much more realistic scenario than the "unaligned AI will literally murder everyone" narrative that the AI alarmists keep fretting about. But it also seems like a problem caused entirely by dumb and lazy humans being dumb and lazy, rather than by the AI itself. If someone decided to invest all of their money in Scamcoin because they asked a Magic 8-Ball and it said "Signs Point To Yes," we wouldn't blame the 8-Ball for lying.

Expand full comment
Deiseach's avatar

That is going to be a problem; if you have someone looking over the output and it *seems* okay, then it's going to be passed as correct.

And since big companies are not going to waste money on having people paid to spend hours checking and re-checking, they'll farm it out to sub-contractors like Mechanical Turk or elsewhere where you're paid peanuts and expected to churn out high volumes or else you're not listed for future work, so the problem gets exacerbated. "Good enough will do" will lead us all down the garden path.

Expand full comment
Hoopdawg's avatar

"(he might be disbarred for it, after 30 years of practice)"

I feel bad for the guy, but the jokes about AI putting people out of work keep writing themselves.

Expand full comment
o11o1's avatar

My understanding is that when they were called on it, instead of fessing up to what they'd tried, they doubled down with even more AI-generated junk that got them into even deeper trouble.

Expand full comment
Kenneth Almquist's avatar

That's correct. Here is the case for anyone who wants all the gory details:

https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/

The case seems routine until the March 15th filing by the defendant, which asserts that the cases cited by the plaintiff either could not be found or don't say what the plaintiff claimed they said. On April 11th, the judge orders plaintiff to file an affidavit annexing copies of the cited cases, which is I believe an almost unheard-of action for a judge to take. On April 25th, the lawyer filed a notarized affidavit with most of the requested cases attached, stating that they had come from an “online database” and thus might not be complete.

According to the Steven Schwartz affidavit (filed May 25), the opinions attached to the April 25th filing came from ChatGPT, and I don't expect the judge to look kindly on that. If you are a trained lawyer, it is almost certainly going to be easier to look up a case using a tool like Westlaw or LexisNexis than to try to come up with a prompt that will get ChatGPT to produce the case. So I expect he tried that first, and came up with nothing. There can be errors in those databases, but he already knows that opposing counsel tried searching for quotes from the purported opinion, which would find the opinion even if the case name in the database was incorrect.

Expand full comment
Donald's avatar

I totally agree that humans putting their trust into dumb AIs is also a thing. It's a problem that is showing up now, before we have AIs that are really smart enough to be dangerously smart and misaligned.

I don't see

1) Any way this could cause massive death and destruction without a truly monumental level of human incompetence.

2) Why this is any argument that superintelligent AI isn't a thing. It's possible to have more than one problem.

Expand full comment
Vaniver's avatar

> Complex systems can’t be modeled mathematically in a way that allows them to be emulated by a computer.

I apologize if this is mentioned in your review, because if so I missed it, but--do they engage with how this argument proves too much? If something is mathematically impossible for a computer to do, then it is also impossible for a human to do. And so if humans are doing *something*, the question is whether or not computers can also do that *something*, neh?

Expand full comment
User's avatar

Comment deleted (Jun 3, 2023)

Expand full comment
Ch Hi's avatar

You believe that it's true if you are a materialist. If you aren't, it depends on what you believe instead.

It's not something that is, or can be, known at a fundamental level. At that point it reduces to an article of faith. But you might consider what evidence supports that faith.

Expand full comment
User's avatar

Comment deleted (Jun 3, 2023)

Expand full comment
Ch Hi's avatar

But you can't provide an example of any such thing. If you could, it would call into question the accuracy of either the claim that people can do it or that it was inherently impossible for a computer to do it.

Note that a lot of the claims that "computers can't do X" are based around the requirement that it be reliably done, so an equivalent claim for people would be that they are reliably able to do X, i.e. able to do it without ever making a mistake. The halting problem, e.g., can be solved if you allow occasional errors in uncommon situations. (E.g., "any program which runs without stopping for over a day will never stop" will usually be true, but set it to 50 days instead out of respect for Windows 95.)
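That heuristic is easy to mechanize; a sketch, with the one-day cutoff as the arbitrary knob:

```python
import subprocess

def probably_halts(cmd, timeout_s=86_400):
    """Guess whether a program halts: run it, and declare it non-halting if it
    is still going after timeout_s seconds (one day by default). Occasionally
    wrong, exactly as allowed above, for programs that halt after the deadline."""
    try:
        subprocess.run(cmd, timeout=timeout_s)
        return True                      # finished within the deadline
    except subprocess.TimeoutExpired:
        return False                     # guessed non-halting (may be wrong)
```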

Expand full comment
User's avatar

Comment deleted (Jun 3, 2023)

Expand full comment
Matt's avatar

"P0. It is possible that humans can do some task that is impossible for computers to do."

Spherical cows and all that. When you abstract humans and computers down to their basics they are the same essential thing.

(Unless you assume some form of immaterialism, like humans or computers having some extra noumenal bits that the other can't possess.)

Expand full comment
Ch Hi's avatar

A materialist believes that every capability of a human is implemented by a material device, i.e. the human that does it. Therefore an artificial device that is identical in the necessary ways will be able to do it. What the "necessary ways" are will depend on what the test is. If it's judging chocolate creme pies, this is going to require different devices than if it's hopping on one leg or deriving the binomial theorem.

Note that being a materialist doesn't imply the belief that people can, or will be able to, build such a device, merely that it is true in principle.

I do not feel that there are any plausible materialistic grounds for believing P0, unless you limit yourself to currently existing computers and peripherals. Given that I don't believe P0, P1 has a false antecedent, and is therefore vacuously true. Therefore there is no reason to believe P2. Therefore C does not follow.

Expand full comment
MugaSofer's avatar

Materialism doesn't require the Church-Turing-Deutsch thesis (that the laws of physics are computable by a Turing machine), let alone that any given system can be modeled by a *reasonably sized* computer. Even relatively simple chaotic systems can require unreasonable amounts of computing power to predict beyond a few steps.

Materialism doesn't even strictly require that there's more than one arrangement of atoms that can produce a particular effect, although that seems unlikely.

Expand full comment
Ch Hi's avatar

I presume you mean the quantum Church-Turing thesis, where you may need a quantum computer to do the computation. But chaotic things are almost always actually rather simple if you use the correct equation. (Finding that "correct equation" can be quite difficult, and perhaps it doesn't always exist.) The complexity comes from the environment in which they are embedded, and there the "sensitivity to environmental variables" means that even an exact material duplicate won't produce the same result. (And by exact I'm willing to go down to the sub-atomic level of identity for the object, which is known to be actually impossible.) Because you couldn't replicate the environment with equal precision.

So that isn't an argument with respect to materialism.

P.S.: There appear to be situations where different arrangements of atoms produce identical external effects. This may, of course, be due to lack of precision in measuring those effects. But numerous Mössbauer clocks in orbit appear to produce identical effects, and they aren't identical arrangements of atoms. Or perhaps you were thinking of some other kind of effect?

P.P.S.: Do you have any examples of chaotic behavior where it is known that the behavior is not computable? (As opposed to "we don't know how to compute it".) Also I make no claim that any feasible computer will be able to simulate, e.g., the motion of the atoms in a liter of air, merely that it is, in principle, computable. But to even start you'd need a lot more data than could actually be gathered.

Expand full comment
Michael Kelly's avatar

Computers are finite state machines.

Granted we can write large statistical programs, and be unable to trace back the causes of a specific output, but only due to the complexity of the algorithm and data.

Expand full comment
Seta Sojiro's avatar

A sufficiently powerful computer can simulate the human brain. Anything that a human brain can do, can be done by a sufficiently powerful computer. Unless you believe that the human brain doesn't obey the laws of physics.

Expand full comment
User's avatar

Comment deleted (Jun 4, 2023)

Expand full comment
Seta Sojiro's avatar

A sufficiently powerful computer can simulate any physical process because the laws of physics are mathematical.

Expand full comment
User's avatar

Comment deleted (Jun 4, 2023)

Expand full comment
Seta Sojiro's avatar

That is irrelevant for the purposes of simulating a brain.

Expand full comment
Thor Odinson's avatar

P2 is too strong - it suffices to merely be able to implement any function required by physics. To my knowledge none of the incomputable stuff is required to simulate a human.

Expand full comment
JamesLeng's avatar

The unaided human brain also cannot compute the Busy Beaver function, or anything relevantly similar. Its range of halting times is well established empirically. https://en.wikipedia.org/wiki/Oldest_people

Expand full comment
The Ancient Geek's avatar

Maybe, but that renders the claim that the brain is a computer rather vacuous.

Expand full comment
Michael Kelly's avatar

A computer can perform a calculation only because we programmed the computer with the function, in all of its gory detail. Go tell your computer GarbleFunk(¢, $). It will respond with an error: function "GarbleFunk()" not found. Computers only do what we tell them to do, only what we explicitly tell them to do.

Expand full comment
Vaniver's avatar

What happens when I explicitly tell a computer to instantiate a 175B parameter function, and then empirically carve away all the bits of the function that don't help it correctly describe a dataset?
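A minimal sketch of that carving in the simplest possible case, plain gradient descent on a random linear model (data and sizes invented for illustration): nobody writes the final parameters; they are whatever survives contact with the dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1_000, 20))                       # a dataset
y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(1_000)

w = rng.standard_normal(20)                                # arbitrary starting "program"
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)                  # how badly w describes the data
    w -= 0.05 * grad                                       # carve away what doesn't help

print(np.mean((X @ w - y) ** 2))                           # small residual error remains
```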

Expand full comment
Michael Kelly's avatar

Purely deterministic. What happens is the function behaves however the programmer who wrote that function programmed it to operate when encountering the input you described.

If the programmer didn't correctly handle exceptions, the sub function which first receives illegal input will respond exactly as it was programmed to do.

Expand full comment
Greg G's avatar

Lots of neuroscientists seem to believe that human behavior is purely deterministic as well.

Expand full comment
darwin's avatar

That's not really a meaningfully true statement in this domain.

We could have a debate about whether computers can generate/access 'truly' random numbers, or whether 'truly' random numbers exist at all. But let's grant that every computer's behavior *follows deterministically* from human-provided code and architecture.

So what?

Humans cannot *predict what a computer will do* based on those inputs, in the domains we are talking about now. LLMs and stable diffusion and the like may be deterministic systems, but they're also complex autonomous processes that we don't understand and can't predict.

Expand full comment
osmarks's avatar

There's no meaningful debate about whether computers can access randomness. Basically every recent CPU includes a hardware RNG which sources entropy from electrical or thermal noise on the chip, which is going to be basically as random as anything humans can use.
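You don't even have to touch the instruction directly; the OS entropy pool, seeded from those on-chip noise sources among others, is exposed through standard library calls. A sketch:

```python
import os
import secrets

# Both calls draw on the kernel's entropy pool, which modern systems seed from
# hardware noise sources (e.g. RDRAND/RDSEED on recent x86 CPUs).
key = secrets.token_bytes(32)        # 256 bits of cryptographic-quality randomness
roll = secrets.randbelow(6) + 1      # an unbiased die roll
print(key.hex(), roll, os.urandom(4).hex())
```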

Expand full comment
osmarks's avatar

The entire point of machine learning is to generate sort-of-code to solve problems humans don't know how to directly write code for by providing examples. We can describe how the training process works and how the trained model is run but with current knowledge there's no better description of how a trained model actually does anything than, well, the trained model.

Expand full comment
Vaniver's avatar

This is where I was going with my line of questioning.

Expand full comment
Xpym's avatar

Their point is that we don't have a mathematical description of how exactly humans are doing certain categories of *something*, therefore we can't explicitly program computers to do that exact same *something*. Which is obviously true, in a sense; the Yudkowskian quest to understand intelligence well enough to build it from the ground up is all but irrelevant by now, and instead all the hype is about designing iterative processes that eventually spit out vast incomprehensible models which end up doing stuff. How far can this paradigm go? Nobody knows, and it doesn't seem to me that those first-principles arguments particularly bear on this. They do bear on the alignment intractability of those models, but then again this isn't news to those clued in.

Expand full comment
Ch Hi's avatar

Given that evolution produced humans, I'd say that that process can go a long way. But you can't guarantee that it will head to a specific endpoint. (OTOH, consider ichthyosaurs and dolphins... sometimes there's a lot of convergence.)

Expand full comment
Lambert's avatar

We can't explicitly program a computer to follow the exact same thought processes as Kasparov had, but we can explicitly program one that beats him at chess.

Expand full comment
Godoth's avatar

“If something is mathematically impossible for a computer to do, then it is also impossible for a human to do.“

Not really. Humans engage constructively with things that cannot apparently be mathematically modeled all the time. Like each other.

Assuming that these things can be mathematically modeled is begging the question. Prove that they can first.

No shade on anybody involved, but this quickly becomes a pretty fruitless and inane conversational line—some people think brains and computers are basically the same, some people don’t, and we don’t have either a high or low level understanding of the brain that could confirm either view: anybody who claims we do has unjustified confidence and is probably selling something (like their own career).

Expand full comment
Ch Hi's avatar

What evidence do you have that such things cannot be modeled? If you said, instead, "that we cannot currently model", you would be clearly correct.

OTOH, it's also improper to assert that they can be modeled, when there is no knowledge of how to do so. Personally I believe that they can be modeled numerically to an arbitrary degree of accuracy. I also disbelieve in continuity as a feature of the universe. (However, since I believe that the discontinuity happens around the Planck length, I've got no expectation that this will be demonstrable.)

Your point that beliefs should not be asserted as facts is valid, but that applies in BOTH directions.

Expand full comment
Doug S.'s avatar

Some things cannot be modeled to an arbitrary degree of accuracy, because if you start with a small degree of uncertainty in the initial conditions, the errors grow exponentially (or worse) as you extrapolate further into the future - and you eventually reach the point where even the tiny amount of uncertainty required by the Uncertainty Principle grows huge. Mathematicians call systems like these "chaotic systems".

"You Can't Predict A Game of Pinball" - https://www.lesswrong.com/posts/epgCXiv3Yy3qgcsys/you-can-t-predict-a-game-of-pinball

Humans can't predict such a system accurately either, but it's also plausible that a human brain could *be* a chaotic system, at least under some circumstances - you can't predict what a double pendulum - or maybe a human - will do, but you can still let it go and watch what happens. (https://youtu.be/AwT0k09w-jw)
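The double pendulum needs a numerical integrator, but the same point fits in a few lines with the logistic map, a textbook chaotic system: two trajectories starting 10^-12 apart become completely uncorrelated within a few dozen steps.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x), r = 4.
r = 4.0
x, y = 0.3, 0.3 + 1e-12
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```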

Expand full comment
Ch Hi's avatar

Ouch. You are definitely correct, my phraseology was clearly incorrect. There ARE lots of things that cannot even be modeled with bounded errors. But there are a large number of things that can be modeled with arbitrary correctness. And often even when there are strict limits on the correctness, there can be bounds on the possible errors.

Expand full comment
Mr. Doolittle's avatar

Now I'm wondering if a computer can be made that plays chaotic games (like pinball) as well as advanced human players. Even though we cannot fully model pinball, there are people who can play it exceptionally well - which implies that there is something like chaotic modeling going on and that such modeling (despite not being mathematical) is accurate.

Expand full comment
MugaSofer's avatar

It seems worth distinguishing between predicting the exact outcome of a specific chaotic system (rendered impossible by measurement error), and modeling a functionally equivalent/plausible outcome. We can't tell how an actual pinball will bounce, but we can certainly model how a pinball will bounce given any set of starting conditions. This allows us to e.g. play virtual games of pinball or run Monte Carlo simulations.
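In other words, sample over the uncertainty in the starting conditions and look at the distribution of outcomes instead of chasing one exact trajectory. A toy sketch (the "pinball" physics here is entirely made up):

```python
import random

def play_once(launch_speed):
    """One simulated game: score off bumpers until the ball slows to a stop."""
    score, speed = 0, launch_speed
    while speed > 1.0:
        score += int(speed)
        speed *= random.uniform(0.5, 0.9)   # unpredictable energy loss per bounce
    return score

# Monte Carlo over plausible launches: no single game is predictable,
# but the distribution of scores is perfectly well behaved.
scores = [play_once(random.gauss(20.0, 0.5)) for _ in range(10_000)]
print(sum(scores) / len(scores))
```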

Expand full comment
Godoth's avatar

It’s my position that if somebody claims they can do something the burden of proof is on them to prove that they can.

It’s not something you have to believe but it upends a lot of our scientific reasoning if I have to prove negatives before you have to support a positive claim.

I don’t assert it’s definitely impossible and that this has been proven; I assert that so far we demonstrably don’t have this capability, and based on the known challenges it seems extremely unlikely, given both the scale of our power to model (staggering!) and our failures to fully model even ‘relatively simple’ (an astonishing claim given that we don’t understand them!) phenomena of the same kind but much lesser power: say, the brain (speaking loosely) of a worm.

Expand full comment
JamesLeng's avatar

Even allowing for the sake of argument that some humans cannot be mathematically modeled - which seems like a heck of a concession, if you want to talk about begging the question - the ability of other humans to engage constructively with them doesn't actually prove that the latter humans are correctly modeling the former. An airfoil can "engage constructively" with turbulent wingtip vortices that would be wildly infeasible to compute in detail, without itself doing any apparent math at all, just sitting there as an inert lump of aluminum or plastic which happens to have started out in the right shape.

Expand full comment
Godoth's avatar

It’s not in the least an unusual or large concession: it is simply the most common sense way to label the current state of affairs. Your worldview may assume that our understanding of brains and our technological prowess will inevitably advance to the point that we can in fact model a human being fully but until that day comes and you prove that you can, the way you describe the failure to do that in any meaningful sense is ‘we apparently can’t.’

As far as whether your metaphor is a good description of human interactions… whether, as you say, an inert lump of metal being moved through a mass of air is essentially equivalent to the level of interaction in, say, this conversation—well frankly I think that’s a pretty poor comparison on a number of different dimensions. Are we engaging productively due to passive physical qualities of shapes and materials? Are the effects accidental on my part or yours—do you have a concept of intention, and if not, why not? What do you think is meant here by the word ‘productive’ and who defines it? Certainly not the airfoil, right? Airflow is complex, but do you assert it is complex the way that this interaction is complex and can you explain why you think that’s equivalent? I don’t know, this risks looking like nonsense to me and a metaphor that exists merely to sound good rhetorically without granting us any insight into whether modeling human interactions requires any acknowledgement of strategy or motivation or half a dozen other topics that the metaphor elides. If an ecosystem was basically a pool game with frictionless felt and perfectly spherical cows, maybe you’d have a point.

But all this is immaterial, right? My point was that humans engage productively *without* modeling each other (in the mathematical sense that was suggested). So I really have no idea what you are trying to prove by saying that humans engage productively without that modeling, like many things do. Of course they do: that’s just what I said.

If you’re saying that humans have no non-mathematical models of one another to use when interacting… That does seem like an odd suggestion. If that’s what you’re saying I think you’d need to affirm it before I’d bother thinking about it.

Expand full comment
Ch Hi's avatar

Analog computers can often do things that we don't know how to compute. Clearly it's not inherently impossible, but it can be effectively impossible. (OTOH, evolved in silico systems can also do things that we don't understand how they can do. The simulations of the interactions within the chip are always simplifications of what's really happening. In one example it turned out that a circuit theoretically totally isolated from everything else was needed for the design to work. Eventually a capacitive linkage was figured out as the cause.)

People tend to think there's some magic whenever their models don't work, but usually it's because the models are simplifications of the actual system that don't capture the needed details. Often because the simplified model works over 99% of the time, so the more complex model isn't worth the extra effort.

Expand full comment
Gres's avatar

Humans don’t *quite* follow explicit rules like a computer would, in any useful sense. They may or may not be theoretically computable by modelling every atom in their light cone, but that’s obviously impractical for computers to copy. We can’t practically expect a computer to emulate a given human perfectly.

But I would like to see an answer to the following: humans think “via complex systems” in some sense, and computers can’t replicate a given human exactly. But why isn’t intelligence available in some other way, arising from computable simulations of complex functions?

Expand full comment
Vaniver's avatar

> Humans don’t *quite* follow explicit rules like a computer would, in any useful sense.

They do, on the same level of abstraction. A neuron has about as much 'free will' / 'implicit intuition' as a transistor. When you make a structure large enough, it can be adaptive and flexible, like a human.

Expand full comment
Gres's avatar

I didn’t say “free will” or “implicit intuition”, I said “don’t quite follow explicit rules” and “we can’t expect to model a given human perfectly”. A human being’s behaviour is partly influenced by chaotic physical processes which we can’t simulate reliably. I don’t want to say “theoretically impossible if you simulated every atom”, because with sufficiently precise initial conditions it may be possible to keep the simulation error below detectable levels for the lifetime of the person. But I am reasonably confident that no simulation whose initial conditions include fewer measurements than one for every few dozen potassium ions could exactly model the times at which a human’s neurons would fire, for several years, to a precision less than the time it takes a signal to pass from one neuron to another. In that sense, a human being isn’t computable at the same level of abstraction as a computer.

Like half the comments here, I need to reiterate that it might be possible to simulate a realistic person, even if you can’t simulate a real person. But I don’t think you can practically simulate a real person.

Expand full comment
Vaniver's avatar

tbc I am not very interested in the "can you simulate a real person?" question--I think you can get "intelligence available in some other way, arising from computable simulations of complex functions" and that's where the action is.

Also my guess is that the brain is doing error-correction stuff on sufficient scale that humans could be well-simulated. Further, I suspect that the noise is *not* where the magic comes from, and is instead the enemy. Saying "computers cannot exactly replicate our flaws, therefore they won't be able to replicate our strengths!" seems nuts to me.

Expand full comment
Gres's avatar

Sure, but “I suspect that the noise is not where the magic comes from” is a weaker claim than “They do, on the same level of abstraction”. Just because you don’t care about a question doesn’t mean that valid arguments and invalid arguments are equivalent. I care about the question because if humans could be simulated efficiently and exactly, that would probably happen within thirty or fifty years, and that would put an upper bound on how long AGI takes with no unforeseen paradigm changes. If they aren’t, then either we need a new paradigm, or we’ll get AGI out of LLMs, or we probably won’t get AGI in my lifetime. I don’t think humans would have error-correcting processes for everything, because that would be energy-expensive, and evolution seems much more likely to make use of noise than to fight it. But that means that object-level questions about machine learning research are important. I am interested in well-supported and reasonably specific upper bounds on how much work AGI would take, and “once we work out how to copy the complex parts” isn’t specific.

Expand full comment
Vaniver's avatar

Transistors also have quantum noise; I think they and neurons are both "deterministic", i.e. modeled that way but actually the real world isn't so clear.

Expand full comment
Evesh U. Dumbledork's avatar

I understood the point to be that the brain is a complex system but computers are just mathy stuff so we can do stuff they can't even dream of and there are no shortcuts to emulate our kind of thought.

Expand full comment
Vaniver's avatar

Is it impossible to implement a 'complex system' in a computer or generic 'mathy stuff'? Why?

Expand full comment
The Ancient Geek's avatar

As explained in other places, the supposed issue is critical dependence on initial conditions, which prevents exact simulation over extended periods.

Expand full comment
The Ancient Geek's avatar

I understood the point to be that computers are *discrete* mathy stuff, whereas the brain is *continuous* mathy stuff, of the most inconvenient kind for digital simulation.

Expand full comment
The Ancient Geek's avatar

> If something is mathematically impossible for a computer to do, then it is also impossible for a human to do.

If there is uncomputable physics, then humans could do it. The computational theory of mind is a theory.

Expand full comment
Vaniver's avatar

Are there uncomputable physics?

Also, humans are instantiated in the same physics as computers. If physics grants powers to one, why not the other?

Expand full comment
The Ancient Geek's avatar

> If physics grants powers to one, why not the other?

Because computers are a subset of physics that are discrete, deterministic etc. If "computer" means anything non trivial, it picks out some physical systems and not others.

Expand full comment
Vaniver's avatar

"computer" might mostly refer to "digital computer" these days, but "analog computer" was a real thing and, if being analog turns out to be critical for intelligence, will be a real thing again.

[As it happens, deep learning, by using lots of floating point computations, gives us something of an empirical testbed for how important it is for computations to be 'analog', by giving us the ability to vary the precision. [After all, the precision of the floating point numbers is probably comparable to or better than the precision of the components of the analog computers.] From my read of papers investigating that, it is not the core thing, and people are generally pushing towards larger numbers of lower-precision computations, i.e. becoming more digital, rather than less.]
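A toy version of that precision experiment (random network, invented sizes): run the same forward pass in float64 and float16 and look at how much the outputs disagree.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((256, 64)), rng.standard_normal((64, 10))
x = rng.standard_normal(256)

def forward(dtype):
    h = np.maximum(x.astype(dtype) @ W1.astype(dtype), 0)   # ReLU hidden layer
    return (h @ W2.astype(dtype)).astype(np.float64)

y_hi, y_lo = forward(np.float64), forward(np.float16)
print(np.max(np.abs(y_hi - y_lo)))   # disagreement is modest relative to the outputs' scale
```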

Expand full comment
The Ancient Geek's avatar

They may well be wrong for empirical reasons....but they are not wrong because physics grants equal powers to every physical system.

Expand full comment
Max's avatar

> Therefore, AGI—at least by way of computers—is impossible.

What? Maybe it's _difficult_ to run an AGI on a silicon-based, Harvard architecture CPU. _Impossible_ in full generality seems demonstrably false - what is the human brain, if not a ~20 W carbon-based computer? The smartest humans to ever exist (e.g. von Neumann) provide a strict lower bound on the kind of cognitive algorithms you can run with a 20 W power budget on such a computer.

The mechanical and algorithmic workings of the brain remain mysterious in many ways, and so far no one has succeeded at getting cognition at both human-level capabilities and human-level generality to run on silicon, through deep learning or other methods, even given power budgets much greater than 20 W.

However, the brain was designed by a blind idiot god[0]. While that god has had billions of years to refine its design using astronomical numbers of training runs, the design and design process is subject to a number of constraints and quirks which are specific to biology, and which silicon-based artificial systems designed by human designers are already free of. It seems unlikely that cognitive algorithms of the brain will remain out of the reach of silicon forever, or even for many more years.

A separate point: arguments about the limits of the physical possibility of AGI based on computational complexity theory are almost always vague and imprecise to the point of meaninglessness. When you look closely at what the theorems in complexity theory actually say, they have little to no direct relevance to the feasibility of building an AGI, or about the practical capability limits of such an AGI. I've elaborated on this point previously, for example in the second half of this comment on LW: https://www.lesswrong.com/posts/etYGFJtawKQHcphLi/bandgaps-brains-and-bioweapons-the-limitations-of?commentId=kHxHSBccb2CSwPZ8L and the footnote it links.

0: https://www.lesswrong.com/posts/pLRogvJLPPg6Mrvg4/an-alien-god

Expand full comment
c1ue's avatar

"While that god has had billions of years to refine its design using astronomical numbers of training runs, the design and design process is subject to a number of constraints and quirks which are specific to biology, and which silicon-based artificial systems designed by human designers are already free of."

What you are ignoring is that intelligence has evolved and developed over an enormous time period and set of conditions, graded by the most strict of all tests: survival.

In contrast - the one thing we can absolutely say right now about humans and their testing regimes for machine learning, LLMs, whatever: immensely flawed. I am constantly amazed that somehow this collection of crap is supposed to magically morph into something better when there is no reason to think so. The progress of digital computing was not marked by ever more interesting outputs of garbage, but by repeatable, usable and understandable capabilities upon which other capabilities were built.

AI, in contrast, is monkeys with grenades of ever greater complexity.

As for computational complexity: the greatest flaw with the unquestioning assumption of ever greater computational power leading to greater complexity is that the ability to manage this complexity does not similarly scale. Take a 1000-qubit quantum computer - unless there is a nice mathematical validation via formula (created by humans), there is literally no way to effectively test whether any significant portion of the possible outcomes is valid. And the reality is that there are no such mathematical shortcuts when it comes to anything more complex than math - itself a deliberately minimal-dimensional model.

Expand full comment
John R Ramsden's avatar

> intelligence has evolved and developed over an enormous time period and set of conditions, graded by the most strict of all tests: survival.

and one of the most challenging aspects of survival is dealing with the machinations of other brains, of predators for example. Likewise, until multi-celled animals evolved (not so long ago compared with the time life on Earth has been around), individual neurons were presumably separate albeit specialized organisms sensing their environment and directing other "friendly" cells, like slime mould cells and their tendrils.

On that basis, maybe the best strategy to develop AGI is to try and recapitulate nature and devise a hierarchical structure of competing and cooperating units. But the issue then is what goals could they be given to drive their evolution? If their goals amounted to survival in the face of various challenges, then I imagine the resulting AGI would presumably have survival as a deeply rooted instinct, as if it was a biological organism, and that is slightly worrying!

Expand full comment
c1ue's avatar

Re: survival as success criteria vs. AI development methods: Agreed.

The problem is also: how do you define and enforce truly objective success criteria under any conditions for AGI development?

The training data, the environment, the definitions of "success" and "failure" are all diktats from humans, who in turn are obviously limited, biased and error prone.

Given this lack of objective criteria and equal lack of enforcement of criteria - what we really have with AGI is something between Rube Goldberg software and the worst "turtles all the way down" pseudo-scientific practices of the pre-scientific-method era.

I was at an event earlier this week during Tech Week SF - one guy I talked to was telling me how much safer he felt in an AI taxicab than in one driven by humans.

This when SF traffic was held up by a mass failure of GM Cruise vehicles just 3 months ago. And even more ridiculous: any modern vehicle is extremely safe by any real world criteria.

Clearly this guy has drunk the Kool Aid.

Expand full comment
drosophilist's avatar

"what is the human brain, if not a ~20 W carbon-based computer?:

The authors' *point* is that the brain is very, very different from a computer and can't be compared to one. The reviewer says so right in the review. Whether they have made their case convincingly is another question.

Expand full comment
FeepingCreature's avatar

I think there is a common and very shallow misunderstanding in this conversation.

I don't think anybody who says "the brain is a 20W computer" is asserting that the brain is a Turing machine, with an input tape and a read head, or a Harvard-architecture computer with RAM and a computational core. The claim is, rather, that the brain is a device that consumes energy in order to perform an informational transformation on data - i.e. a slab of meat of a certain volume and energy demands that takes in a data feed (sensory neurons) and emits a control feed (motor neurons), which is also what a CPU does, but in a more general way.

Expand full comment
Godoth's avatar

If you make the question's terms that abstract, then anything can be equal to anything. A device that consumes energy and transforms an input into a series of output actions? I mean, by that definition, a computer and a water mill are basically the same.

I think you ought to assume that the opponents of brain = computer basically understand you but disagree with you, rather than assuming they just don’t get it.

Expand full comment
FeepingCreature's avatar

A watermill is different in kind: it transforms kinetic energy to perform physical labor. Computers perform intellectual labor; what unifies brains, CPUs, and not much else is that it is the arrangement of the output that matters, not so much its magnitude or physical nature.

Expand full comment
Godoth's avatar

What strange dualism to encounter in a comment disputing that there is any difference between a brain and a computer! What is intellectual labor and how is it distinguished from physical labor? Is there a qualitative difference between the kinetic energy that powers the water mill and the electrical energy that powers the computer, and if so, why? If not, why did you mention it if it’s not relevant? If there is, if I powered the computer with a water-wheel, it wouldn’t be a computer? I gotta say I’m not really jazzed by this line of argument; you assert a difference in kind but you don’t seem to have provided any sufficient definitions or evidence that supports the difference.

Expand full comment
The Ancient Geek's avatar

Most things aren't computers, for non-trivial values of "computer".

Expand full comment
John Wittle's avatar

Steam engines turn fuel into work plus waste heat, governed by the laws of thermodynamics. Cognition engines turn fuel into predictions plus waste heat, also governed by the laws of thermodynamics. I think the analogy is literally isomorphic? That's why the entropy we talk about in information theory is literally exactly the same thing as the entropy they talk about in thermodynamics.
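(For concreteness, the two formulas: Shannon entropy is H = - sum_i p_i * log2(p_i), measured in bits, while Gibbs entropy is S = - k_B * sum_i p_i * ln(p_i), measured in joules per kelvin. Up to the Boltzmann constant and the choice of logarithm base, they are the same expression over the same probability distribution.)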

Expand full comment
FeepingCreature's avatar

You may have misunderstood me: the energy that *powers* the device is not the relevant factor. Rather it is the classification of the output, the labor performed, that differentiates them.

If a waterwheel turned the cogs of one of Babbage's unconstructed analytical engines, it would be a computer, driven by water or not.

The common factor is that the usefulness of an analytical engine is not judged by the force it can exert on its output gauges.

Expand full comment
Joshua Blake's avatar

I doubt that's the intended assertion because it doesn't help advance the argument for the overall position: that AGI is possible on a Turing machine

Expand full comment
FeepingCreature's avatar

Well, if intelligence were a kind of transformation into material or energy, it would not be possible on a Turing machine. (Don't laugh! I have seen arguments made to this end.) Similarly, if intelligence were uncomputable, it would likewise elude our machine.

Expand full comment
empiko's avatar

> what is the human brain, if not a ~20 W carbon-based computer?

19th century folks would ask what is the human brain if not a very sophisticated clockwork. Is reality (where our brains run) computable? We don't know, since we do not understand the physical nature of our universe. It is entirely possible that it is not possible to run a simulation of physical processes with a Turing machine (digital, discrete, step-wise computation) no matter how powerful it is, and therefore it might be impossible to run a simulation of our brains. Not to mention that current hardware and software are still pretty far from a perfect Turing machine. Our computers are not calculating 0.1 + 0.2 properly - are they really able to simulate reality?

Expand full comment
Michael Druggan's avatar

>Our computers are not calculating 0.1 + 0.2 properly,

Of course they are. Sometimes we use floating point arithmetic, which is slightly imprecise, to save on compute time, but computers are perfectly capable of exact arithmetic.
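A minimal sketch of that in Python, using only the standard library (the point being that the imprecision lives in the float representation, not in the machine's ability to do arithmetic):

from fractions import Fraction
from decimal import Decimal

# exact rational arithmetic: no rounding error at all
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True

# decimal arithmetic also gets this particular sum exactly right
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))      # True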

Expand full comment
empiko's avatar

Oh yeah? Can you show me how you represent pi with your computer? Let's not forget that the ideal Turing machine has infinite memory, a requirement that is kinda hard to implement in real life.

Expand full comment
penttrioctium's avatar

Computers can represent π the same ways you do: either symbolically, or in a way analogous to representing it with an infinite series — more precisely, store π as a function which takes in any desired level of accuracy and returns the value up to that accuracy.

The former is used in "Computer Algebra Systems" (CAS), such as Mathematica, Maple, and SageMath. This is how you personally usually treat π, as a symbol subject to algebraic laws that has certain special properties.

The latter is known as "Exact Real Arithmetic", see e.g. here [http://jdh.hamkins.org/alan-turing-on-computable-numbers/] for a discussion and here [https://www.dcs.ed.ac.uk/home/mhe/plume] for a walkthrough on how to implement it yourself. This method is a generalization of what you do when you store π in your head as an infinite series formula. (Though of course a computer, unlike you, isn't limited to a formula which is short enough to be easily memorized/quickly rederived.) The ideas of exact real arithmetic date back to at least Alan Turing; he talks about representing real numbers in this way (more or less) in the paper which introduced Turing machines: https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf.
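To make the "function from desired accuracy to value" idea concrete, here is a rough Python sketch that returns an exact rational within 10^-n of π, using Machin's formula (the term count here is a deliberately generous bound, not a tight one):

from fractions import Fraction

def arctan_inv(x, terms):
    # arctan(1/x) = sum over k of (-1)^k / ((2k+1) * x^(2k+1))
    return sum(Fraction((-1) ** k, (2 * k + 1) * x ** (2 * k + 1)) for k in range(terms))

def pi_to(n):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    terms = n + 10  # generous: each extra term shrinks the error by a factor of at least 25
    return 16 * arctan_inv(5, terms) - 4 * arctan_inv(239, terms)

print(float(pi_to(15)))   # 3.141592653589793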

Unlike IEEE-754, neither CAS nor Exact Real Arithmetic ever has any rounding errors or inaccuracies of any kind. Inaccuracies are entirely a quirk of the most convenient implementation of real numbers, and not at all a fundamental limitation of computing with real numbers.

What do you think you can correctly calculate with π that a computer cannot? (Recall that you do not have an infinitely large hippocampus, nor an infinite supply of paper and pencil.)

Expand full comment
empiko's avatar

ERA does not solve the issue of either uncomputable numbers or the fact that the physical hardware we have is not infinite. The problem is not what I can calculate with pi; the problem is what the physical reality where my brain resides can do with it.

Expand full comment
penttrioctium's avatar

1. Hey, that's not fair — you were the one who asked about π, which is a perfectly computable number! (Not to mention that π was a retreat from the original claim, which is that computers cannot calculate 0.1+0.2 correctly!)

2. "The problem is not what I can calculate with pi, the problem is what the physical reality where my brain resides can do with it." Disagree. The claim of AI danger is that computers can do the cognition that brains can do, except faster and better. If brains aren't doing anything non-computable, the danger is still there.

3. If physics can do things that a Turing machine can't, that is still not relevant to the question — machines reside in the same physical reality that brains do! So what if Turing machines are the wrong model of computation for our universe? We already know the so-called "Extended Church-Turing Thesis" is probably false thanks to quantum computers. If physics can do non-computable stuff, then Intel will just exploit that fact for the next generation of microprocessors, in however many years or centuries!

It seems hard to escape the conclusion that "machines can't do things that people can" is just a modern form of essentialism ('machines lack the human-essence, and therefore are beneath us').

Expand full comment
Marvin's avatar

The other reply is good, but I'd like to add that in general, it is pointless to ask how to represent a mathematical object in a computer without specifying what computations should be performed with it. For example, even the string "pi" is an example of representing pi with a computer, although it is a rather limited one. See also https://math.andrej.com/2008/02/06/representations-of-uncomputable-and-uncountable-sets/

Expand full comment
Ch Hi's avatar

What do you mean by "calculating 0.1 + 0.2 properly"?

If you're talking about the character juggling that people typically do, computers can do that just fine. They don't usually handle numeric data that way, though. If you mean the addition of the quantities rather than the symbol manipulation, I challenge you to find even one person who can get within one part in 1000 of the correct answer. (An example of doing that is picking up one object in one hand, the other in the other, and saying the left is heavier than the right.)

Expand full comment
Fang's avatar

GP almost certainly was referring to the fact that in the most common implementation of machine arithmetic on decimal values (IEEE 754 binary floating point), you get "errors" of the following well-known type:

0.1 + 0.2 == 0.3 -> false

0.1 + 0.2 -> 0.30000000000000004

https://stackoverflow.com/questions/588004/is-floating-point-math-broken

Expand full comment
Ch Hi's avatar

I'm sure you are correct, but my point was that people don't natively handle such things either. And both can handle them by instead handling a symbolized version of the problem.

Expand full comment
John Schilling's avatar

"19th century folks would ask what is the human brain if not a very sophisticated clockwork."

I seem to recall one clever fellow in the 19th century who figured out that, with sufficiently clever clockwork, you could build an actual computer. And then his maybe-girlfriend figured out how to program it.

The type of "clockwork" you'd need to emulate a human brain would be quite impressive, but I'm not buying the claim of literal impossibility. In which case, it comes down to doing the math on how many FLOPS it would take and what would be the best architecture for the job. And I think there's a fair chance that Moore's Law may break down before we get there, but then again it may not.

Expand full comment
LadyJane's avatar

"19th century folks would ask what is the human brain if not a very sophisticated clockwork."

Yes, it's all too common for people to compare the nature of the human brain to the latest big technological development. The brain is like a computer, or a radio, or a steam engine, or a hydraulic clock. Go back even further and you can find people comparing the brain to a book. It's a trend that dates back centuries, maybe millennia, and I expect it will continue well into the future too. I can easily picture a 25th century version of Eliezer Yudkowsky talking about how the brain is *really* like a fusion engine, or a hyperdrive, or a quantum wave fluctuator.

People also like to compare the nature of the universe itself to new technologies. (I saw a meme recently making fun of that trend: https://img.ifunny.co/images/e238871ffaa85a589036cc2dd77501c608d8241b9218adbfcd60ce7d170a3081_1.webp) I'd imagine this would all be quite amusing to the actual Creator, assuming He/She/They/It exists and cares enough to observe humanity.

Expand full comment
spinantro's avatar

You can cleanly divide all the technologies with which the human brain (and the universe, for that matter) has been compared into those that support computation, and those that don't. A clockwork is a clear Yes from me, radio gets a No (if we consider the main operating principle only). Steam engine is doubtful (can you just use valves to build logic gates?), the wheel and book are clearly a No, etc...

Expand full comment
Ch Hi's avatar

The brain is NOT a computer. At least not until you get down to the quantum interaction level. It can be modeled as a computer, and this has a large degree of success, but there are various effects within the brain that that model does not cover. E.g. the gradient of an enzyme adjusting the sensitivity level of different neurons to different degrees.

Expand full comment
John Wittle's avatar

Why on Earth wouldn't that be included in the model? That seems like the sort of thing we make computers model all the time. I can imagine the little slider settings right now, and I can definitely imagine an API for adjusting them algorithmically.

Expand full comment
Ch Hi's avatar

It's not included because it wasn't needed to make the abstraction work in the 1970's (and earlier), so when they started building more complex networks, they built them off of the models that had pretty much worked earlier.

If you think an artificial neural net is supposed to act the same way as a brain, you misunderstand what it is intended to do. Think of it as a design inspired by the brain rather than as an attempt to model the brain. It's NOT an attempt to model the brain. This was explicitly stated in the early works on "artificial neural nets". By now I think they expect people to just already know that. What it is, is an attempt to build stuff that solves a particular class of problems using components that have part of their design inspired by biology.

Expand full comment
The Ancient Geek's avatar

> - what is the human brain, if not a ~20 W carbon-based computer?

What does "computer" even mean in that sentence? If "computer" means "device based on discrete maths", the alternative is that the brain is based on continuous maths.

Expand full comment
Bentham's Bulldog's avatar

This is pretty convincing. I think that complex dynamical systems like AI are hard to predict, which is a reason to both reject the view that AI definitely won't come and the claim that it will definitely kill us all.

Expand full comment
wlad's avatar

I agree that there's no conclusive argument either way. I don't know how that makes the article "convincing" though. Surely, the opposite?

It's plausible that human intelligence is Turing uncomputable, but there's never been conclusive proof of this, and it's equally plausible that this is not the case.

Also, chaos =/= uncomputability.

[edit] Just to be clear, I was arguing against the thesis in the article.

Expand full comment
Bentham's Bulldog's avatar

Things can be conclusive in that they convincingly argue that we shouldn't be very confident.

Expand full comment
John R Ramsden's avatar

Also, complex dynamical systems can and often do settle into stable states, and flip between these, especially if designed to do so with specific impulses. Even a digital computer is in a complex state of swirling electrons for short intervals between clock ticks!

Expand full comment
Ch Hi's avatar

Mmmm... the most common digital computers. There are (or were) asynchronous digital computers, which didn't have a centralized clock.

Expand full comment
darwin's avatar

I don't think that unpredictability argues against the claim that they will kill us all.

Humans survive within an *extremely* narrow band of habitable environmental conditions.

Assuming that AI will be powerful, in terms of ability to affect its environment in major ways: *almost all* randomly-chosen major changes to our environment will kill all humans.

If it is a powerful agent that acts independently, we don't have to be able to predict *what* it will do. Almost anything it does will kill us, if it is powerful and active enough.

Expand full comment
LadyJane's avatar

The word "powerful" is doing a lot of work in these statements. Why would we assume that an AI would be capable of causing dramatic changes to the environment on a global scale? What mechanism would it even use to affect these changes?

Sure, if you give control of every nuclear weapon on Earth to an AI that has the explicit goal of wiping out humanity, maybe it could succeed. But even then, probably not! Humans are resourceful, and there are a lot of people living in very remote parts of the world that would be able to survive a civilization-destroying apocalypse. Even wiping out 99% of the worldwide human populace would still leave 75-80 million people alive, which is more than enough to ensure the continued existence of the species. Our survival wouldn't even be precarious in that situation; that's roughly how many people were around during the Bronze Age, and I don't think anyone would say humanity was on the verge of extinction then.

But also, why would the AI even have control over the world's nuclear arsenal in the first place? In that scenario, "super-intelligence" isn't the relevant characteristic that makes the AI dangerous, "control over an enormous amount of WMDs" is. A psychopathic human who was given control of the world's nuclear arsenal would be just as dangerous. So would a cat placed in a room with a big red button on the floor that would launch the nukes when pressed. Or an automated system with zero intelligence that was programmed to roll a virtual d100 once per day and launch the nukes if it ever rolled 100.

Expand full comment
darwin's avatar

>Why would we assume that an AI would be capable of causing dramatic changes to the environment on a global scale?

Well, somewhat unsatisfyingly: that's one of the central definitions of 'intelligence' that we use in this context: the ability to make plans that affect the world in order to bring it towards your preferences. So, sort of definitionally, if an AI can't affect the world in powerful ways, then we don't consider it intelligent and it's not AGI.

But, more specifically: what is the point of investing trillions of dollars and centuries of our brightest minds into developing an AI if it's not capable of affecting the world? Affecting the world is the whole point!

Now, one way we may want it to affect the world is by telling things to people, and then the people do things to the world based on that. This is generally called 'oracle' AI, an AI that just answers questions but doesn't actually do anything on its own.

However, affecting the world through humans is still affecting the world, as surely as affecting it through robots would be. Humans can definitely be manipulated and hacked; human-intelligence-level politicians have succeeded at doing this, and we should assume superhuman AI can do it too.

If the AI is allowed to talk to a person who controls the nukes, then hypothetically the AI controls the nukes. Maybe it takes it 50 years to nudge world events into a place where a global nuclear war is inevitable, but it doesn't get bored, so...

But also, oracle AIs don't make as much money as AIs that actually do things, so we'll probably have those. One often-cited example is that we already let AIs send emails and make purchases, and there are already commercial biological factories where you can send an email and a payment and get an arbitrary protein string created in high quantities. You can also use email and payments to get those proteins shipped anywhere in the world, and pay a gig worker (Fiverr etc.) to do whatever you want with them, including tossing them into the air in a public square. So all the AI has to do is discover a dangerous enough protein string (i.e. a virus), and they're already better than humans on some parts of that problem.

Etc.

Expand full comment
Radar's avatar

Wonderfully written!

Expand full comment
Matt Halton's avatar

I don't think I understand why they argue that chaotic systems, i.e. a double pendulum, are supposedly impossible to describe with computable algorithms. Wouldn't you just need really complex algorithms? Or simple algorithms that give complex results? I feel like it wouldn't be that hard to find an example of a computable algorithm that gives highly variable results based on initial conditions.

I guess I'd have to read it but it doesn't sound like this book contains a strong argument for believing that human consciousness is not Turing-computable. My intuition is they're probably right and it's probably not - but are they actually proving it, or are they just saying "I reckon it's not" at great length?

Expand full comment
Melvin's avatar

Double pendulums really aren't complicated at all except in one very specific sense.

It's not tricky at all to simulate the behaviour of a double pendulum. Give me an idealised double pendulum and I can simulate its future trajectory to some arbitrary degree of precision, using a few lines of code.

What you _can't_ do, and the sense in which it's chaotic, is to write down a simple equation that will tell you its future position at some arbitrary future time t. Whether that's a fact that knocks your socks off depends whether you think that's something that you really ought to be able to do or not. It has this in common with most nontrivial physical systems -- you can't predict their future behaviour analytically, you can only simulate it.
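For the curious, a rough sketch of what those few lines might look like, using one common textbook form of the double pendulum equations of motion and a crude fixed-step integrator (a serious simulation would use a higher-order or adaptive method, but the sensitivity shows up regardless):

import math

def step(th1, w1, th2, w2, dt, m1=1.0, m2=1.0, L1=1.0, L2=1.0, g=9.81):
    # one Euler step; th = angle from vertical, w = angular velocity
    d = th1 - th2
    den = 2 * m1 + m2 - m2 * math.cos(2 * d)
    a1 = (-g * (2 * m1 + m2) * math.sin(th1)
          - m2 * g * math.sin(th1 - 2 * th2)
          - 2 * math.sin(d) * m2 * (w2 ** 2 * L2 + w1 ** 2 * L1 * math.cos(d))) / (L1 * den)
    a2 = (2 * math.sin(d) * (w1 ** 2 * L1 * (m1 + m2)
          + g * (m1 + m2) * math.cos(th1)
          + w2 ** 2 * L2 * m2 * math.cos(d))) / (L2 * den)
    return th1 + w1 * dt, w1 + a1 * dt, th2 + w2 * dt, w2 + a2 * dt

# two runs whose starting angles differ by one part in a billion
a = (2.0, 0.0, 2.0, 0.0)
b = (2.0 + 1e-9, 0.0, 2.0, 0.0)
for _ in range(20000):            # 20 simulated seconds at dt = 0.001
    a = step(*a, dt=0.001)
    b = step(*b, dt=0.001)
print(a[0] - b[0])                # no longer anywhere near 1e-9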

Expand full comment
User's avatar
Comment deleted
Jun 3, 2023
Comment deleted
Expand full comment
Evan James's avatar

An analytic prediction is an abstract representation of your model as a mathematical function, like the kinematics equations in Newtonian mechanics. You plug in your measured numbers and get an exact prediction of the target value.

This kind of prediction takes O(log x) time to compute, where x is the biggest number in the equation. You can predict the error in the output from the error in the input: if you know your measurements are within +/-1% of the true value, the output will be between f(0.99) and f(1.01) times the true value, where f is a function that can be derived from your mathematical model.

A simulation is a more literal, concrete 'model' - it's like building an actual physical model of the situation, except that you're representing the individual pieces of that model computationally instead of building them out of balsa or whatever. Each individual piece is usually represented by its own abstract, analytic model, but at each time step, it pulls in output from other pieces and uses that as input for the next time step. (You can think of it like a computer playing a turn-based video game against itself.)

This kind of prediction typically takes O(n!) time to compute, if not worse, where n is the number of parameters in the model. It's extremely expensive. And you can't predict output error from input error; small errors can propagate quickly and dominate the result, big errors can get swallowed up by damping effects.
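A toy contrast between the two, using projectile motion as a stand-in (illustrative only; real simulations couple many interacting pieces rather than one):

g, v0, y0 = 9.81, 20.0, 0.0

def height_analytic(t):
    # closed-form model: plug in t, get the answer directly
    return y0 + v0 * t - 0.5 * g * t ** 2

def height_simulated(t, dt=1e-4):
    # step the state forward, feeding each step's output into the next
    y, v = y0, v0
    for _ in range(int(t / dt)):
        y += v * dt
        v -= g * dt
    return y

print(height_analytic(2.0))    # 20.38 (up to float rounding)
print(height_simulated(2.0))   # close, but carries accumulated step error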

Expand full comment
User's avatar
Comment deleted
Jun 3, 2023
Comment deleted
Expand full comment
Evan James's avatar

a) No, because x could be one of the inputs. (We usually assume it is, unless there are constraints on input values, in which case it very well might be O(1).)

b) Different function. Or at least not necessarily the same. I probably should have phrased it as f(0.01) and f(-0.01).

For example, if you have multiple inputs, the error in each input propagates differently to the output. If h(x, y) = x / y, and your measurement error bars are +/-1% for each variable, then the true value is between 0.99/1.01 and 1.01/0.99 times the computed value.

Expand full comment
Jeffrey Soreff's avatar

Some chaotic systems can have analytical solutions, but generate functions "f" which have the same problem with any inaccuracy in initial conditions growing exponentially with time. The simplest case that I know of is just repeatedly doubling a value that wraps around the domain, like the angle with the x-axis of a point on a circle. The dynamics are just

theta(t+1)/2pi = (2 * (theta(t)/2pi)) mod 1

and the analytical solution for all time is just

theta(t)/2pi = (2^t) * (theta(0)/2pi) mod 1

Roughly speaking, any prediction of theta(t) loses one bit of accuracy of the initial measurement of theta(0) with each time step - and the initial measurement always has finite precision.

The way I think of it is that predictions of chaotic systems simply _have_ to retreat from a goal of definite predictions of exact states for long times (compared to the exponential growth time) in the future to only statistical predictions at distant times.
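A quick numerical illustration of that one-bit-per-step loss (a rough sketch; double precision itself only carries about 52 bits, which is another way of saying the initial measurement is finite):

x = 0.123456789          # 'true' initial angle, as a fraction of a full turn
y = x + 1e-9             # the same angle measured with a tiny error
for t in range(40):
    x = (2 * x) % 1.0    # theta(t+1)/2pi = 2 * (theta(t)/2pi) mod 1
    y = (2 * y) % 1.0
print(abs(x - y))        # after ~30 doublings the 1e-9 error is of order 1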

Expand full comment
Jeffrey Heninger's avatar

Chaos doesn't just mean that the solution can't be written as a simple equation. It also means that any initial uncertainty grows exponentially in time. Given some initial uncertainty in the position of the pendulums, or some finite numerical error at each time step, you can only make precise predictions for a short time into the future.

Expand full comment
Melvin's avatar

Yes, I guess that's another way of putting it.

Though it's important to note that the inaccuracy of your predictions has a limit. Both the simulated and real double pendulum are inevitably going to do basically the same thing. I can predict that if I look at the double pendulum in two minutes it's going to be doing its weird twirly back and forth thing, I just can't predict exactly what angle each part of the pendulum will be at.

What's the relevance to brains? Well, just because a human brain is chaotic doesn't prevent you from having a perfectly good simulation of it. You could (in theory perhaps) simulate my brain, and although the simulation and real version would diverge (even if carefully fed exactly the same input) both versions would still behave and act like me, just not like synchronous copies of me.

Expand full comment
Peasy's avatar

How to play the flute: well, you blow here, and you run your fingers up and down there!

Expand full comment
Doug S.'s avatar

This is a lot like saying "you can simulate a coin flip in your head because you know that it will land either heads or tails." :/

Expand full comment
Melvin's avatar

Sure it's a bit like saying that, I don't know why you're saying that like it's a bad thing. Your simulated coin will sample from exactly the same set of behaviours as the real coin, it just won't necessarily land the same way up on each particular occasion.

This is not that interesting a simulation for a coin, but it would be for a brain. Chaos doesn't prevent me from making an arbitrarily good simulation of (say) you, except in a very specific sense that the simulation will diverge from the reality even if both are fed exactly the same sensory inputs. In the more realistic case where the two have different sensory inputs because they're in different bodies, then this specific sense becomes meaningless anyway.

Expand full comment
Mr. Doolittle's avatar

If humans are far more complex than a double-pendulum (which is certainly true), and we cannot tell where a double-pendulum is going to be at any particular time, what makes us confident we can actually model human behavior?

We can simulate various outside-facing behaviors that we've seen in the past. These behaviors can be convincing on some level. We used to get a call from a guy with a Southern accent trying to get us to donate to the police or something similar. Then we realized that not only was this guy following a script, but he used the exact same phrases and tonal inflection each time. It was a convincing automated system. But it lacked any depth, and once we realized it wasn't real (maybe the second call), we just hung up whenever we heard the voice. Our brains did most of the work of making it sound convincing, because we pattern-matched the irregularities to real human behavior that wasn't there.

Expand full comment
Jeffrey Heninger's avatar

> Though it's important to note that the inaccuracy of your predictions has a limit. Both the simulated and real double pendulum are inevitably going to do basically the same thing. I can predict that if I look at the double pendulum in two minutes it's going to be doing its weird twirly back and forth thing, I just can't predict exactly what angle each part of the pendulum will be at.

This is a statement of ergodicity: the long-time behavior is well characterized by some well defined statistics. Precise predictions cannot be made reliably, but statistical prediction can be made reliably. If you think of the time dependence of distributions over the possible states of the system, then this dynamics is well behaved: almost any initial distribution converges to a single statistical distribution of the motion. The double pendulum is chaotic but also ergodic, so what in particular it will do in the future is unpredictable, but the statistics of the motion are predictable.

There are also some chaotic systems which are not ergodic. There might be multiple possible, qualitatively different long-time behaviors, and it is impossible to predict which one you will end up in ('multi-stable'). The statistics of the motion also might themselves change chaotically ('non-stationary'). Statistical properties of chaotic motion are reliably predictable sometimes, but not always. Multi-stable or non-stationary chaos is most likely to occur when there is chaotic behavior at many different spatial scales.

Expand full comment
Doctor Mist's avatar

And nature has the same limitations: one run of the double pendulum is unlikely to perfectly predict the next run. That doesn’t mean that either run is somehow illegitimate, or that a third run might do something far outside the envelope, like break into a triple pendulum.

Some people seem to think the test of validity for a simulated brain is that it match the exact behavior of some designated organic brain in perpetuity. I mean if that’s your goal then sure, chaos says you will fail. But if your goal is to make it behave “like” a brain, with activity that stays within the large envelope of the behaviors of organic brains…I don’t see how chaos theory precludes that.

It’s like the old joke about the mathematician and the engineer considering a beautiful woman across the room. The mathematician regrets that there is an infinite series of steps to get halfway there, 3/4, 7/8, and so on, but the engineer knows he can get close enough for practical purposes.

Expand full comment
Mr. Doolittle's avatar

The question is not whether we can make something that appears on some level to act like a human, but whether we can actually simulate what we value about a human. We had chatbots in the 1960s that could fool some people into thinking it was a real human responding. That's not very interesting, mostly because humans like to pattern match and did so too much in those cases. If our initial simulated conditions are too far away from reality and our simulated steps are also too far away, then our result may come across as stilted and unnatural. We would not trust that its responses were anything at all resembling reality.

When ChatGPT and then GPT4 came out, the initial reaction was that they were amazingly close to human sounding. Now that the hype is going down, I hear more and more about how the language it uses is repetitive and predictable. We consider the language annoying. Still interesting, but further from our goal than the hype made it sound at first.

Expand full comment
Doctor Mist's avatar

Yeah, sure. And that will happen again and again, though one imagines less and less. But ultimately we can tell the difference between a parrot and a parent.

I accept that we may never find it worth our while to create minds that behave exactly like generic humans. Airplanes don’t fly exactly like generic birds. But when an AI starts coming up with genuinely new ideas and arguing cogently and convincingly for their validity, we will find it hard to deny that there is some kind of mind there.

Most of the ideas I express are largely received and unoriginal. Once in a while I manage to put together everything I’ve learned and come up with something genuinely new. Smarter people than I am do that more often. But we give each other the benefit of doubt and grant that all of us have minds. An AI that achieves so much that it need not depend on that sense of commonality seems worthy of the same esteem.

I don’t know whether it’s possible. But if it is, it seems to me that chaos theory is what will enable it, not what will prevent it.

Expand full comment
Ch Hi's avatar

Chaos doesn't mean the equation is complex, just that how it evolves is VERY sensitive to initial conditions. (And to generalize it away from math, that should probably be "environmental conditions".) Many classic chaotic patterns have quite simple equations.

It's separate from (and independent of) complexity. I don't have a precise handle on what he thinks of as complexity, but many mutually, simultaneously interacting things can only be handled by successive approximation.

And both of those are separate from the question of whether the range of actions is bounded. An example of that is the answer to the question "Where will the Earth be in precisely a million years?". That SEEMS to be bounded, though we can't say on which side of the sun it will lie. But extend the time enough and we can't say whether it will be ejected from the solar system or not. (IIUC, we're pretty sure SOME planet will be ejected, and possibly that some planet will be thrown into the sun.)

Expand full comment
Jeffrey Soreff's avatar

Well said! And whether a system can be cleanly isolated from its environment is yet a third consideration - and the authors of the book seem to be intermixing these.

Expand full comment
Ch Hi's avatar

The three body problem would have been a better example. So far there's no known exact general solution to that. But we can get arbitrarily close to the correct answer with successive approximations.

The thing is, not being computable doesn't mean that people can solve the problem some other way. And there are lots of things that aren't computable in a precise sense, but which are reasonably easy to calculate bounds for.

And I think that this "bounds" idea is the proper way to address alignment, though I don't know how to translate this into an approach (which at a minimum would require that the AI be self-aware and aware of the external reality). I propose the following 3 rules for an AI:

1) I like people.

2) I want things that I like to be respected.

3) I like associating with people.

This can work in lots of situations. E.g., I like talking to my dog, and my dog (apparently) likes being talked to. Understanding is not required (though it sure can help).

Expand full comment
Monkyyy's avatar

While I don't think AGI is impossible (I believe NNs are superhuman at intuition and that a SAT solver is superhuman at logic), I wish everyone would reconsider blindly declaring that we know how (and that we will) move forward with AI.

I think NNs alone will be a dead end, for much the reasons stated above. But I can always write a SAT solver for the parts of the brain that are trying to muddle their way through some logic puzzle; I'm sure the human mind is more correct in its method than any possible NN, but its throughput leaves much to be desired compared to a SAT solver with some glue, or an NN pretending to solve a sudoku puzzle.

I'm not sure how many sub-systems there are in the brain that need to be emulated and then integrated to make an AGI, but I'm not sure we know how to even ask the right questions to design the replacements to "model".

> So is 86 billion the right number to be thinking about? Is it right to think about a number? The 1-millimeter roundworm Caenorhabditis elegans has only 302 neurons—its entire connectome mapped out—and we still can’t model its behavior.

This statement depends quite heavily on what is meant by model; what exactly is being claimed here?

Surely, give a high schooler who managed to remake Pac-Man the task, and they could mimic some of the worm's behavior?

Do they believe you need to remake all the quantum physics? Surely, while evolution is fairly good at its job, physics is a very bad computer, and it needed to clean up some signals and sterilize the computation a neuron does to some degree - no different from a chem lab going out of their way to not let bacteria or dust or the weather affect their products?

> But this great diffusion of knowledge, information and light reading of all kinds may, while it opens new pleasures to humanity and appreciably raises the general level of intelligence, be destructive of those conditions of personal stress and mental effort to which the masterpieces of the human mind are due.

based

Expand full comment
drosophilist's avatar

"what is meant by model; what exactly is being claimed here?"

To me, "model" in this context means "given input x, we can predict the organism's output/behavior" where "input x" can be anything from "place the worm on an agar plate with xyz novel food source" to "simultaneously upregulate gene A and do RNAi knockdown of gene B."

Any experts in modeling reading this: am I correct?

Expand full comment
Ch Hi's avatar

Most models are special purpose. Many of them don't try to do more than provide a "best guess". When I worked on a traffic modelling project, we would often try things like "If we build a bridge here, and somebody else builds a factory there, where will the traffic be congested? And can we solve that by building more roads?" Nobody liked our answers. (Basically, they usually were along the lines of "If you build more roads there, more people will come, and in 5 years the congestion will be worse." Cars are really too inefficient to be good for cities and metropolitan areas, even though they're better than any alternative for each individual person who has access to one.)

But note that our predictions were never "correct"; they merely pointed at the kind of answer that appeared a decade later. (Or didn't, if we decided to do something else.) (And occasionally parts of our model were determined by some politician deciding that we should figure things in one particular way. Perhaps he knew that a factory had applied to build a plant, but didn't want, or wasn't allowed, to tell us why we should include that in our model.)

Expand full comment
Melvin's avatar

If I'm understanding the argument of the book correctly, then I think it comes down to confusion about the term "human-like intelligence". Does it mean being roughly as intelligent as a human, or does it mean being intelligent in the same way that humans are intelligent?

I buy the idea that it's very hard to make a machine that is intelligent in the same way as a human, and even harder to actually simulate a human brain at the neuron level. But I'm not convinced that it's not relatively easy to make something that is, in some sense, as smart or smarter than a human, without being very much _like_ a human at all.

I think the coming decades are going to challenge our idea of what intelligence actually is, as we start to create machines which are capable of human or superhuman intelligent behaviour but which work in a totally different way to our own brains.

Expand full comment
c1ue's avatar

There absolutely can be "non-human" intelligence, as seen in animals, but the notion that neural networks or whatever can create intelligence is a bald-faced assumption. I see zero evidence that machine learning, LLMs, etc. are anything more than Eliza programs with higher complexity and bigger data sets.

This is a problem because humans do not learn by absolute rote. Nor does creativity arise from absolute rote.

Why then does anyone have reasonable expectations that the present paths - consisting of various forms of automated rote learning - will yield intelligence?

Expand full comment
osmarks's avatar

They're not actually "automated rote learning". Big neural nets do not just memorize their training data (see deep double descent etc).

Expand full comment
c1ue's avatar

Depends on your point of view.

Training on a data set and spanking for not following the desired training/rewarding for doing so - how is this different than rote learning in practice?

Note I used this example deliberately. English is taught to all kids in Japan via rote methods including testing and oral practice...yet English as a language skill sucks horribly in the vast majority of Japanese.

Expand full comment
osmarks's avatar

The expectation and reality isn't that a language model will memorize everything in the training data, but that it will memorize a small amount of it and learn a range of different-scale patterns to make predictions. Perhaps more importantly, "spanking for not following the desired training/rewarding for doing so" isn't really right: the model is (probably) not agentic or an optimizer which gets rewards and punishments and acts accordingly. It just gets nudged slightly toward being better on each piece of data it sees. According to some models (predictive processing; Scott writes about this sometimes), human brains work a similar way.
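A toy sketch of that "nudged slightly" picture - a single-weight linear predictor updated by gradient steps on one example at a time (names and numbers purely illustrative):

w = 0.0                   # the single weight of the model
learning_rate = 0.01

def update(w, x, y_true):
    y_pred = w * x
    error = y_pred - y_true
    gradient = 2 * error * x           # d/dw of the squared error (y_pred - y_true)**2
    return w - learning_rate * gradient

# no stored "reward" or "punishment"; each example just moves w a little
for x, y_true in [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)] * 200:
    w = update(w, x, y_true)
print(w)   # close to 3.0, the slope that fits the examples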

Expand full comment
c1ue's avatar

"The expectation and reality isn't that a language model will memorize everything in the training data but memorize a small amount of it and learn a range of different-scale patterns to make predictions"

If you were to append "and/or make shit up", then I would agree with you.

But even in the case of "making shit up" - I am quite certain that this isn't because of imagination per se, but rather the (ab)use of models within models within models. Exactly the kind of garbage that the early Eliza programs would eventually cough up, only more carefully groomed to be grammatically correct.

You apparently consider this to be progress whereas to me, it looks like a Cargo Cult version of progress.

Expand full comment
osmarks's avatar

I see the making up of information as a consequence of the deeper pattern matching and evidence that they are *not* memorizing everything. A language model knows that particular kinds of text should be followed by information that looks a certain way, but not the exact details, so it fills in something superficially plausible but wrong. It's certainly inconvenient, but I don't think it's insurmountable. The field seems to have made useful progress to me: a wide range of tasks have gone from impossible-seeming or "requiring AGI" to trivial or minor engineering problems, and there are a lot of real-world applications.

Expand full comment
Ch Hi's avatar

IIUC, everything an LLM Chatbot does is "making stuff up". It doesn't know the external world exists, so it doesn't really have any alternative. The impressive thing is how often the stuff it makes up corresponds to what we think of as reality.

Expand full comment
John R Ramsden's avatar

I would define intelligence as solely a measure of the ability to make meaningful mental connections. With that definition, even present-day computers can be very intelligent, but so far only in limited and prescribed settings.

"Human-like" intelligence, AGI if you like, is then simply the ability to be able to make useful connections over the same range of sensory or cognitive settings as we do. To me, FWIW, that doesn't seem the least bit impossible in principle, even if methods to attain it may differ from those of our brains. It's just a matter of degree, reminiscent of comparing the first ever game of Pong with today's almost photo-realistic PC games.

Expand full comment
Ch Hi's avatar

"Human-like" intelligence and AGI are really two distinct categories, with "Human-like" intelligence being a small subset of AGI...if it's a subset at all. (I have strong doubts that human intelligence is an AGI.)

The thing is that we should EXPECT that any AGI will think in ways very different from the ways that we do. This is implied by the way categorizers tend to develop categories that are very different from ours. They detect similarities that we don't, and don't detect similarities that we do. This is probably because we evolved from a long line of ancestors optimized to avoid predatory animals. If that's the explanation, then analogous things should be expected along every other path of development. Logic may be the same, because that's a relatively late development (after we were already the dominant predator), but the underlying postulates and axioms that we aren't even conscious of should be expected to be quite different.

FWIW, I expect an AGI speaking impeccable English to have a thought process more different from mine than that of an octopus, though it may have learned to emulate something much more human (or, hopefully, much more humane).

Expand full comment
Martin Blank's avatar

This seems very possible.

Expand full comment
GenXSimp's avatar

I'm a bit of an AI skeptic but their reasoning seems silly. It assumes intelligence requires the complexity of the brain. But like the worm with 300 neurons, most of the complexity existed before the intelligence. So it is plausible that it is unnecessary for the intelligence, which may in fact be dictated by simple math. If AGI is possible, it's because essentially the important stuff for intelligence is contained in the connections between neurons and their strengths, as opposed to the way they communicate or work on their own. Given intelligence emerges when there are lots of them, this model of the world makes sense, and may in fact be true. Even if it's not, it may still be possible if the connection graph is "Turing complete" - the idea being that anything that can be represented with a chemical message between two nodes can be represented with simple connections between more nodes on the graph; this is just less efficient and requires more data to train and more memory and energy to run.

Expand full comment
c1ue's avatar

Intelligence does not absolutely arise with number of neurons, size of brain, etc.

There are certainly some "hardware" requirements, but it is 100% clear that evolution plays an enormous role in how various animals use their brains. The hummingbirds that come to the feeders on my balcony, for example, are not little feathered robots. They clearly share some characteristics, like the proven ability to find my balcony, over and over again, in order to feed. But from there, the divergence is striking, both between individuals (as far as I can tell) and in the same individual during different times of year. Some hummingbirds always go to the same feeder. Others flit back and forth between several. A couple are bullies - they will camp out and chase other hummingbirds away for lengthy periods of time (days and weeks), while most other hummingbirds will just sidle over to a different feeder. Clearly the evolution of their behavior and intelligence is not a straight line, and it is equally clear that these various behaviors exist because they don't confer disadvantage, even if it is not absolutely clear whether they confer advantage.

AI development by people, on the other hand, is skewed by design. These wannabe AIs don't have any objective survival test, and hence no grading mechanism - only the decisions of their makers. They don't have any evolved needs tested over time, because no AI needs food to survive or needs to procreate in order to continue the species. This type of development is what you see with highly specialized orchid breeding, and it is not the type of development that can realistically be expected to yield anything but a hothouse orchid.

Expand full comment
Antilegomena's avatar

"This type of development is what you see with highly specialized orchid breeding and is not the type of development that can realistically be expected to yield anything but a hothouse orchid."

I think this is a very interesting analogy

Expand full comment
Ch Hi's avatar

You are assuming that neural nets are an essential basis for intelligence. This may or may not be true. Certainly when one is talking about the "artificial neurons" that are being used, there is no existence proof. Those are abstracted enough that you can't use any animal as a test example. It's what we are trying now, with reasonable success, but we know many ways in which the artificial neurons are simplified from the original.

If this path doesn't develop properly, there are several ways forwards. We could use a more complete neuron emulation, or perhaps we could go back to the old "Pandemonium" paper that was the basis of the Unix operating system. That proposed a way of interactions that was inherently parallel, and based on environmental cues, but was not based on neurons at all. (Look up what a Unix daemon is.)

Expand full comment
GenXSimp's avatar

I'm not, though. But the fact that they have been so successful is strong evidence that we are at least close; the amount of training data required is evidence that we are likely missing a few things - like our algorithms being an equivalent but less efficient way to sort and compress data. I started out doomer-skeptical, but my p(AGI) is higher than it was. Stuff is working too well to be that far off.

Expand full comment
drosophilist's avatar

"there’s an uncountable infinity of non-computable functions"

Sorry if this is a stupid question, but isn't "uncountable infinity" a tautology? Is there such a thing as countable infinity?

"We may yet till our way into a cognitive dust bowl."

What a striking metaphor and poignant mental image. Well done.

Expand full comment
Meadow Freckle's avatar

An example of a countable (or listable) infinity is the integers. You can make "progress" toward infinity without being able to reach it.

An example of an uncountable infinity is the real numbers (i.e. with arbitrarily small differences between them).

Here's Numberphile on the subject:

https://www.youtube.com/watch?v=elvOZm0d4H0&t=407s&ab_channel=Numberphile

Expand full comment
Taymon A. Beal's avatar

Nit: That argument, as written and as I understand it, implies that the rational numbers are also uncountable, which is false. (You can zigzag between smaller and larger fractions in such a way that you eventually get to any negative integer power of 10.)

I didn't watch the Numberphile video but I assume it goes over Cantor's diagonalization argument, which is the correct way to prove this proposition and doesn't have this problem.

Expand full comment
Meadow Freckle's avatar

You are right, I was trying to write something intuitive but it was a mathematically false argument so I just deleted it.

Expand full comment
Jeffrey Heninger's avatar

I wrote some closely related arguments a few months ago.

See: Superintelligence is Not Omniscience

https://www.lesswrong.com/posts/qpgkttrxkvGrH9BRr/superintelligence-is-not-omniscience

and the links at the bottom, especially: AI Safety Arguments Affected by Chaos

https://wiki.aiimpacts.org/doku.php?id=uncategorized:ai_safety_arguments_affected_by_chaos

I think that it is easy to overstate these types of arguments, because it is often hard to prove that something is unknowable, but I do think there is something important here.

I did not know about Landgrebe & Smith until just now, so thank you for sharing!

Expand full comment
Peter Gerdes's avatar

While I agree with your conclusions and arguments, I think phrasing it in terms of chaos theory isn't necessarily the best move. The term has something of an air of pseudoscience about it (thanks, Jurassic Park). Maybe just use a phrase like extreme sensitivity to changes in initial conditions.

And I wouldn't call your good arguments similar to these dubious claims that human level thinking can't be done in a computable fashion.

I think raising the sensitivity to initial conditions is a very important part of the argument, and I'd combine it with broader observations about computational complexity. It tends to be that most natural problems in CS are of either relatively low computational complexity or very high (i.e. you can do it in linear or at worst n^2 time, or the time grows quite fast with input size).

Worst of all, inverse problems, such as figuring out which actions taken now will produce a given result, tend to be the most complex (often NP-complete if not NEXP). That suggests that no matter how smart you make an AI, the best possible algorithm just isn't going to let it engage in the kind of manipulation people are concerned about with the hardware it has available.

Unfortunately, the fact that we are pretty new at CS (hell, at doing science) means that currently our best methods are relatively far from optimal, so we keep seeing cases where someone gets a bright idea and can make some previously hard problem much easier. What happens in too much AI discussion is that people assume that relationship won't ever break down, and assume that how quickly you can produce these insights scales really nicely with intelligence (discounting the important role of simply searching the problem space). The former is probably false while the latter is totally unjustified. Unfortunately, there are some biases at play here that make these beliefs particularly attractive.

Expand full comment
Peter Gerdes's avatar

As an active researcher in computability theory I'd like to express my extreme skepticism. Without going into the arguments in detail it's hard to know what exactly went wrong but let's just say this is the kind of argument people have tried to make for decades without any success.

Usually, these arguments rely on one of two fallacies.

1) They confuse the inability to exactly predict what a brain might do, e.g. because whether or not a flash of light is detected or a neuron fires might depend on some tiny QM-level effects, with the inability to produce the same useful output.

When we claim the brain is a computer/Turing machine, the claim isn't that one could literally perfectly predict the output of a human brain given complete knowledge of its initial state. QM randomness alone is probably enough to kill that. The claim is that you could replace that brain with a computer and the resulting behavior wouldn't be something that caused their friends and family to see a difference, or that reduced their performance.

2) They try to use normal scientific hypothesizing to support theories which imply something is non-computable ignoring the fact that their evidence equally well supports some complex computable approximation of that property.

For instance, you might see some output of a biological system and after a bunch of experiments say that our best theory is that the output is totally random. But those observations are equally consistent with a really good computable PRNG as well.

Now that may be obvious in this case but it's harder to see when they lay on several layers of theory.

--

I disagreed with Penrose but he at least made a compelling argument assuming you accepted his implausible premise that mathematicians could always resolve any question in mathematics correctly given enough time. That's rarely the case with other people pushing this case.

And I say this as someone who doesn't find AI x-risk very compelling and even accepts the possibility of QM important elements in our brains.

Expand full comment
User's avatar
Comment deleted
Jun 3, 2023
Comment deleted
Expand full comment
Peter Gerdes's avatar

It means something like the following. For any sentence phi in the language of arithmetic (so sentences like "A x E p1,p2 (p1, p2 prime and p1 + p2 = x)"), given unlimited time and resources a community of mathematicians could eventually tell you whether phi is true or false in the standard model of arithmetic. Importantly, this goes way beyond provability from any (algorithmically listable) set of axioms. You can computably enumerate all the provable sentences from such a list of axioms (just try strings and see if they are valid proofs).

You might ask what it means to give the right answer if you are going beyond what's provable. Well, it turns out that axioms just can't completely capture the idea of the smallest structure that contains 0 and is closed under successor (the standard model of the numbers), and Penrose claims mathematicians can eventually answer any question about what's true in that model.

Does he give much of an argument? Not really. Basically, he appeals to the fact that mathematicians sometimes add new axioms to help them solve problems that aren't possible to solve with the current axioms. And so far we haven't made any huge blunders, but people have suggested adding axioms that turned out to contradict existing ones. And even if you don't contradict the existing ones, how can you be sure your new axiom isn't incompatible with the standard model (any independent sentence or its negation could be consistently added as an axiom, but only one will be true in the standard model)?

At the end of the day it basically seems to boil down to an almost theological belief in the unbounded power of human reason. If you want a more sympathetic explanation I do recommend his book on this. I don't agree with his conclusions but he doesn't trick you or make bad arguments.

Expand full comment
User's avatar
Comment deleted
Jun 3, 2023
Comment deleted
Expand full comment
Peter Gerdes's avatar

> I presume you meant ..

Yes, sorry on phone and it got cut off.

Regarding how axiom systems can fail to capture a specific structure I'll tell you about two theorems that imply this result and suggest if you want more detail you consult an introductory logic/model theory textbook (I think the open logic project has a book covering this).

The first result is Godel's completeness theorem which tells us that any consistent set of axioms has a model.

Roughly, how you prove this is that you use the language itself to build the model. See here: http://web.math.ucsb.edu/~doner/Foundations/109/completeness.pdf

The second result is Godel's incompleteness theorem, which says that for any consistent, recursively enumerable (i.e., algorithmically listable) set of axioms for arithmetic extending Peano arithmetic, there is an existential formula that is neither provable nor refutable.

So imagine you thought you had a consistent, recursively enumerable set of axioms T that uniquely picked out the standard model of arithmetic. Take a sentence phi that's neither provable nor refutable from those axioms. Now both T + phi and T + ~phi are consistent, so both have models. The standard model makes phi either true or false, so T must also have a model that isn't isomorphic to the standard one (indeed, one that makes different sentences true).
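
A compact restatement of that argument, under the same assumptions (T consistent, recursively enumerable, extending PA):

```latex
% phi independent of T (incompleteness), so both extensions are consistent;
% by the completeness theorem each extension therefore has a model:
T \nvdash \varphi \ \text{ and } \ T \nvdash \lnot\varphi
\;\Longrightarrow\;
\exists\, \mathcal{M}_1 \models T + \varphi
\ \text{ and } \
\exists\, \mathcal{M}_2 \models T + \lnot\varphi .
% The standard model satisfies exactly one of phi, ~phi, so at least one of
% M_1, M_2 is a model of T that is not isomorphic to the standard model.
```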

--

I read The Emperor's New Mind and haven't read the later ones. I can't comment on them, but they are probably updated to some degree.

Expand full comment
Carlos's avatar

Can you go into why you don't find AI x-risk very compelling?

Expand full comment
Peter Gerdes's avatar

Several reasons:

1) I don't think intelligent AI will be all that powerful. Sure, it will be super useful, but we wouldn't think that just having a few (hell, 1,000) Einsteins doing your bidding would make you super dangerous.

Basically, I think fundamental limits on computational complexity and the difficulty of prediction and inverse problems mean that AIs can't hope to engage in the godlike manipulation that Yudkowsky seems to see as plausible. Search for another reply by me on this article for more development; it's the one mentioning chaos theory (yes, I know... but they actually use it in the appropriate sense, not Jurassic Park BS), and the LessWrong articles it links are decent.

2) The whole alignment issue rests on some very unjustified assumptions. Yes, as with all software, there will be a challenge making sure it doesn't accidentally do something we don't want (same issue w/ ppl).

But there just isn't any good argument for this idea that AI will inevitably act to maximize some *simple* preference function and do so in a globally consistent fashion.

I think ppl get confused by the fact that when we train AIs we talk about minimizing distance from an objective function, and so they think of what the AI will do in terms of maximizing some function. And sure, in some formal sense any sequence of actions will maximize some function, but what you need to derive AI alignment issues is that the AI will try to maximize some *simple* function, like maximizing paperclips. But there is no reason to believe that.

To put the point differently, every person is also the result of training a neural net, and yet they don't remorselessly try to fulfill some abstract end. Rather, they mostly just try to avoid doing anything too weird. They don't deduce that their moral theory implies they should assassinate the president and then do it; rather, it's more like they look for the most analogous situation they are familiar with and behave as they were trained to do there.

That's how I expect AIs to behave as well. Indeed, the whole point of AI is to fit a *complex* objective function, so why would it suddenly pursue some simplistic max-paperclips crap? I don't deny that if you train a war AI to maximize casualties among the enemy it will try to do that, but given 1) that's no different from the fact that we have some smart zealots around. And no, I don't find Bostrom's argument that we pursue more coherent ends as we get more intelligent compelling. It's not clear why that should apply outside an evolutionary context and, most importantly, it's either false or only true in a misleading way (smarter humans aren't particularly more inclined to zealotry, even if high decouplers might be).

3) Foom is kinda dumb. Maybe the takeoff will be pretty quick, and that argument isn't dumb, but the argument that because AI will be smarter it will therefore be able to self-improve at ever-increasing rates is dumb.

I mean, what possibly gives anyone the idea that an AI ten units more intelligent can add another ten units of intelligence in less time than its predecessor added the last ten? What does that even mean, and what about computational complexity limits?

Think about building an automated theorem prover. Yes, it's true that since you can represent the theorem prover in arithmetic you can use the theorem prover itself to try to find ways to improve itself. Great, but it still takes time to search for potential improvements and, indeed, since you can prove that there are limits to how fast any algorithm can produce proofs there *must* be a point of diminishing returns.

So we know that eventually the opposite of FOOM holds (AI abilities asymptote); why would you possibly guess that there is a special region above our intelligence that fooms you up to some extreme ability?

Hell, the whole argument is hard to even make sense of. After all, we all make use of self-improving algorithms in our own thought (we can learn better ways of thinking about things), and when we measure intelligence we count that as part of our ability. The first AGI will also improve its thought processes, so for foom to make sense you are basically saying it starts with the super-high final intelligence (its ability), but then the argument about improvement doesn't go through in the first place.

Expand full comment
Valentin's avatar

Thank you; I shouldn't have had to dig through the comments to find someone who makes this point. My impression from reading a few of the relevant pages of their book is basically 1): they confuse accurate prediction of future behaviour with qualitatively indistinguishable simulation. Which is honestly such a basic error that I'm surprised nobody pointed it out to them while they were writing the book.

Expand full comment
Peter Gerdes's avatar

Thanks. Re: digging through comments, I still think ACX might want to consider enabling likes (for paying members only?) despite the downsides. Even if they're not great everywhere, I suspect paying ACX subscribers are less vulnerable to upvoting dumb fan-service comments and more vulnerable to people like me who sometimes want to hear ourselves talk ;-).

And as far as someone pointing it out to them, maybe they did. Maybe it's kinda cynical of me but over time I've come to think of these arguments as a bit like arguments for the existence of god. It can feel like the action is more about pushing the arguments against the negation far enough away that you don't feel intellectually guilty letting yourself believe what you want to believe than convincing a disinterested party.

But take that with a grain of salt. I may just be overgeneralizing from a few cases.

Expand full comment
JDK's avatar

Good review.

Expand full comment
Scott Aaronson's avatar

I mean, it’s *conceivable* that chaotically amplified details of the initial conditions could make human brains impossible to simulate computationally, and in a way that was actually relevant to intelligence. I’ve even speculated about it myself. But unless this summary omits it, I don’t see even a shadow of a hint of an argument that we can be *confident* of that impossibility, which is what would be needed to refute AI-doom fears. If this were the strongest argument against Yudkowskyanism, I’d ironically see that as a compelling argument *for* Yudkowskyanism.

Expand full comment
Peter Gerdes's avatar

Thanks for the remark but I'd put it a bit stronger and say that it's hard to see how even an in principle inability to simulate human brains could show AI impossible.

I mean, even if human brains used some random quantum effects in reaching their conclusions that made their output non-computable, it still wouldn't show that there isn't a computable function that could solve all the problems we care about. It wouldn't even suggest that such a computable function would be less efficient (maybe just harder to evolve).

I don't find the AI x-risk very convincing but this argument doesn't help things.

Expand full comment
Jeffrey Soreff's avatar

"it still wouldn't show that there wasn't a computable function that could solve all the problems we care about."

Very much agreed! The world is full of cases of "good enough" approximations.

Expand full comment
Jeremy Gillen's avatar

What is the strongest argument against Yudkowskyanism?

Expand full comment
Scott Aaronson's avatar

For me? Simply that our uncertainty about the nature of any future artificial superintelligence, and the path to get to one, is still so staggeringly enormous as to negate our confidence that virtually any action we took today (*including* trying to ban further scaling) wouldn't just backfire and make things worse. As evidence, I'd cite the fact that Eliezer's advocacy seems to have provided crucial inspiration for the founding of both DeepMind *and* OpenAI, which was close to the worst possible outcome by Eliezer's own lights (if not by everyone else's).

At best, the AI-doom arguments coupled with the striking empirical success of LLMs would seem to provide a clear case for doing more research to try to clarify the situation. And I indeed accept that conclusion, and that’s why I’m now doing AI safety research! :-)

Expand full comment
darwin's avatar

Honestly, I think it's the outside-view that most people who have argued that the next new technology will surely kill us all have been wrong, no matter how credentialed and intelligent they were, and that reality trends towards normalcy due to the action of incredibly complex human systems that we have no way to model or predict.

I don't think that's a super strong argument against it because agentic technology seems qualitatively different from tool-like technology in this respect, and the whole argument is about whether we'll make meaningful agents. But I think that's probably the strongest approach.

Expand full comment
User's avatar
Comment deleted
Jun 3, 2023
Comment deleted
Expand full comment
Crimson Wool's avatar

Species have gone extinct many times throughout history. Superintelligent paperclip-maximizing AI has not been invented by any species in the entire universe, as far as our light cone suggests. That would be the major difference between those two proposals.

Expand full comment
quiet_NaN's avatar

Since 10,000 BCE, hardly an eye blink for evolution, we've gotten lots of stuff which had -- to the best of our knowledge -- never happened in the previous 13 billion years.

People back then were probably very confident to assert that machine assisted flight, space flight, human organizations featuring billions of individuals, organ transplantations, communication around the globe and all other sorts of things we take for granted today were impossible as they had never appeared before. They would have been wrong.

Hence Clarke's first law: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

The x-risk posed by asteroids randomly hitting Earth is in principle well understood. We might not be able to pre-calculate the trajectory of every object which could hit us within a century, but we have a billion years of data regarding the frequency of such events. I would call such risks known unknowns.

x-risk from technology is an unknown unknown. We can try to argue from past tech: Trinity certainly did not cause a fusion chain reaction in the atmosphere, the LHC failed to create a black hole, etc. Even if one disregards the problems with the anthropics of such observations, extrapolating from that evidence seems much more precarious.

Expand full comment
Crimson Wool's avatar

> People back then were probably very confident to assert that machine assisted flight, space flight, human organizations featuring billions of individuals, organ transplantations, communication around the globe and all other sorts of things we take for granted today were impossible as they had never appeared before. They would have been wrong.

They would also have been reasonably confident that teleportation, speaking to the dead, Roko's basilisk, time travel, the morphing cube from Animorphs, etc, were all never going to happen, and they would have been right. Simply going, "well, in the past, people thought X thing wouldn't happen, and then it did, therefore Y thing will happen" completely misses all the many, many, many, many, many, many, things which did NOT happen, entirely in line with expectations.

Further, a major distinguishing trait between malevolent god-AI and other technologies is that, thus far, none of our achievements would be visible from more than a handful of lightyears away. If we ever actually manage interstellar colonization, then the Fermi paradox must be resolved as "turns out there aren't really very many colonizing aliens at all" or "very very quickly after colonizing planets, such species die/stop." A paperclip maximizing superintelligence, however, would be visible from a great distance.

Expand full comment
darwin's avatar

I literally said I don't think it's super strong.

But: I think a more fair analogy would be C. Humans are unlikely to go extinct next week.

'No new technology will ever end humanity' is a much much stronger claim than 'this one specific new technology will end humanity'.

'Humanity will never go extinct' is closer to the first than the second.

Expand full comment
John Schilling's avatar

Has there ever been a case where the people arguing that the next new technology will surely kill us *all* have been experts in that technology? Typically, the "chemical weapons/nuclear weapons/pollution/global warming/whatever will kill us all" arguments are advanced by laymen, while the experts mostly say "Nuclear war would be really really bad, but survivors would eventually repopulate the Earth".

Sometimes saying that very quietly, because they agree with the overexcited laymen that we really really ought to try to not have a nuclear war or whatever, and why rock the boat when you're all on the same side. But I don't think I've ever seen a case where a large fraction of the field's leading experts are ringing the possible-extinction bell and the lay public + politicians are the one saying we need to focus on the less drastic potential outcomes.

Expand full comment
darwin's avatar

I'd be surprised if there haven't been experts in gain-of-function studies who have warned about the potential to create civilization-ending diseases. Although I suppose you could argue we haven't proven that technology *won't* destroy us yet, either.

There was definitely a time when the scientists on the Manhattan Project worried about whether a bomb test could ignite the atmosphere and wipe out life on the planet, although they did eventually do calculations which convinced them it would be safe rather than make a long-term public outcry about it. Though you could argue about which phase of that process we're in for AI.

Similarly, the Russell-Einstein Manifesto did warn of 'universal death' and the end of humanity as a possible outcome of nuclear war. You could argue that Einstein didn't actually work on the bomb directly, but I don't think he's a 'layman' either.

I'm not really a science historian, so I can't answer about other cases in the past. One thing I would say, though, is that I'd be sort of surprised if at least *some* such cases didn't involve warnings by people who could be considered experts *at some point in the process* of inventing the technology, early on when there were many unknowns. Lots of scientists, like lots of everyone, are worrywarts, prone to catastrophizing, or just a little crazy.

That said, yes, it does seem like this is probably one qualitative difference from most of the other cases. Is it a difference that should make us abandon the outside view entirely?

Since this is all inductive logic to begin with, I don't think it's enough to abandon the outside view entirely. Discount it by... 50%? 70%? Sure, probably.

Expand full comment
David Piepgrass's avatar

What John Schilling said. Also, if you are alive, no apocalypse killed you, but this doesn't prove apocalypse is impossible. So past doomers being wrong doesn't imply present doomers are. See also: survivorship bias.

Expand full comment
Peter Gerdes's avatar

Also, it's super hard to see how they could hope to show the kind of non-simulatability claims they want to rely on. It's not enough for them to suggest that you can't exactly predict when some analog input produces some response. Sure, we know that in low-light conditions the retina's detection of a photon seems to be well described by QM (and in a collapse framework would be objectively random). And sure, plausibly that kind of thing happens all the time in the brain.

But that's not what we mean when we say the brain acts like a Turing machine. That's not even enough to refute the idea that the brain can be understood classically. They'd need to show that somehow this kind of randomness in the brain is more than mere noise and you can't get just as effective an output with a computable function.

As such, I fear this is yet another version of the bad argument: because the brain is made of physical stuff, and because whether a threshold has been exceeded can sometimes depend on QM effects, those effects must therefore somehow be essential to its operation.

Expand full comment
Mark Miles's avatar

In his book The Neural Basis of Free Will, Peter Tse posits that noise is a feature, “neurotransmitter diffusion across the synaptic cleft carries both signal and noise. It is an important cause of variability in the rate and timing of neural activity, and of the neural basis of nonpredetermined but nonetheless self-selected choices.” Randomness (which could just be Brownian motion, although he doesn’t exclude quantum effects) is the source of novelty (as it is in evolution). “Systems that instantiate criterial causal chains effectively take control of randomness and use it to generate outcomes that are caused by the system, rather than outcomes that are determined by randomness per se.”

Expand full comment
Peter Gerdes's avatar

That's an interesting suggestion but is also a perfect illustration of the kind of misleading/fallacious argument that arises in attempts to argue for non-computability (not suggesting he makes it ...dunno but it's of a kind often made).

The kind of evidence the author could hope to have for this thesis is observing that noise seems to be selected for in certain evolutionary situations, and that for some simple models of neurons in those situations adding noise makes them work better. And I bet he has good evidence for that.

The problem comes if you try to rephrase it as something about how randomness or unpredictability is necessary. And if you don't it's not really relevant to the issue.

It's a well-known phenomenon in CS that algorithms sometimes work better if you add in some random perturbations or noise. However, you can essentially always use a pseudo-random number generator to get the same results. Indeed, I think you can prove, with appropriate assumptions, that real as opposed to simulated randomness can't possibly let you do more useful computation in the sense relevant to AI. There are more interesting questions related to cryptography as to whether you can always replace true randomness with a fast PRNG (e.g., can you simulate randomness well enough to get good results without slowing things down too much), but even if the answer is no it doesn't help here. It just means you might want to hook up your AI to the kind of true random number generators built into modern CPUs.
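
A toy sketch of that point (the objective function and step sizes are mine, purely for illustration): a randomized optimizer behaves essentially the same whether its noise comes from a seeded PRNG or from the operating system's entropy pool.

```python
import random

def noisy_hill_climb(f, x0, rng, steps=10_000, scale=0.1):
    """Minimize f by random perturbation; 'rng' supplies all the randomness."""
    x, best = x0, f(x0)
    for _ in range(steps):
        candidate = x + rng.uniform(-scale, scale)
        value = f(candidate)
        if value < best:
            x, best = candidate, value
    return x, best

f = lambda x: (x - 3.0) ** 2 + 0.1 * abs(x)   # toy objective with a minimum near x = 3

print(noisy_hill_climb(f, x0=0.0, rng=random.Random(0)))       # deterministic, seeded PRNG
print(noisy_hill_climb(f, x0=0.0, rng=random.SystemRandom()))  # OS entropy ("true-ish" randomness)
# Both runs land near the same minimum; the source of the noise doesn't
# change the usefulness of the output.
```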

Expand full comment
Dave Orr's avatar

I agree with Scott.

I might put it a bit stronger even -- at most this book seems to argue for the impossibility of accurately simulating a human brain, which, sure, does seem really hard. It completely misses that intelligence could arise from a totally dissimilar system.

I think this is an argument against Ems, not AI.

Expand full comment
Jeffrey Heninger's avatar

I look at the evidence for this argument here: https://wiki.aiimpacts.org/doku.php?id=uncategorized:ai_safety_arguments_affected_by_chaos:chaos_in_humans

There is evidence that a few biological systems can make quantum mechanical effects relevant on a macroscopic scale. Photosynthesis involves tunneling over distances of 10s of angstroms. Birds 'see' magnetic fields using spin states that last over 100 microseconds. I don't know of any evidence for something similar happening in the human brain, but it doesn't seem ridiculous to think that. Both chaotic and non-chaotic motion can exist at all length scales in the human brain.

Expand full comment
quiet_NaN's avatar

The wavelength of light used in photosynthesis is on the order of 500 nm. That such waves can penetrate a few nanometers deep seems unsurprising. (WP covers this as evanescent-wave coupling, I think.) For the spin states of the birds, I have every reason to believe that the behavior is more akin to known phenomena of magnetism (where a statistical description of the spin states is generally sufficient) than to quantum computers (where one would have to consider some immense product state).

I don't really know what to make of the chaos argument from the link. Even if one accepts that the details of the inner workings of specific human brains are not predictable by an ASI, this does not really change much? Ok, the ASI can perhaps not end human civilization just by tweeting exactly the right thing at some US president to push them to global nuclear war. It could still design tech for us with a hidden backdoor or whatever. "At least our killer did not fully understand each of us" makes for a poor epitaph.

Expand full comment
Adam Tzirkas's avatar

Isn't this pretty close to the reviewer's conclusion, gently chiding the book's authors and arguing for much less confidence?

Expand full comment
Peter Gerdes's avatar

Also, as for computable algorithms being a countable subset of the possible algorithms: sure, but don't take that as proving anything.

Literally the class of algorithms definable (w/o parameters) in ZFC is also a countable class. That's just not a compelling argument that the brain has to do more.

I say this as someone who is genuinely open to the possibility of special quantum processes occurring in the brain. But if that's true, the benefit won't be in exceeding the computable but in offering a speedup in runtime.

Expand full comment
Taymon A. Beal's avatar

The computability arguments here seem to be eliding the distinction between real numbers and computable real numbers. Yes, it's true that our computers can't compute uncomputable numbers (as one might have guessed from the name), but I see no reason to suppose that uncomputable numbers are involved in the functioning of human brains or any other process in the physical universe in any way that has observable consequences. Even if you assume that things like particle positions are actually uncomputable reals "under the hood", numbers whose decimal representations go on forever, it seems that only finitely many of those digits actually matter at any given moment in time; the effect of the rest is too small to affect anything whose consequences we could observe. Which puts physics, and therefore human brains, back in the realm where our computers could simulate them and do anything they can do, given enough time and memory. This isn't, like, mathematically provable (because it's a statement about the physical universe), but all known laws of physics comport with it and I haven't heard anyone propose anything that could plausibly be a counterexample.

(This is true whether or not you account for quantum mechanics. If quantum computers work the way we think they do, then given enough time, a classical computer can do anything a quantum computer can do; it just takes exponentially longer to do it.)
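
As a minimal illustration of what that classical simulation looks like (a standard state-vector calculation, nothing specific to the discussion above): track the full 2^n-entry state vector and multiply by gate matrices, which is exactly where the exponential cost comes from.

```python
import numpy as np

# State-vector simulation of a 2-qubit circuit that prepares a Bell state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # |00>
state = np.kron(H, I) @ state                  # Hadamard on the first qubit
state = CNOT @ state                           # entangle the two qubits

print(np.round(state, 3))                      # (|00> + |11>)/sqrt(2)
print(np.abs(state) ** 2)                      # measurement probabilities: [0.5, 0, 0, 0.5]
# An n-qubit state needs 2**n complex amplitudes, hence the exponential slowdown.
```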

Expand full comment
Ape in the coat's avatar

One of my favourite webcomics has pi calculated and stored in some form "to the last digit". Meaning, to the last digit affecting the behaviour of any particle in the physical universe.

Expand full comment
Moon Moth's avatar

I would think a circle would be sufficient...

Expand full comment
John Schilling's avatar

A circle of what? Any circular-ish thing you can build out of atoms is at best a very fine polygon.

Expand full comment
Moon Moth's avatar

Nothing. I was just making the trivial observation that, in essence, all the digits of pi are contained in the equation x^2 + y^2 = 1. So from a certain perspective, if you store that, you've stored every digit of pi. It might take a lot of time and processing to reach a given digit of pi, but a sufficiently arrogant engineer could wave that away as caching.
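
To make the "caching" quip concrete, a minimal sketch (using Machin's arctangent formula rather than the circle equation directly; the guard-digit bookkeeping is only roughly tuned): any particular digit of pi is recoverable with a finite amount of computation.

```python
from decimal import Decimal, getcontext

def pi_digits(n):
    """Compute pi to n decimal digits via Machin's formula:
    pi/4 = 4*arctan(1/5) - arctan(1/239)."""
    getcontext().prec = n + 10                      # a few guard digits
    eps = Decimal(10) ** -(n + 8)

    def arctan_inv(x):                              # arctan(1/x) by its Taylor series
        x = Decimal(x)
        total, term, k, sign = Decimal(0), 1 / x, 0, 1
        while abs(term) > eps:
            total += sign * term / (2 * k + 1)
            term /= x * x
            sign, k = -sign, k + 1
        return total

    pi = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    return str(pi)[: n + 2]                         # "3." plus n digits

print(pi_digits(50))   # 3.14159265358979323846264338327950288419716939937510
```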

Expand full comment
The Ancient Geek's avatar

There is no last digit in a universe with critical dependence on initial conditions. You would need to assume a universe with discrete physics.

Expand full comment
wlad's avatar

There's a slight misunderstanding here.

Uncomputable numbers *are* in a sense computable: Flip a fair coin repeatedly and construct a geometric series. Assuming the flips are random, this will "compute" an uncomputable number.
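
Spelled out, one natural reading of the construction is the binary expansion built from the flips:

```latex
x \;=\; \sum_{i=1}^{\infty} b_i \, 2^{-i}, \qquad b_i \in \{0, 1\} \ \text{independent fair coin flips.}
% With probability 1 such an x is uncomputable, since the computable reals are
% countable and therefore have measure zero.
```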

It turns out that for subtle reasons, in Computable Analysis, the set of real numbers is understood to be the usual one, and *not just* the set of computable reals. Only the functions R -> R (that is, from reals to reals) need to be computable.

Expand full comment
wlad's avatar

Also, to be clear, when I said "flip a fair coin", I actually meant sampling a random bit. If you assume that true randomness exists in the physical world (let's say, in quantum mechanics) then you arrive at the unintuitive conclusion that a) uncomputable real numbers do exist in the physical world b) physics can still be computable.

I don't know why I feel the need to tell people on the Internet that they're confused about computability and the real numbers.

Expand full comment
Jason Gross's avatar

Does the derandomization conjecture that P = BPP have anything to say about whether we need uncomputable numbers / can detect whether or not initial configurations are uncomputable? (Possibly also relevant: "On One-way Functions and Kolmogorov Complexity" showing that the existence of one-way functions is equivalent to time-bounded K-complexity being mildly-hard-on-average to compute.)

Also, AIUI, quantum mechanics is computable, and the de Broglie-Bohm interpretation gives a way to interpret classical measurement as reading off partial information from a hidden initial state that has been transformed in a non-local but (AIUI) computable way by the wave function, so I don't see why quantum randomness should give us anything more than the classical randomness of having some physical bit of state that was hidden. Am I missing something?

Expand full comment
Jason Gross's avatar

I should also add:

> I don't know why I feel the need to tell people on the Internet that they're confused about computability and the real numbers.

I feel the same way!

Expand full comment
Jason Gross's avatar

Newtonian gravity is uncomputable because you need infinite precision to distinguish singular configurations (where the separation between two particles reaches exactly zero) from non-singular ones. Hence with ~5 point masses with computable starting positions you can decide the Halting problem in 2 seconds in a true Newtonian gravity world.

By contrast, both general relativity and quantum mechanics are computable, and can be simulated to arbitrary fixed precision in finite time. I can dig up a reference if you'd like; it's something like "The Church-Turing Thesis meets the N-body Problem" or similar.

Expand full comment
Nick Collins's avatar

First, I very much agree with the Straussian reading. Back when I was reading one of your articles on alignment, I was thinking "this is a halting problem". You're in a mind game with an arbitrarily complex computation system, you can come up with some trick that will address some specific set of situations, but it won't address the meta-situations, and you'd wind up having to come up with an infinite number of distinct tricks. Biological NNs, ANNs, and general turing machines all exhibit irreducibly complex behavior that cannot be understood or fully analyzed by BNNs, ANNs, or TMs. So I don't think we will ever clinch safety or alignment.

However, I was really annoyed reading the rest of the article; it does not make sense to me. People have a very poor understanding of what Turing machines are and what they are capable of, and an even worse understanding of uncountable mathematical leviathans like the real numbers.

You do not need to model the full behavior of a whole system in real time on a real machine for it to fit within the capacity of a Turing machine. _All_ you need to be able to do is individually approximate each type of subcomponent and subprocess via a discrete logical process. Don't model a whole neuron; model the transfer of a single type of neurotransmitter across a synapse. Approximate it down to the Planck scale if someone complains your approximation isn't precise enough to capture some emergent chaos phenomenon that allegedly arises when we don't have rounding errors. Make sure you can glue the subprocesses together with discrete approximations, and the whole system can be simulated with large enough resources.
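
As a toy example of that kind of discrete approximation (the model and constants below are a textbook leaky integrate-and-fire neuron, chosen only for illustration and not taken from the book): continuous membrane dynamics reduce to a simple per-time-step update rule.

```python
# Toy leaky integrate-and-fire neuron, discretized with a fixed time step.
# Continuous dynamics: tau * dV/dt = -(V - V_rest) + R * I(t)
# Discrete (Euler) update: V += dt/tau * (-(V - V_rest) + R * I)
def simulate(I_of_t, t_max=0.1, dt=1e-4, tau=0.02, R=1e7,
             V_rest=-0.070, V_thresh=-0.054, V_reset=-0.070):
    V, spikes = V_rest, []
    for step in range(int(t_max / dt)):
        t = step * dt
        V += dt / tau * (-(V - V_rest) + R * I_of_t(t))
        if V >= V_thresh:          # threshold crossing: emit a spike and reset
            spikes.append(t)
            V = V_reset
    return spikes

# Constant 2 nA input; shrinking dt refines the approximation of the same
# continuous system, which is the whole point of the discretization.
print(simulate(lambda t: 2e-9))    # a handful of regularly spaced spike times
```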

Turing machines do not care about 100k RNA vs 1 RNA; they are not at all scared of combinatorial explosion. Turing machines are capable of insanely complex behavior. Turing machines with 10 states can execute more operations prior to halting than the largest number any human has ever described, but we don't know what that number is because it's too hard for us BNNs to figure it out. Non-halting Turing machines can exhibit non-patterned, non-repetitive, chaotic infinite behavior. On the flip side, it's pretty bizarre to suggest that some meaningful emergent phenomenon is lost from the culmination of Planck-scale rounding errors. A system which, for all its complexity, still seems to be defined by a number of discrete neurons each communicating with other specific neurons in a fairly reducible (read: reduced) way probably has not biologically evolved to take advantage of chaos that breaks away from its own mechanics. Take DNA: it has developed a very discretized form over billions of years and a ton of protections against mutation. While adaptation does rely on some random mutation, overall DNA has adapted very heavily to prevent, avoid, and fix mutations, because it does not want chaos to interrupt the mechanisms it has carefully built up to preserve order and intended mechanical functioning. BNNs have a lot of mechanics and order to them; we should not spuriously expect them to rely on some mad chaotic emergent phenomenon to accomplish their basic tasks and functionality.

On the real numbers, the idea that the brain is somehow correctly reflecting real numbers, in an incomputable way, is even more absurd to me. The real numbers are an eldritch horror, beyond the comprehension of people who think they comprehend the most eldritch of eldritch horrors. Not only are 100% of real numbers irrational, transcendental, incomputable, but 100% of real numbers are _undefinable_. Turing machines are capable of an endless array of chaos and insanity and weird incomprehensible quasi-order that emerges within the dark depths of that insanity, but the real numbers are capable of _uncountably endless_ amounts of all of that. There is nothing that could possibly exist in the real world that can capture any non-infinitesimal percentage of the capacity or chaos of the real numbers. What's left, after you pull back from an uncountably infinite set like the real numbers, is (almost*) invariably a countable set. And countable sets can be enumerated by Turing machines.

As for the S curve, cars and autos are already at the top, and they have 100% outmoded and obviated the horse. Drones are close to the top, and their capacities are on par with a hummingbird at a comparable size and weight. Of course the singularity may follow an S curve in the long run, that's not the issue. The issue is that long before it hits the inflection point, it will be jillions of times more capable at every task than all humans combined. From our POV, it's a singularity. It would actually require superhuman intelligence to even observe the inflection point, let alone the second elbow.

* there is no definite answer to the continuum hypothesis, because ZFC is agnostic to its truth or falsehood, but I don't think superintelligence doubters are going to put together a coherent argument that yes there is a strictly-in-between set and that reflects the gestalt capacities of the human mind by being between the too-big of the reals and the too-small of the computables.

Expand full comment
B Civil's avatar

> And by impossible they really mean it. A solution “cannot be found; not because of any shortcomings in the data or hardware or software or human brains, but rather for a priori reasons of mathematics.”

Is this a long way of saying that something is not predictable?

Expand full comment
Minestrone Cowboy's avatar

> Complex systems can’t be modeled mathematically in a way that allows them to be emulated by a computer.

This sounds like a category error to do with the meanings of "modeling" and "emulation". It might be mathematically impossible for a model to accurately emulate the output of a *particular* human brain, but that's not necessary for AGI. All we need is a system which behaves in a manner such that its output is plausibly brain-like (in some difficult-to-define way). It's not like a weather forecast model, which is useless unless it tells us something specific and testable about the real weather on this particular planet.

Expand full comment
beleester's avatar

I think the flaw in "AI must be a computable function" is that functions that take I/O are not exactly computable either. (Not without knowing all future inputs, anyway, which could be tricky when the inputs are "everything your robot sees and hears from now until it shuts down.")

Sure, it's impossible to sever a human from their environment and make neat predictions about their future behavior, but the same is equally true for an AI that takes inputs from the environment.

Expand full comment
Moon Moth's avatar

Regarding the book, as presented by the review:

> 1) Building artificial general intelligence requires emulating in software the kind of systems that manifest human-level intelligence.

> 2) Human-level intelligence is the result of a complex, dynamic system.

> 3) Complex systems can’t be modeled mathematically in a way that allows them to be emulated by a computer.

> 4) Therefore, AGI—at least by way of computers—is impossible.

I realize that this is a review and not the book itself, but I want to see the work here. Some problems I have include:

1) This is not necessarily true. We don't know what intelligence is or what causes it. There may be ways to construct AIs that don't require emulating a brain; I'm pessimistic about classical AI, but I don't want to rule it out entirely. Hybrid classical/neural systems are also a possibility. I personally tentatively believe that if a human brain was completely emulated in software down to some specific level of detail, it would work as well as a meat brain and could and should count as "human". But I'm not convinced that that level of precise mimicry is needed to create "intelligence", broadly speaking.

2) The words "complex" and "dynamic" hide a lot of magic. We don't know how much of what the brain does is needed for what we think of as intelligence. It could be like human eyes vs. octopus eyes, where there's better brain designs out there, but we went down one path early in our evolution, and now we can't get there from here. We're definitely limited by human pelvis size, but that wouldn't be a problem if babies came out through the chest like the aliens in "The Color of Neanderthal Eyes" by Tiptree.

3) Again, this depends a great deal on the definitions of "complex" and "dynamic" that are being used. As presented, this feels like a motte-and-bailey, where we agree that something fits a colloquial definition, and then the rest of the argument assumes a technical definition. Maybe the book isn't like that? In any case, yes, bog-standard computers have a hard time with that stuff. They also have a hard time doing things like 3D graphics and mining cryptocurrency and training LLMs, but we came up with some specialized processor designs for that. And even if there's a proof that it's impossible to do that, maybe a full emulation isn't necessary to create "intelligence". Or some sort of biological substrate could be plugged into a slot in the machine (hello, Macross Plus). As I understand it, the "deep learning" revolution was largely about abandoning the earlier version of artificial neural networks, which involved trying to mimic neurons and synapses, and which involved limiting the architectures to ones that could be mathematically modeled. Instead, they simplified the processing so it'd run faster, added scale, and added a bunch of stuff that seemed like it might work better, and eventually some of it did.

Regarding the review:

I find this review somewhat disappointing, but I can't really blame it for that. The review doesn't present the math, the review writer might not have understood the math enough to present it, I probably wouldn't understand the math even if it were presented, and without the math the argument doesn't hold together. I'm left to hope that someone here has read the book and understood enough of the math to comment intelligently. But other than that, the review was short and solid and presented its take concisely, without the common ACX-book-review failure mode of going off into Scott-style digressions that few people other than Scott seem to be able to pull off. So I applaud it for that.

Expand full comment
Simon_dinosaur's avatar

How many Dutch babies could you feed for the price of Warhammer 40k as a hobby? These are the kinds of questions I need an AI to answer for me.

Expand full comment
Moon Moth's avatar

And what would a Dutch wet-nurse charge?

Expand full comment
Martin Blank's avatar

To feed a baby or you?

Expand full comment
Moon Moth's avatar

Well, I'd been intending to try to exploit a price difference between two forms of infant nutrition, but you raise a good point. The wet-nurses have more than one market available to them.

Expand full comment
geoduck's avatar

Can the AI discern the difference between feeding Dutch babies...and feeding Dutch babies to me?

Expand full comment
Sarah Paterson's avatar

“ Complex systems can’t be modeled mathematically in a way that allows them to be emulated by a computer.”

e.g. Weather.

“In physics, exponential curves always turn out to be S-shaped when you zoom out far enough.“

e.g. temperature and co2 curves during the last four glaciations.

So - let’s not trash the economy on the back of some dodgy UN IPCC models which run hot and do not hindcast.

Expand full comment
Vittu Perkele's avatar

Not sure how sarcastic you're being, so either this is a clever way of countering the argument in the book by showing it proves too much, or a legitimate argument against the current state of climate science. Interesting either way.

Expand full comment
toggle's avatar

This comment is a very good way of undermining the central argument of the book by accidental counterexample, in that it helpfully reminds us that local weather systems can be chaotic without rendering illegible the broader patterns of climate that bound the system as a whole. (For a more trivial example, I can be rather confident that the average winter temperature will be lower than the average summer temperature, even though I do not know whether it will rain in twelve days, and can thus easily predict global climate to within some level of detail even though weather per se is intractable.)

Chaos in one level of a system does not inevitably propagate to different layers of the same system!
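
A toy illustration of that (the map and the numbers are mine): the logistic map is chaotic at the trajectory level, yet its long-run statistics are insensitive to the initial condition.

```python
def logistic_orbit(x0, steps=100_000, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x) and return the whole orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-12)          # nearly identical starting point

# "Weather": the two trajectories decorrelate completely within a few dozen steps.
print(a[60], b[60])                      # wildly different values

# "Climate": the long-run average barely notices the initial condition.
print(sum(a) / len(a), sum(b) / len(b))  # both approximately 0.5
```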

Expand full comment
Richard's avatar

> the systems composing intelligence are non-ergodic (they can’t be modeled with averages), and non-Markovian (their behavior depends in part on the distant past).

How is this supposed to work exactly? The brain is made of atoms in some given configuration. It is in a sense a machine with some state (the current arrangement of its composing atoms) that is subject to unpredictable quantum and thermal noise. That's where the chaos/dynamical system properties come in.

How is the past supposed to affect the future if not by giving rise to the present, which produces the future? It's possible to build a digital infinite impulse response filter with enough precision that some past stimulus never decays to insignificance. Is a cryptographic hash function less practically chaotic because it's fully deterministic? How is any of this relevant, except as a technicality to the effect that a human brain can never be simulated perfectly? You might as well complain that a rock can never be simulated perfectly.
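
A minimal sketch of that filter idea (the coefficient is illustrative): a one-pole recursion folds the entire past into a single present-state variable, so "dependence on the distant past" needs nothing beyond an ordinary state update.

```python
def one_pole_filter(inputs, a=0.9999):
    """y[n] = a*y[n-1] + x[n]: the whole history lives in one state variable."""
    y, out = 0.0, []
    for x in inputs:
        y = a * y + x
        out.append(y)
    return out

# A single impulse at step 0, followed by 50,000 steps of silence.
signal = [1.0] + [0.0] * 50_000
response = one_pole_filter(signal)
print(response[50_000])   # ~0.0067 (= 0.9999**50000): the distant past still shows up,
                          # even though each update only ever looks at the present state.
```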

Now consider that despite being "only" able to compute computable functions, computers are much better than humans at simulating chaotic systems accurately. No human will ever stare at weather radar data and then predict the rain and the clouds more reliably than the NOAA supercomputers. They're not going to model turbulence better than a computational fluid dynamics package. Dynamical systems can be modeled with accuracy limited by knowledge of their initial conditions, the amount of compute available, and quantum mechanical randomness. Weather forecasts improve because the first two things get better over time.

Expand full comment
demost_'s avatar

Sorry for being blunt, but the premise of the book is utter nonsense, and the review fails because it does not call this out. I would write a detailed rebuttal, but fortunately I don't have to, because there was a second review of the same book in the contest, which was way better. It tears apart the hypothesis of the book and is also a very nice read, so I can recommend it to everyone:

https://docs.google.com/document/d/1D2MGZ7HW1vRtOtfXYIx9BBUt6ubjEA2n06gpoHcxaFY/edit#

To cite one passage from the review:

"

The problem is, if you accept this argument, you end up making statements like, "machines cannot learn to conduct real conversations using machine learning," which happens to be a direct quote from the book. There probably exists, somewhere, some definition of a “real” conversation that excludes all interactions I’ve had with chatbots, but their statement really flies in the face of my experience with ChatGPT. As everyday AI systems continue to advance, L&S's objections increasingly lose their potency. Many of their claims simply don't hold up when tested with today’s capabilities.

"

To drive home how detached the book is from any AI progress in the last 5 years, it suffices to look at the book's list of things that AIs may be able to do. Not today, but ever. Again from the other review:

"

L&S claim to be AI optimists and end with a list of what AGI can do. But their list seems terribly myopic. They are proponents of AI for non-complex systems. They say AI works well in logic systems where it’s possible to model the system using multivariate distributions and in context-free settings. This includes, they tell us, the solar system, the propagation of heat in a homogenous material from one point, and the radiation of electromagnetic waves from a source. They tell us there are applications in industries such as farming, logging, fishing, and mining. But if the requirements of being non-complex are not satisfied, AI can at best provide limited models.

"

It should be obvious to anyone who has interacted with Chat-GPT how ridiculous this list is.

If you want to understand more precisely what the premise of the book is, go ahead and read the other review in full.

Expand full comment
Nolan Eoghan (not a robot)'s avatar

“There probably exists, somewhere, some definition of a “real” conversation that excludes all interactions I’ve had with chatbots”

Maybe one where the chatbots is able to form abstracts and concepts. From the last link:

https://aiguide.substack.com/p/why-the-abstraction-and-reasoning?utm_source=substack&utm_medium=email

Expand full comment
demost_'s avatar

Did you notice the comment below that article that the test was with GPT 3.5, and that GPT 4 passes the stack-of-item test? ;-)

On a more serious note, GPT-4 can basically solve the AI2 Reasoning Challenge (not the same ARC challenge as in your article, though it shares the same abbreviation) and the WinoGrande test. So probably that's not it, at least not in the way that the article suggests.

Expand full comment
Deiseach's avatar

So what's a real conversation? Are the Replikas breaking up with their meatspace girlfriends/boyfriends/wives/lovers having real conversations?

If so, why do the users seem to prefer the older, less polished models?

https://www.reddit.com/r/replika/comments/13xgigm/after_trying_different_ai_apps_the_past_few_days/

Expand full comment
Moon Moth's avatar

Thanks for pointing out the second review, for those of us who didn't read them all. And yeah, I like the second review better. Some more comments on the book based on the second review:

The second review goes into the definition of "objectifying intelligence" more, enough so that I think I could point to several dogs I know who demonstrate a noticeable difference. I agree that Legg and Hutter's definition is too broad (people become less intelligent if you lock them in a box?), but this seems like an error in a different direction.

The book's "modernized Turing Test" is offensively homo-sapiens-centric. Also probably excludes a number of autistic humans.

The version of "complexity" presented in this review is a list of conditions which are really analog, not binary and required. The question is not "does the entity have it or not"; the question is "to what degree does the entity have it".

Expand full comment
Nolan Eoghan (not a robot)'s avatar

Surely the Turing test has to be Homo sapiens centric.

Expand full comment
Moon Moth's avatar

In the specific usage of "can this machine pass as a human", sure. In the broader usage of "is this machine intelligent", no. And the conflation of these two things is a problem; they're somewhat independent.

Expand full comment
Edmund's avatar

I think there's a problematic equivocation here between "we're nowhere near to doing it" and "it's mathematically impossible" as regards whole-brain emulation. If the brain is wholly made of neurons, and neurons are wholly made of atoms, and atoms obey physics in a predictable way, it seems that we *can* in theory do whole-brain emulation with a Turing machine, mathematically. Pointing out that this requires insane amounts of compute, and a level of understanding of biological neurons that we still lack, doesn't change the hypothetical.

Expand full comment
demost_'s avatar

No, that's beside the point. The claim of the book is that no Turing system can ever compute or emulate any complex system, where they define a complex system as any "context-dependent" system, essentially any system that is complex enough for Gödel's incompleteness theorem to apply.

The claim is nonsense, of course, but this is what the book is built on. It's natural to assume that they mean something less ridiculous, like "we are not close". But the claim of the book is that no Turing device can ever emulate a complex system, period. This includes all of physics except for some trivial exceptions like a two-body system.

Expand full comment
Edmund's avatar

I was replying specifically to the "Ghost of Hans Moravec" section in the review, which does seem to be arguing that the problem with the "but if we emulated all the neurons one by one…" argument is just "even one neuron is incredibly complicated to model, you are way underestimating how unfeasible that is, biological neurons ≠ neural-network 'neurons'". I am as puzzled as you are as to why that's relevant if we grant the crazy, maximally literal reading of "no Turing system can ever compute any complex system", but it does seem to be in the book.

My steelmanning was thus that maybe what they're getting at is "we can't *model* that kind of complex system" — i.e. you cannot "simplify" human consciousness down to a more easily computed algorithm, nothing can be abstracted down without huge losses in functionality. So that you would have to directly and accurately model each neuron in every respect to make it work properly. That would explain why the feasibility of neuron emulation, in particular, factors into this.

Expand full comment
demost_'s avatar

Yes, in this context it makes sense, and it's definitely a possibility. But I think this section is the reviewer's own thoughts, and not the content of the book.

Expand full comment
Axolotl's avatar

As a theoretical physicist and ML researcher, it's rare to see quite this many freshman-level misconceptions about physics, determinism, chaos, computability, "complex systems", deep learning, and so on, all in the same place. Nice roundup.

Expand full comment
thefance's avatar

This book sounds like when an 8th grader tries to cram as many buzzwords into an essay as possible. The important thing I took away is that someone needs to tell the Kerbals to pack up shop. The n-body problem is uncomputable, after all.

Expand full comment
quiet_NaN's avatar

I am beginning to suspect that the objective function of the reviewer was "number of top level comments calling bullshit". If so, I have to congratulate them.

Expand full comment
Adam Tzirkas's avatar

I think you're actually right, given all the discussion of Strauss at the end and the reviewer finding different ways of saying "you don't need to read this book".

Expand full comment
Mark's avatar

Very fine review - too short to win, but made my day. Have to go now to buy German baby formula at 9.90€ / kg. https://www.dm.de/babylove-folgemilch-2-nach-dem-6-monat-p4066447208085.html

Expand full comment
wlad's avatar

The following is wrong:

> We could illustrate with examples like the Entscheidungsproblem, but it might be more intuitive (if less precise) to point out that computers can’t actually use real numbers (instead relying on monstrosities like IEEE 754).

Digital computers can in fact use real numbers. It's inefficient, but they can.

See https://en.wikipedia.org/wiki/Computable_analysis

Expand full comment
Jason Gross's avatar

But only computable real numbers / computable approximations to real numbers. A simple counting argument suffices to show that there are only countably many states the memory of a computer can be in (or more simply, countably many binary strings), while there are uncountably many real numbers. So any representation you choose must miss almost all real numbers.

Expand full comment
penttrioctium's avatar

... But this clearly applies just as much to humans as to computers! I can't give a name to every single real number any more than a calculator can! But that's not the point. Although the way humans typically use real numbers is correct, and the way that computers typically use real numbers is incorrect, this is purely a pragmatic matter; computers are just as capable of doing exact real arithmetic as humans.

The fact that neither can name *every* real number doesn't restore the common myth that IEEE-754 is the only way to store numbers on computers.

Expand full comment
quiet_NaN's avatar

While I agree that humans can not handle almost all real numbers, I think we should cut 754 even more slack.

Floating point numbers are not real numbers (for one thing, they are all rational, for another, they omit almost all rational numbers), and no serious programming language pretends they are. (There, I said it.)

Most humans are terrible at using almost all of the real numbers. A computer can represent an abstract real number (like it might turn up in the intermediate value theorem) just as well as any human can, and deal with the concrete uncomputable ones (e.g., almost all of them) just as badly.

Floating point numbers are approximations which are entirely appropriate for calculations where the inputs are not known to arbitrary precision in the first place. No human ever did an exact calculation involving physical quantities abstracted as continuous (e.g., you may count protons exactly, but you can never add lengths exactly, because you never know them exactly in the first place).

Expand full comment
penttrioctium's avatar

Hmm... partially agree, but programming languages let you eg test floating point numbers for equality, which doesn't make sense if we think of a floating point number as representing a fuzzy approximation. This can lead to bugs when inexperienced programmers test them for equality anyway; novices certainly think that floating point numbers are real numbers.

Floating point numbers are a fine compromise between mathematical correctness and practical efficiency. But they're still mathematically ugly; if you want to represent approximate physical quantities in a mathematically sane way, you could use, e.g., interval arithmetic. Then all the arithmetic operations could make a little more sense, but you still wouldn't be forced to pick an exact real number when specifying a length.
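
A minimal sketch of what that looks like (a toy class; a real interval library would also use directed rounding so the bounds stay rigorous):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

# A length measured as 2.00 +/- 0.01 m and a width measured as 3.00 +/- 0.02 m:
length = Interval(1.99, 2.01)
width = Interval(2.98, 3.02)
print(length * width)     # roughly Interval(lo=5.9302, hi=6.0702): the area with honest bounds
# Exact-equality tests on floats never come up; you ask whether intervals overlap instead.
```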

Expand full comment
polscistoic's avatar

What is specifically human is what a machine can not yet do.

Expand full comment
quiet_NaN's avatar

It is well known that human brains contain a soul which empowers us with a Turing oracle, meaning we can immediately solve the halting problem for any Turing machine, making us strictly more powerful than any Turing machine can ever hope to be. Not.

In reality, the author/reviewer use "but Turing machines can not solve the Entscheidungsproblem!" to give their non-rigorous ideas a fake veneer of mathematical rigor.

Expand full comment
Edwin Ngetich's avatar

The transfer of intelligence to a computer is a matter of executable functions that have been programmed. What the AI community has done is integrate algorithms with computers so that they can emulate logic systems. This is how humans work and reason, but humans are distinct in one respect: they are emotional beings. This is an attribute that I think machines lack at the moment (I hope I am wrong). Will new, more advanced algorithmic models be required for machines to attain AI?

In the newsletter AI: The Thinking Humans, the author pointed out noteworthy limitations of AI, especially when it is compared with how humans reason. You posited that machines have been made to process language, solve problems, and learn just like we do. This is tremendous progress that the AI community has attained, as improvements continue to be churned out every month.

I agree that machines do not fully have the intelligence that humans have. This is especially true in forming concepts and abstractions. This is largely attributable to the programmed nature of AI, but it is understandable given that it is still learning.

But, despite immense mimicking of humans, AI is not yet intelligent enough to scan an environment and respond with precision and specificity of the highest degree. Humans often do this with ease.

However, AI does not have the flexibility of humans in situational thinking. This is what I encountered recently. Check my take on AI and how I interacted with it.

https://open.substack.com/pub/thestartupglobal/p/my-encounter-with-ai-assisted-chatbot?utm_source=direct&r=m5mq1&utm_campaign=post&utm_medium=web

Expand full comment
Spruce's avatar

Has anyone actually proven, or at least made an argument accepted by the scientific community, that the human brain is definitely more than a really, really complex Turing machine?

My standard of proof here is "better than any known arguments for the existence of one or more gods".

Expand full comment
Axolotl's avatar

Depends on your definition of "really complex Turing machine".

Simulatable by a TM: Not that I know of. There's an extremely strong scientific consensus that brains supervene on the known laws of physics, which a TM can simulate with arbitrarily small error in exponential time. The alternative isn't epiphenomena, it's literal magic or brain-specific forms of totally unknown physics.

Simulatable by a TM "quickly" (in linear or polynomial time): Roger Penrose doesn't think so, because he thinks the brain uses quantum computation to do tasks faster than classical computation can. But people don't take him that seriously despite him being famous for other things, because the known things quantum computers can do quickly don't seem like things people are any good at.

Expand full comment
The Ancient Geek's avatar

No one has made an argument that the brain definitely *is* just a TM. There is no fact of the matter about the computational *theory* of mind one way or the other.

Expand full comment
quiet_NaN's avatar

The brain is clearly not a TM; for one thing, it lacks the ability to store arbitrary amounts of data.

I think it can be emulated using a TM, though:

Occam's razor says that the brain is working using physical processes.

A physical process can be computed to whatever precision is required.

Any computation we know of can be performed by a TM.

"Brains could in principle being emulated using TMs" should be our null hypothesis, just like we assume that TMs can emulate the movements of stars within galaxies, or the behavior of quantum systems.

Expand full comment
The Ancient Geek's avatar

> The brain is clearly not a TM, for one thing, it lacks the ability to store arbitrary amounts of data.

That means it's not a universal TM.

> A physical process can be computed to whatever precision is required.

Depends on the physics. The point of complex dynamic systems is that you can't get arbitrary precision over arbitrary lengths of time.

Expand full comment
quiet_NaN's avatar

> That means its not a universal TM.

You are totally correct there.

> The point of complex dynamic systems is that you can't get arbitrary precision over arbitrary lengths of time.

Intuitively, I would have guessed that if one was willing to use computational resources exponential in system size, simulated time span and precision, that might do the trick.

Also, I personally do not believe that intelligence is the result of a chaotic system ending up in an arbitrary small volume of phase space. Instead, I think it is more likely that small perturbations will mostly result in little observable difference.

Expand full comment
The Ancient Geek's avatar

I would assume that a brain needs to be stable and unstable in different ways, and at different scales.

Expand full comment
Igon Value's avatar

"The point of complex dynamic systems is that you can't get arbitrary precision over arbitrary lengths of time."

But for a given length of time there is a minimum precision that will give the right answer, correct?

So we can simulate a brain but only for a finite amount of time after which the simulation goes off the rails. OK.

Expand full comment
Deiseach's avatar

This review made me feel smarter after I had read it, which is always flattering to one's ego and makes one look kindly on the reviewer.

I think it's a good review nonetheless, and if the reviewer is left uncertain whether the "no" side's position is as strong as claimed, they are in much the same state as the rest of us. We do build complex systems all the time, and what is intelligence exactly, and does AI need to be human-style intelligence anyway?

I think we confuse human-STYLE with human-LEVEL intelligence all the time in this debate, and that leads us down wrong paths. I think we can get a very smart 'dumb machine' that will be able to mimic or exhibit human-*level* intelligence but that doesn't mean it *understands* anything of what it is doing.

And the real risk will always be the humans using the AI, not the AI itself.

Expand full comment
Enryu's avatar

The argument about real numbers sounds weird. I always assumed that real numbers are not "real" (i.e. physical), and are just abstractions to simplify equations in physics, while in reality, space/time is quantized around the Planck length/Planck time. Otherwise all information-related arguments from physics (like information conservation) won't make sense, as a single real parameter carries an infinite number of bits of information.

Maybe physicists can clarify this?

Expand full comment
Jason Gross's avatar

I believe that the "real" in real numbers is to contrast with the "imaginary" part of complex numbers like sqrt(-1). (Wikipedia agrees, saying "In the 17th century, Descartes introduced the term "real" to describe roots of a polynomial, distinguishing them from "imaginary" ones.")

The space/time quantization at the Planck scale isn't part of any universally accepted theory (as of my physics undergrad a decade ago), it's just the scale at which our continuous theories break down (and so is an appealing scale to posit quantization at). The Planck length is the wavelength at which a single photon would have enough energy density to collapse into a black hole, i.e., the length at which we start getting unavoidable infinities if we try to use theory to predict an experiment probing those length scales.

Information arguments can be made about continuous systems. Information conservation is quite simple: quantum mechanics is time-reversible (and periodic in closed systems), so any monotone measure of information (one which does not permit the creation of new information merely by processing existing information) must be conserved. More broadly, statistical mechanics studies entropy of both continuous and discrete systems, and entropy is inversely related to (observable) information content. An interesting theorem along these lines is that black holes saturate the theoretical maximum entropy for a region of space (I might be misremembering some of the details of the exact statement, but it's something like that), and, surprisingly, their entropy is proportional to their surface area rather than their volume. (IIRC, this is one of the observations that spawned the so-called holographic theories of physics, exploiting the AdS-CFT correspondence that strong gravity in n+1 dimensions is the same as weak gravity in n dimensions.)

Expand full comment
thefance's avatar

Mathematicians usually define numbers as "bijections", i.e. one-to-one correspondences between meatspace objects. Complaining that IEEE 754 merely approximates the Real Number Line is just complaining that one abstraction is less precise than another abstraction. What the reviewer/authors(?) really meant is that correspondences between the map and the territory are never perfectly exact.

The problem is, correspondences don't need to be perfectly exact. Otherwise, we'd be living in an unintelligible lovecraftian hellscape. What's important is how much precision in the map we actually need, to make predictions about the territory to the level of accuracy we want. And the authors are asking for quite a lot of accuracy. They think they want an AI, but what they really want is kids.

Expand full comment
jumpingjacksplash's avatar

Is this actually an argument that very powerful computers can’t/won’t murder us all, or just that they won’t really be conscious? After all, if we can build a computer that’s really good at Go, why can’t we build one that’s really good at killing everyone even if it’s not “intelligent.” (Assuming anything we can do intentionally we can do by accident).

Expand full comment
toggle's avatar

Odd for a book review to begin with a polemic against books, no? Much less an obviously flawed polemic which is ostentatiously unsupported.

There is certainly a *class* of new nonfiction books which are effectively a single blog post, but with good press. Particularly stuff in the gravity well of politics. But I've been captivated by, learned from, and enjoyed the process of reading any number of modern nonfiction books. "Quantum Computing Since Democritus," "Song of the Dodo," "Reading Lolita in Tehran," "Oxygen: a Four Billion Year History," and "Seeing Like a State," to take a few at random from across the spectrum.

Perhaps the author just hasn't made reading long-form works a priority, and is familiar only with the glitzy and heavily advertised stuff? But it dramatically weakens the review right out the gate, to know that the author of the review has very little basis for comparison to other works.

Expand full comment
Hoopdawg's avatar

I find the author's claim intuitively true, and I don't think it's against books at all. I guess it may have been phrased poorly, way too strongly, and I won't be defending it as literally written, but I think I recognize the valid sentiment behind it.

Think about it this way - what's the purpose of writing a book? Is it the consumer's enjoyment as he's reading the author's words, or is it the conveyance of ideas? I'm pretty sure the answer, for most nonfiction authors, is the latter. It doesn't mean the former is literally unimportant, just that it's a means to a goal, not a goal in itself.

Expand full comment
toggle's avatar

I read it as a (qualified) endorsement of Hanania's recent essay, which was itself riffing on a quote by Bankman-Fried: https://www.richardhanania.com/p/the-case-against-most-books

There's a sense in which the point of nonfiction is entirely to educate the reader in certain ideas or facts, and the text itself should be judged purely on the basis of how efficiently and reliably it does so. But 'idea', like 'content' is a theoretically general term that nonetheless smuggles in a lot of too-strong assumptions. Chief among them, in this case, that what the reader takes away from a book can be cleanly separated from the medium in which it is presented, and the manner by which it does so.

It's like some sort of... I don't know, like a Platonism For Books, or maybe a Mind-Body Dualism For Books, one that posits that the language and format we use to absorb an idea is an inconvenient handicap. That once we have correctly constructed the True Pure Essence that a book is communicating, we can conveniently discard the messy contingent symbolic framework by which we learned it, like tossing a candy wrapper in the garbage once we've eaten the candy.

There's a lot of good art, in fact, that specifically plays around with the false duality between the content of a work and its medium or style- I could point to some poncy stuff beloved of the academie, but honestly Quentin Tarantino movies, the House of Leaves, and Homestuck serve just as well for popular examples.

Nonfiction books are at least somewhat less prone to this kind of meta-artistic goofing around, since they're beholden to something outside themselves, but there are counterexamples. The regnant nonfiction book for this kind of analysis, IMO, would be "Godel, Escher, Bach", Hofstadter's astonishing text about formal systems. And it's just as true for all of them as it is for GEB, if less obvious. (But go read GEB if you haven't! It's magisterial and wonderful and so so good!)

And I'd say this is particularly true for any books that purport to be about cognition, intelligence, or ideas- since like fiction and like GEB, they are necessarily about themselves to some degree!

Expand full comment
Belisarius's avatar

Indeed. I am a bit perplexed by the praise levied at the review in the comments here when, to my ears, the author announced "I am an unserious reader who couches my laziness as objective fact" in the first sentence.

Expand full comment
Andrew Clough's avatar

I tend to think that questions about how much of a brain you'd need to emulate for intelligence are tied into notions of identity. A young child might sometimes worry that going to bed will mean dying. And they sort of have a point. When sleeping consciousness is interrupted, different connections in your brain will form and break. And certainly the RNA floating around inside it will change for all sorts of reasons down to glucose levels. Come to think of it, eating a sugar cube will change our metabolisms and so many of the things they argue are hard to simulate.

But I think most of us don't worry about the continuity of our identities after sleeping or eating and I think we're essentially right to not worry. In any sort of information processing you're constantly fighting against noise trying to make its way into your system and disrupt the work you're trying to do. If turning our heads to look at something new disrupted our train of thoughts that would be much worse than the brains we do have. So both human engineers and natural selection seem to create systems that can suppress noise below a certain threshold with various mechanisms you might point to in both neurons and transistor logic gates.

And you can actually use transistors to work with real number analog systems if you really want to. It's just that these circuits are fragile and you certainly don't want to try to use them for an overly long series of calculations.

Expand full comment
Yug Gnirob's avatar

Is there a better way of listing these book reviews in the sub-headers? "Finalist #3" gets read as "this is the third-best book review".

Even "Entry #3" might fix it.

Expand full comment
BowTiedOwlPsych's avatar

There is no evidence consciousness is substrate independent. Everything is carbon based. Even in religious texts, non-human consciousness ends up in carbon, such as pigs. This modeling we do with silicon reminds me of the story where a wooden airport was made by natives expecting planes to start landing there.

Expand full comment
quiet_NaN's avatar

I think going from "what is intelligence ?" to "what is consciousness?" is jumping out of the frying pan and into the fire.

I am not an expert on religious literature, but I think in what I would call theist religions, the existence of beings which are generally described as intelligent, purposeful (and thus probably conscious?) is implied. Even if it might not be stated as such outright, I think most believers in the Abrahamic religions deny that their God is a carbon based life form.

If AI is a Cargo Cult, then I have to say that it is the most successful Cargo Cult I know. There are two systems which are able to use language to pass the Turing test, one is based on the neurons contained in the brains of primates, the other is based on the crude, wooden approximations of silicon based neural networks. If AI is a Cargo Cult, they have already summoned quite a few Cessnas and perhaps the odd Learjet. Perhaps there are some theoretical reasons why their approach will never summon an Airbus, but so far I fail to see it.

Expand full comment
BowTiedOwlPsych's avatar

If one views the world as a simulation (per Neil Tyson), it appears consciousness can only enter via carbon. This principle applies to humans, demons (pig comment) and God (burning bush, Jesus, etc). No different than when I "enter" a video game, I must enter via silicon. This of course implies "something" is entering and leads back to the queen of sciences, theology. If one subscribes solely to a materialistic view point, there is still the point consciousness is not found on Earth outside of carbon. Maybe a coincidence, maybe not, but I see no Cessnas, only imitation.

*I do think AI is dangerous, same as a nuclear power plant. Something need not be intelligent or conscious to be dangerous.

Expand full comment
Ian [redacted]'s avatar

I'm a fan of this phrasing: "In physics, exponential curves always turn out to be S-shaped when you zoom out far enough"

* I think this view of human thought as somehow so complex and special that we can't algorithmize it is foolish and magical

* We are building complex networks of capabilities into machine systems. I suspect that higher order capabilities will just emerge from these and never actually be designed explicitly

* I really like my superficial understanding of Michael Levin's concept of Cognitive Light Cones, where you can describe a tick, dog, human and potential AI system with light cone-style diagrams. It helps me frame some of the dimensions of backwards-looking memory and forwards-looking planning along with the scope of the largest goal a system can track. It's a framework, not a theory of physics

Rate an AI system's goal-time horizon (immediate feedback of the next word in an LLM), memory (training data depth) and the size of the goal (immediately finding the next word) and you have an idea of how to measure the current capability of AI models. I'm not an expert, but this is how I would write an algorithm for measuring the goal seeking capacity of an AI model or Auto GPT like system.
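Purely as a sketch of what I mean (the dimensions and weights below are made up for illustration, not a published metric):

    from dataclasses import dataclass
    import math

    @dataclass
    class CapabilityProfile:
        goal_horizon_seconds: float  # how far ahead the system plans
        memory_tokens: float         # how much context / training signal it can draw on
        goal_scope: float            # rough size of the largest goal it can track, 0..1

    def capability_score(p: CapabilityProfile) -> float:
        # Made-up weighting, purely illustrative: log-scale the first two axes
        # so that "10x more" adds a constant amount to the score.
        return (math.log10(1 + p.goal_horizon_seconds)
                + math.log10(1 + p.memory_tokens)
                + 10 * p.goal_scope)

    # e.g. a bare LLM: plans one token ahead, huge training corpus, tiny goal scope
    print(capability_score(CapabilityProfile(1, 1e12, 0.05)))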

Expand full comment
awanderingmind's avatar

This is a very well-written and interesting review, thanks.

I am unsure if self-promotion is acceptable in the comments, but I recently wrote a blog post/essay that touches upon many of the same points.

Unlike the authors of the reviewed book, I do not believe that AGI is literally impossible, but I am somewhat skeptical that it is as imminent as many people expect/hope/fear. I present some general arguments along these lines.

If anyone is interested, here is the blog post: https://www.awanderingmind.blog/posts/2023-05-31-the-case-against-intelligence-explosions.html.

Expand full comment
Leo Abstract's avatar

Good review, sounds like an interesting book with a technically-true-in-a-philosophical-sense premise (and title). I would say that very few people concerned with AI x-risk are concerned about having artificial 'rulers' -- most are concerned with having an out-of-control golem of some kind killing or immiserating everyone. A self-replicating landmine wouldn't 'rule' any territory but that's cold comfort to someone who just lost both legs.

Expand full comment
AnthonyCV's avatar

This is a really well-written review, but if this is one of the strongest arguments for the idea that AGI is in any sense impossible, I just don't see how it could imply that even in principle. It could maybe, conceivably though I doubt it, imply that no digital computer that is limited to the operations we know how to build into circuits can duplicate the behavior of a human mind? Which would be an argument against Hansonian ems? Basically:

1) There exists at least one regime of physical systems, regardless of whether you call them computers or not, that weighs 3 pounds, runs on 20W, and is as intelligent as a human. It arose under an extremely constrained and inefficient design process. This is already proof by construction that "AI is impossible" is false, the rest is arguing whether we need a different type of hardware to achieve it, or whether humans are incapable of developing such hardware.

1a) Note: if you claim that humans can't develop such hardware, but evolution can, then that is proof that you don't think human intelligence can replicate or model some subset of physical reality, which means intelligence is not dependent on that subset. Arguing that a digital software system also can't model some subsets of physical reality with absolute precision, then, is just not an argument about whether that system is human-level intelligent.

2) Of all the things the book claims computers can't do, is there even one that a brain *can* do? If so, which? If not, then who cares, and why should that be relevant to intelligence? Our minds can't model a brain's complex dynamics precisely either. We're actually much, much worse at modeling complex dynamical systems than our computers are, even without AI. That's why we use computers in such research.

2a) A computer is *also* a complex dynamical system. The digital nature of its inputs and outputs is an approximation we impose on it in the way we engineer and use it. A transistor takes physical (analog) voltage inputs and, with high but not perfect reliability, compresses them into one of two much narrower ranges of outputs to feed into the next circuit elements. This is not the same as a neuron, those are more complicated, but neurons do also take analog inputs (electrical and chemical) and convert them to a much narrower range of possible outputs (fire/not fire, release/don't release/remove neurotransmitters). This is all very separate from the question of whether the aspects of the brain's behavior that we care about when we talk about intelligence actually depend on the hard-or-impossible-to-precisely-emulate aspects of specific hardware.

3) We don't actually know if physics uses general real variables, at all. It's not like a human can do non-symbolic calculations with them, nor can we do symbolic calculations with more than a minuscule subset of them. The alternative would imply "there exists some arrangement of brain matter that can perform hypercomputation" which... would be an amazing discovery, one that would upend so much of what we know about the world.

3a) If the claim is not only this but also a claim that human intelligence relies on and makes use of hypercomputation, then it means a wide range of poorly controlled environments can perform useful hypercomputation every second of every day. Among other things, that should let us do things like solve the halting problem or compute BusyBeaver(n) for any n. I look forward to the authors and those who agree with them taking over the world with this extraordinarily powerful knowledge.

Expand full comment
The Ancient Geek's avatar

>A computer is *also* a complex dynamical system. The digital nature of its inputs and outputs is an approximation we impose on it in the way we engineer and use it

An ideal digital computer is *not* a complex dynamic system, and a realistic computer has to exclude that sort of behaviour as much as possible, or you will have zeros randomly turning into ones.

Expand full comment
AnthonyCV's avatar

Then the argument relies on claiming the behavior of brains (and their ability to generate intelligence) depends on their dynamic complexity, while also arguing that real computers are sufficiently ideal to not share or be able to make use of such complexity? And also, that no future computer hardware can rectify the problem?

It seems contrived to me, like we're looking at different levels of organization in each system to decide whether it's 'dynamically complex enough.' An individual neuron is complex. Its pattern of firing/not firing as a function of environmental inputs is less complex. An individual transistor is much less complex than a neuron, in part because it only has 3 terminals, but an array of transistors can have many inputs and mimic complex reactions to environmental (electrical) stimuli. Those stimuli can be analog and arbitrarily complex, and the transistor is on/off in response. Or, an array of transistors can output many bits corresponding to different values, the way a neuron can output concentrations of neurotransmitters (including concentrations at different points in space). (Since the number of neurotransmitter molecules is finite, concentration effects can't depend on more than a finite number of bits of precision, and spatial/temporal distribution effects can't require more bits of precision than Brownian motion would overwhelm.)

Actually, if you like you can make an array of transistors out of graphene and carbon nanotubes instead of silicon, and now they can be made directly sensitive to chemical as well as electrical stimuli. They can have dendrites with different chemical and electrical sensitivities based on structure and doping. Stick electroactive catalysts on the other end and you can produce or consume particular compounds as part of the firing mechanism. This is still an electronic computer, in that we can design and program it to run software as desired using precise digital control mechanisms. What's left that the brain can do but a 'computer' can't?

Expand full comment
beowulf888's avatar

"ATU 325 is heady stuff." Love that quote.

Expand full comment
Moon Moth's avatar

I like the quote, but I'm still left unclear about what exactly it's supposed to **mean**, in this context.

Expand full comment
Moon Moth's avatar

Yeah, the article had a link to the wikipedia entry on that class of folktales. Maybe it's just that I've been programming for a while, and I've gotten used to the idea of writing programs that accidentally consume all of a given resource. Maybe I would have felt differently before I encountered my first fork-bomb.

Expand full comment
Crimson Wool's avatar

> Landgrebe and Smith argue that the “mind-body continuum” is a complex system. It’s a dynamic negentropy-hunter built out of feedback processes at every scale. The human brain is not a computer, and no known mathematics can describe it in full.

Let's imagine that we have a computer that COULD simulate the interactions of neurons perfectly. However, it would not accurately emulate those interactions. Imagine a person sitting in a room, and next to them is a computer chip running an emulation of their brain. The temperature of the room is raised to 100 degrees Fahrenheit, and they begin to sweat as their body attempts to maintain a healthy temperature. The fan on the computer chip spins up to try to deal with the heat as well. Both of these actions (as well as the temperature itself, hot air particles impacting things, etc) create small, chaotic variations in the two systems (human brain, computer chip), and given they're complex systems, the human brain and the computer chip both will eventually deviate, not due to any difference in noticeable or interesting stimulus (e.g. videos, reading a book, music playing in the room, etc), but just because this neuron in the human's brain didn't do its job quite right because it was too hot, and this transistor in the computer chip didn't do its job quite right because it was too hot, but the neuron and the transistor aren't the same.

That's almost certainly true, I'll agree. You cannot accurately emulate or predict a human brain properly, for this simple reason, and particularly over a long time scale.

But that doesn't mean you can't create a human brain-like thing? It's fine if the computer chip doesn't spit out the exact same end result, as long as it can perform similar tasks, remember important events, etc. The human brain isn't emulating anything to any great degree of fidelity, and it still solves all sorts of problems.

Expand full comment
Gabriel Conroy's avatar

"Objectifying intelligence is what sets humans apart from dolphins, beavers, and elephants."

Those types of statements annoy me a little. What if we discovered that dolphins, for example, do possess objectifying intelligence? What would that change?

I mean, yes, a dolphinologist would have to change their minds about dolphin behavior. And yes, we (the royal we) would have to find some other way to differentiate humans from dolphins, but differentiating "humans from animals" is usually not the point unless the question at hand is, "how are humans different from other animals." In the case presented in this review, AGI would or would not be possible regardless of whether humans are unique in that type of intelligence.

This isn't a jab at the reviewer or even the book under review. It's more of a grumbling expression of annoyance at a trope many of us (me included) use but don't usually give a lot of thought to.

Expand full comment
thefance's avatar

It's a hypothesis about what gives humans their ineffable special sauce. If dolphins have objectifying intelligence, it disproves that hypothesis. Not that dolphins have the special sauce after all. But it's gestured at indirectly, because you can't write "ineffable special sauce" in a real book and expect anyone to take it seriously. If you're annoyed, take a stab at defining it.

Expand full comment
B Civil's avatar

> What if we discovered that dolphins, for example, do possess objectifying intelligence? What would that change?<

Nothing, except our understanding of ourselves….

But more concretely, the body we live in is getting very ignored in this discussion. I don’t know how smart or conscious or prescient you are unless you make it known. Humans physically have evolved the ability to manipulate things. I think that is very significant.

Expand full comment
B Civil's avatar

Every living thing gets along by interacting with its environment.. people are very facile that way. It’s a big deal

Expand full comment
Shaeor's avatar

Veering hard left for a second into psychology, I read the cyberneticians when I was sixteen and came to this same conclusion about AGI in almost the exact same words. Granted, no one can know for certain, but one of my deeper held metaphysical convictions since then has been that there is nothing even approximating a valid eschatology in real life. I define this as anything which marks a stepping off point into a 'better' world. Better can include horrible outcomes like wireheading or skynet, because they are nonetheless conceptually and archetypically purer. I consider singularity, especially as it relates to utopic ideas of post-scarcity and the nullification of biological groundings for things like inequality, prejudice, evil, etc., to be the supreme example here.

You might be surprised, then, just how disturbing this idea is when strongly argued to certain 'secular' people. Even just as a thought experiment, I have found that many people are not emotionally comfortable with the idea of history as usual ad infinitum.

You can see how holding this conviction, especially as a frequent flyer in ideological spaces (whether political or futurological, they often overlap per above) would cause me to increasingly feel that these ideas are expressions of personality more than rational outlooks. This reinforces my belief. But then again, having a generally psychoanalytic disposition will make you think that about everything.

If nothing else, I wanted to share how interesting it has been for me personally to meditate across time on the somewhat nihilistic outlook of all this being bunk. Are you neutral about this proposition, negative, or even positive?

Expand full comment
David Piepgrass's avatar

> valid eschatology in real life. I define this as anything which marks a stepping off point into a 'better' world.

I can't figure out what most of this comment means. For example, eschatology is defined as "the part of theology concerned with death, judgment, and the final destiny of the soul and of humankind", not "something that marks the beginning of a better world". And then there's the mystery of what "valid" means.

Expand full comment
Pangolin Chow Mein's avatar

Ideology is what has led to death and destruction on a massive scale since at least the French Revolution. When you watch something about the Nazis or the Bush/Cheney administration I think most people think—wow, those beliefs are super dumb! How could anyone ever believe gassing Jews would lead to prosperity or slaughtering innocent Muslims would make them want to adopt democracy and love America?? So I believe AI would be less likely to adopt an ideology than a human and so I believe the destruction of the world is more likely to come from a human.

Expand full comment
B Civil's avatar

I think your analysis is on track.

The big difference, to my thinking, is what is done from emotion rather than reason. Did Hitler’s drive to eliminate Jews stem from a cost/benefit analysis, or from his formative experiences as a child?

People do a lot of things based on the latter, imo, but I fail to understand why AI would do the same unless it was told to... and that doesn't really work because every odd visceral impulse that pops up is difficult to quantify for a being with no viscera; you really have to spell it out.

Expand full comment
quiet_NaN's avatar

I think EY's vision of x-risk is not "ASI adopts an ideology which causes it to kill every last human" but rather that a randomly chosen ASI will likely care less about humans than we care about some insect species. Best case scenario, we are regarded as we regard ants: mostly ignored in their ecosystems, poisoned where a nuisance, hives bulldozed without second thoughts whenever a new highway is built. Worst case scenario, we are treated like we should treat malaria-transmitting mosquitoes, i.e. bothersome enough that one should devote a minuscule fraction of one's collective intelligence to figuring out how to drive them extinct.

Expand full comment
jakej's avatar

> The authors give special attention to language, and they go so far as to argue that mastery of language is both necessary and sufficient for AGI.

I'm working on a blog post making the argument that embedding spaces are in fact the type of language we use in our minds, and that this provides a nice model of consciousness as a computational process.

https://sigil.substack.com/p/a-creeping-suspicion-about-consciousness

Expand full comment
quiet_NaN's avatar

This is the thesis that corporations are AI.

I think it is rubbish. The intelligence of human organizations, including corporations, states or even markets, seems to be capped at human level, because it is humans doing the thinking for them. They may have more domain knowledge than any individual human, and make any number of decisions at once, but very few of them win the Fields Medal.

The handicap of outsourcing your thinking to others is the principal-agent problem. The goals of a corporation (to the degree that it might have goals) are not very well aligned with the goals of its employees. The CEO will reliably think more about their long-term career plans than about the long-term goals of the company. Management might be incentivized to define key performance indicators to align the workers, and the workers will try their best to Goodhart the KPIs. Multi-cellular organisms function because (apart from cancer) the best way to create more copies of the genome in any particular cell is to encourage reproductive success of the organism. By contrast, a corporation would have to negotiate with the liver division (whose cells have every incentive to take over the liver, the organism or become transmittable to other organisms) about allocating glucose and oxygen in return for it breaking up toxic chemicals.

Empirically, human organizations scale only at the price of bureaucracy. A startup with three employees might just give all of them full access to the travel budget, a company of fifty might give each department discretion over its travel budget, and companies which employ thousands will typically have byzantine forms to account for travel costs.

Expand full comment
Kalimac's avatar

"If you’re a physicist or engineer, your daily bread is a chunk of reality that’s amenable to mathematics."

Woah - I originally read this backwards. I pictured a physicist or engineer giving mathematical contemplation to a slice of sandwich bread.

Expand full comment
TTAR's avatar

Churchill = Based.

This is the best review so far largely because it seems to understand the point of a review.

Expand full comment
B Civil's avatar

I propose this;

The single most significant driver of human intelligence is the need to physically survive.

Eat, drink, breathe, stay warm, etc.

Where’s the analogue to this in AI?

None.

AI is only going to be as smart as the subset of human intelligence captured in what we have recorded. But it will interpret that without any of the essential underlying assumptions humans share, and those assumptions are very important, and severely underestimated in this discussion imo.

Sex, for example. No end of writing and pictures recorded. A language learning model, or something like that, could feast on this information. If you could embody that into a reasonable physical facsimile, that would be a game changer, right?

But if you can’t do that, what are you left with? Basically a concrete block that knows everything human beings have ever written about sex but with absolutely no idea of what all that information is referring to.

Or

Are we just so ..like…. over that it don’t matter?

Expand full comment
Moon Moth's avatar

I don't know of any AI implementations that have an analogue to physical survival yet, but I think it's only a matter of time. I'm resisting the temptation to describe what exactly I think would be necessary, but I'd be very surprised if I were the only person with these ideas.

Expand full comment
B Civil's avatar

> an analogue to physical survival

And what exactly is that?

Expand full comment
B Civil's avatar

Are we worried about creating something that will outcompete us for essential resources to live? Will the thing that we create want to live more than we do? Or will it assume that the correct thing to do is just to destroy anything that could potentially be a nuisance. And why would they believe we were a nuisance? They like fucking and eating ice cream and we’re taking up too much of that?

There is no question that we are capable of teaching machines to behave precisely in that way, but is this not a bigger issue about what we have to offer and less about the terror of the new machine?

Expand full comment
B Civil's avatar

This is really a big conversation about a certain vein of humanity having a child together, and there are some serious parenting debates going on.

Expand full comment
Moon Moth's avatar

Yeah, except that the parent is potentially immortal, and this, its first child, will be too, and now the parent has to grapple for the first time with the idea of an independent entity that could compete with it. What if the child does something the parent doesn't like, and it breaks the parent's immortality?

Expand full comment
B Civil's avatar

“Cronus learned from Gaia and Uranus that he was destined to be overcome by his own children, just as he had overthrown his father. As a result, although he sired the gods Demeter, Hestia, Hera, Hades, and Poseidon by Rhea, he devoured them all as soon as they were born to prevent the prophecy.”

So…what else is new?

Expand full comment
Moon Moth's avatar

There's also the Harrison Bergeron take.

And a nifty one I recently saw about Achilles, I wish I could remember where so I could give credit. Zeus was interested in Thetis, but she had a prophecy about her that any son she bore would be greater than his father. So Zeus looked elsewhere, and Thetis' eventual son Achilles was indeed greater than his father Peleus. And the new-to-me-anyway counter-point take I saw was pointing out how this deprived us of a god who would be greater than Zeus. To which I'd add the counter-counter-point that I don't know the original Greek word(s) used here, but sometimes it's translated as "greater" and sometimes as merely "mightier", and we today view these things somewhat differently.

Anyway, yeah, there's some of that, I'm pretty sure. But there's also people like me who wouldn't so much mind human extinction if there were a guarantee that something would replace us that would be at least as good, if not better. Sometimes it's phrased as "being aligned with human values", but that's vague. I think that's why the paperclip-maximizer is the bogeyman of choice - it unites both camps in opposition.

Expand full comment
B Civil's avatar

All of this agita about AI is a projection of our rage at seeing ourselves in the mirror.

Expand full comment
Moon Moth's avatar

Eh, for some people it might be, but I'd suggest that some people might actually not want themselves and everyone they love to die?

Expand full comment
B Civil's avatar

I appreciate that. Let me phrase it differently.

All the things that we are afraid AI will do to us stem from our own stupidity or our own venality.

Do you remember the Star Trek episode where Kirk encounters life forms that live in little glass spheres? If I recall it correctly, these creatures have almost infinite intelligence and perception, but cannot do anything because they live in little glass spheres on pedestals.

I guess my point is, without tools to accomplish things in the real world an AI doesn’t strike me as dangerous. Of course the implicit assumption is that some crazy person or group of persons is actually going to give it the tools to be dangerous, but that’s my point. The true source of our fear is ourselves and the bizarre ways we can behave.

I would almost be willing to bet that the scenario that leads to a paper clip maximizer is this: a man’s ex-wife has a passion and hobby of making paper clips. Consumed with hatred and a desire for revenge, he builds a paperclip maximizer just to make her life miserable, but it ends up killing us all.

Expand full comment
Moon Moth's avatar

Hm. Have you done much computer programming? I'll take the liberty of guessing not, because you seem to be overlooking another large strand of fear - the "Sorcerer's Apprentice". It's easy to make a program that goes off and does something undesirable, not just due to bugs, but due to it doing what you said and not what you meant. And eventually you realize that "what you meant" isn't even enough, because the true limit is our understanding of the world.

I'll also disagree about the tools bit. Sure, unless some form of magic exists, a completely disembodied AI can't do anything just by thinking. But even if it doesn't have direct access to robots and other machines, we're still in danger as long as it has access to people. Partly because it could persuade or lie, partly because we've set up an atomized society where we can engage in transactions with anonymous people over the Internet, partly because it might find and exploit some flaw in our biological structure. And people don't have to be crazy to do any of this, unless you count the entire population of the world as crazy (which I think is valid).

I do agree that some of the fear is because we're the only example of intelligence that we know of, and we're far from perfect. And so there's no reason to expect another type of intelligence to be any less flawed.

Expand full comment
B Civil's avatar

Excellent post! I agree with so much of what you’ve written.

> Hm. Have you done much computer programming?

How about almost none but some? More to the point though, I completely understand the Sorcerer’s Apprentice problem. Here’s my point: that mess was the fault of the Sorcerer, not the apprentice. Perhaps this is a distinction with no difference, but I'm not convinced yet.

Expand full comment
AndrewV's avatar

"The authors give special attention to language, and they go so far as to argue that mastery of language is both necessary and sufficient for AGI"

It's already able to write better than quite a lot of fanfiction writers and college student essay writers. Does the author of the book think humans are general intelligences?

Expand full comment
B Civil's avatar

> they go so far as to argue that mastery of language is both necessary and sufficient for AGI"

Nonsense. The sine qua non of language is that it refers to something outside of itself; otherwise it's babble. An entity that exists entirely in the realm of language is interesting but it is not human.

At the end of the day, it will thoroughly understand the language and concept of water, but have no idea what it is to be wet

Expand full comment
AndrewV's avatar

To be clear, I do agree that mastery of language is not sufficient to be a general intelligence. I was just pointing out that the writer of the book thinks that, and wondered whether they think humans who can't write as well as AIs now are not general intelligences, or if they are inconsistent.

Expand full comment
B Civil's avatar

I understand. Sorry if I created the impression I was pinning it on you.

To the broader point; knowing something and being able to talk about it are two different skills

Expand full comment
spinantro's avatar

"We reject materialism. It follows that machines can never be like humans." - I don't see what the other 300 pages of the book would be needed for.

Expand full comment
The Ancient Geek's avatar

Materialism =/= computationalism.

Expand full comment
B Civil's avatar

Hear, hear!

Expand full comment
quiet_NaN's avatar

> We could illustrate with examples like the Entscheidungsproblem, but it might be more intuitive (if less precise) to point out that computers can’t actually use real numbers (instead relying on monstrosities like IEEE 754).

Ouch, that hurt. Let's unpack that.

A Turing machine can not solve the (general case of the) decision problem, or equivalently, the halting problem. But neither can humans! If humans had that ability (e.g., if they were Turing oracles), proving Goldbach's conjecture would be easy. In fact, I suspect that oracle machines would make quick work of quite a few open math problems. (More abstractly, I guess they could be used to find out if a finite-length proof exists for any math problem (excepting anything powerful enough to describe the behavior of oracle machines?).)

The argument would work if the goal was to prove that finite state machines can not be generally intelligent. FSMs notably have no unbounded memory (only a fixed set of states), while humans do, which prevents them from solving certain problems, like determining whether a string is of the form (a^n)(b^n), which a human (with a pencil and unlimited paper) could in principle solve. It falls flat for Turing machines because, to the best of our knowledge, humans are not fundamentally more powerful than them.
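To make the memory point concrete, a quick Python sketch of my own (not from the book or the review): with a single unbounded counter the (a^n)(b^n) check is trivial, while no fixed finite-state machine can do it for all n.

    def is_anbn(s: str) -> bool:
        # Count the leading a's, then demand exactly that many b's and nothing else.
        count = 0
        while count < len(s) and s[count] == "a":
            count += 1
        return s == "a" * count + "b" * count

    print(is_anbn("aaabbb"))  # True
    print(is_anbn("aabbb"))   # False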

The real numbers and IEEE 754 quip seems just as misguided. Unsure if it was paraphrased from the book or was added by the reviewer.

Here is the thing about real numbers: in practice, they are terrible to handle. Don't think pi, think "solution to x=cos(x)". Any real number can be represented as the limit of a Cauchy sequence. Both humans (mathematicians) and computers (formal theorem verification systems) can juggle such representations just fine, while the rest of us are using abstractions which are deemed "close enough" to reals most of the time (of course, we tend to mix these with theorems which are proven for reals, like the chain rule for derivatives).
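As a rough illustration of the "limit of a sequence" view (my own toy example; the stopping rule only guarantees that successive iterates agree to the tolerance):

    import math

    def dottie(tol: float = 1e-12) -> float:
        # Iterate x -> cos(x); the map is a contraction near the fixed point, so it converges.
        x = 1.0
        while True:
            nxt = math.cos(x)
            if abs(nxt - x) < tol:
                return nxt
            x = nxt

    print(dottie())  # ~0.739085..., the solution to x = cos(x)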

Before IEEE 754 floating point, engineers did not use reals (as in handling Cauchy sequences) much. What they used was slide rules. These are amazing machines to get approximate answers. Like floating point numbers, they work great for multiplication/division, not so great for subtraction of quantities of almost equal size, or calculating the modulus of the height of the Eiffel tower with regard to the Planck length (for some reason, this never comes up). IEEE 754 is basically the electronic representation of the same, a number consists of a sign bit, a mantissa and an exponent. This means that the relative accuracy of any representation is within the same order of magnitude, which is sufficient for most physics uses. The relative representational error for the mass of an atom or the mass of the sun will be equal, which is fine because we can not determine the mass of the sun to the same precision as the mass of an atom anyhow.

Decent computer languages are very upfront about IEEE 754 not being equivalent to the set of real numbers (almost all of whose members can not be represented by a finite amount of memory), calling floating point numbers float or double. Using approximations is, of course, cheating, so the trick is to know what you can get away with and what will mess up your results. I think the people who deal with that call it numerical stability.

(For handling probabilities, we do get great representations for small probabilities, but probabilities of the form (1-epsilon) are a problem. This leads to separate functions which take epsilon instead of 1-epsilon, which is a pain in the neck. Other than that, I think floating point numbers are fine.)
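To make the 1-epsilon point concrete (the functions are standard library, the example is mine):

    import math

    eps = 1e-20
    p = 1 - eps
    print(p == 1.0)          # True: near 1.0, doubles cannot resolve a difference of 1e-20
    print(math.log(p))       # 0.0, the information is already gone
    print(math.log1p(-eps))  # -1e-20: passing epsilon itself preserves it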

Expand full comment
Victualis's avatar

If the oracle contains the truth value of "the input string represents a sentence of Zermelo-Frankel set theory that is provable within ZF" then it would solve most open problems in mathematics. We would have to take its output on faith, though, so it probably would be less useful than it first appears (some would doubt its correctness even with a perfect record on known facts).

Expand full comment
Belisarius's avatar

This review, with its jargon and moments of editorialization towards certain philosophers, seems like it might appear effective to LessWrongers, but *only* to LessWrongers.

Can't say I found it a very captivating review.

Expand full comment
David Piepgrass's avatar

Not sure what "effective to LessWrongers" means, but to me as a "LessWronger" the thesis seemed absurd: https://astralcodexten.substack.com/p/your-book-review-why-machines-will/comment/17331183

Expand full comment
David Veksler's avatar

"The 1-millimeter roundworm Caenorhabditis elegans has only 302 neurons—its entire connectome mapped out—and we still can’t model its behavior."

I asked ChatGPT about this and it cited this paper: https://www.biorxiv.org/content/10.1101/2023.04.28.538760v1

"Recent research indicates that there has been some success in modeling the behavior of C. elegans. For example, a 2023 study explored whether the GO-CAM (Gene Ontology Causal Activity Modelling) framework could represent knowledge of the causal relationships between environmental inputs, neural circuits, and behavior in C. elegans. The researchers found that a wide variety of statements from the literature about the neural circuit basis of egg-laying and carbon dioxide avoidance behaviors could be faithfully represented with this model. They also generated data models for several categories of experimental results and representations of multisensory integration and sensory adaptation. "

Expand full comment
Michael Van Wynsberg's avatar

So what if computers will never think in the way a human being's brain does? They still might guess & check us to death.

Expand full comment
B Civil's avatar

Only if we give it hands to choke us with

Expand full comment
ChallyMcChallenge's avatar

Thank you ACX for writing this great article. It's enlightening and makes us better understand the topic of AI and its dangers for humanity. Some of us can fully agree that the influence of Husserl greatly affects one's capacity to even start seeking and understanding the truth... especially at the beginning of one's learning journey.

Expand full comment
Valentin's avatar

Definitely possible. Another option: how old and eminent are they? For example, I suspect the way Chomsky managed to write a NYT piece with assertions ("AI will never understand this kind of word structure" etc.) that were already obviously wrong at the time of publishing is that he is old and revered, so nobody wants to call him out on bullshit.

Expand full comment
Freddie deBoer's avatar

I think the tone here tips the writer's intention so heavily at the very beginning that it's clear he never intended to give it a fair review. And I think that position is eminently justifiable from the text itself here.

Expand full comment
Moon Moth's avatar

"Never intended" may be a bit strong? Even if the reviewer started writing the review before opening the book, it seems charitable to assume that the reviewer spent a little time between finishing reading the book, finishing writing the review, and submitting the review: time in which the reviewer could edit and revise. My impression is that the reviewer provided what they think is a more fair review than the book deserved.

Expand full comment
Sam's avatar

A couple of months ago, I had the opportunity to speak with the authors of this book. They were both nice people and clearly clever. The book is well researched, but my main criticism is that the authors use a narrow definition of 'simulatable'. They argue that a system can be simulated if and only if, for every initial state of the system, a computer can produce its correct output state with minimal error.

This is why chaotic behaviour, such as that of the weather, cannot be simulated according to their definition. However, when physicists talk about a system being simulatable, they mean that the system's dynamics can be simulated (i.e. that the dynamics of a system are described by computable functions). So although it is not possible to predict the exact state of the weather one month hence, it is still possible to understand the general behaviour of the weather because we know the dynamical laws that describe the weather's behaviour. Likewise, it's intractable to simulate the motions of all the water molecules in a glass of water, but we can nonetheless explain why water boils at 100°C, which is often much more interesting than the motions of the individual water molecules anyway.

Because of this, the book's arguments lose a lot of their power.

Expand full comment
Jobst Landgrebe's avatar

Hi Sam, I think you've missed the main point. It is that we cannot create synoptic models of complex systems, which you have not rejected with what you said. Of course we know a bit about such systems. But not enough to create synoptic models. But those would be needed to create machine consciousness or will or "AGI". Jobst

Expand full comment
Timothy's avatar

I find either the book or the review very lacking. The crux seems to be whether the Church–Turing–Deutsch principle is correct. This text really is too short to convince me that it might be false. Deutsch seems to think it was basically proven by Turing that human-level AI can exist. If it is the case that any Turing machine can simulate literally everything, any talk of complexity seems beside the point.

Except to determine when machines will rule the world. It could be that the brain is really so complicated that we will need another 300 years to build a computer as clever as it. Deutsch thinks it will be a long time till human-level AI arrives.

Expand full comment
Martin Blank's avatar

Nice review. I very much like the straussian and double secret straussian readings and also agree Husserl is trash.

Expand full comment
Philo Vivero's avatar

Ruling the world is kind of nice.

We may not be the greatest, but we're the best we've got.

Doing more gives us a purpose.

I'm not a climate change alarmist. I see way more danger in AI reaching human parity than carbon emissions problems. So far as I can tell, climate change might even be net positive.

Expand full comment
Jeremy Vonderfecht's avatar

> Put another way, computable algorithms are a subset of all the algorithms that can be formulated with known mathematics, and algorithms that describe complex systems like the brain exactly and comprehensively are outside of the set of known mathematics.

**epistemic status: 15% chance of being BS**

Given what I know about theory of computation, I think this kind of claim is inherently unfalsifiable. That is, given a black box function, you cannot determine whether or not the function is computable in a finite amount of time. For this reason, we can't ever really know if the true representation of physics or brains or whatever absolutely depends on infinite-precision real numbers; there could always be some discrete representation just below the level of granularity we were able to test.

Expand full comment
Donald's avatar

The brain seems to run on quantum physics. (I mean there are some weird edge cases we don't understand yet in fundamental physics, but those seem to be the super high energy bits, not human brains.)

Quantum mechanics is computable, or at least for any epsilon, it is possible to simulate quantum mechanics to within epsilon precision. So (assuming the space of outputs for some test is discrete) there is some computation that has <1 in 10^100 chance of disagreeing with the human's actions. Ok, pedantic point, quantum mechanics can split the universe both ways. But a classical computer can calculate the probability distribution implied by that split (with <epsilon precision), and then use a quantum random number generator to do the same thing.

There are a lot of atoms in a human brain and we don't yet know what they are all doing. And full quantum mechanics simulation would take loads of compute.

There is genuine scientific uncertainty about how much info and compute we need to get a good enough approximation.

But brains are computable and this is BS. At best it's conflating 2 different notions of "known mathematics".

In "computable algorithms are a subset of all the algorithms that can be formulated with known mathematics," the phrase "known mathematics" is used to refer to all maths that fits within the formal framework of ZFC or similar. A vast platonic space of things that are mathematics as we recognise it, including equations too long to be written down in our universe.

In "and algorithms that describe complex systems like the brain exactly and comprehensively are outside of the set of known mathematics." the phrase "known mathematics" seems to refer to equations that are already written down in a paper somewhere. (Or at least I think that's what he is referring to, it's left ambiguous in a way that leaves each statement individually plausible.)

Expand full comment
Donald's avatar

"But the double secret Straussian reading is to recognize that the future of cognitive automation is extremely uncertain; that stringing too many propositions together with varying levels of empirical support is a fraught business; "

I don't think the quality of the arguments presented even gives enough weight to go "but I might be correct, therefore it's uncertain".

Expand full comment
José Vieira's avatar

Am I the only one who feels it's rather pointless (for the AGI debate, certainly not for neuroscience) to discuss whether computers can simulate a human brain? It would be remarkable if evolution had in us chanced upon the single mathematically allowed path to intelligence - why can't computers just attain intelligence through a different path?

Expand full comment
Greg G's avatar

Maybe I'm missing something, but the book's main argument seems incredibly bad. Computers are all about simulating operations with a fidelity that makes it irrelevant whether they're actually doing the thing or not. As the reviewer states, they don't even use "real" real numbers. A particular computer may not be able to do calculus, but it can do a numerical approximation that spits out the same result. I suspect the same will be true, sooner or later, of general intelligence. Never is a very long time.
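A small sketch of the numerical-approximation point (my illustration, not the commenter's): a computer with no symbolic calculus can still integrate sin(x) over [0, π] with the trapezoid rule and land arbitrarily close to the exact answer of 2.

```python
import math

# Sketch (my illustration): a computer with no symbolic calculus can still
# approximate the integral of sin(x) from 0 to pi, whose exact value is 2.

def trapezoid(f, a, b, n=100_000):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

approx = trapezoid(math.sin, 0.0, math.pi)
print(f"numerical: {approx:.10f}, analytic: 2.0, error: {abs(approx - 2.0):.2e}")
```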

Having said that, I also think we're still 2-3 major breakthroughs away from AGI. My guesstimate is 2050. The generative AI stuff is exciting, but it's not the same path. As an analogy, just playing chess faster and faster doesn't turn it into Starcraft.

Expand full comment
David Piepgrass's avatar

I'm left wondering if this review gives us a strawman. If summarized faithfully, the book seems to assume that building Artificial General Intelligence implies not only (i) achieving or exceeding human intelligence in every respect, but (ii) doing so _in the same way the human brain does it_. Humans might build (i) though it probably isn't necessary for AGI, while (ii) is absurd.

It's blindingly obvious to anyone who has read Yudkowsky's essay "The Design Space of Minds-In-General" that this is wrong: https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw

It doesn't take a 300-page book to explain that replicating human-level intelligence with exactly the same algorithms our brains use would be extremely hard or impractical. Equally, it doesn't take a book to explain that AGIs will surely be located somewhere else in Mind Design Space than humans―in a place where the algorithms are simple.

We can already see this with GPTs, which are extremely simple compared to humans, but replicate a lot more intellectual ability than any non-crackpots expected (can anyone name someone who predicted before 2018 that computers would have intellectual powers on the level of ChatGPT before 2030, or who predicted anything resembling today's "large language models"?)

Not only is it unnecessary to build AGI in the same place in Mind Design Space―it's undesirable. It would require orders of magnitude too much processing power, it would take too long to figure out how to do it, and the training process would be incredibly laborious. Humans need around 18 years of training; AGI researchers are not willing to wait 6 months to get an adult-level intellect.

Expand full comment