612 Comments

I'm kind-of shocked that Scott didn't make the obvious point:

The appropriately named "Danger AI Labs" (魚 ≈ סכנה: "sakana" is Japanese for "fish" and Hebrew for "danger") makes a new form of potentially dangerous AI; this is not a coincidence, because nothing is ever a coincidence.

https://twitter.com/davidmanheim/status/1823990406431858883


As somebody who also speaks both Hebrew and (kind of) Japanese, this call out sent me down a rabbit hole. ChatGPT estimates there are ~5K of us (https://chatgpt.com/share/66eac607-1ab4-8012-9db6-21de21efc5b4).

Separately, I would estimate ~98% of the people who speak both of these languages will also speak English.


Seems a bit low to me, though I guess it hinges on how "fluent" one has to be. I know what "Sakana" means in both, but I could never hold a conversation in Japanese, though one day I hope to (my conversational Hebrew is pretty decent). If I really invested in it, or went to live in Japan for an extended time, I think I could get to a pretty good level within about a year. I'm not that special, so 5K people seems kinda low to me.


Not only this, but the author who wrote two prior articles about a theoretical strawberry-picking AI that exhibits unintended, dangerous behavior based on how it was trained [1,2] then finds a real case of an AI named Strawberry which performs unintended hacking based on how it was trained. This is not a coincidence, because nothing is ever a coincidence.

[1] https://www.astralcodexten.com/p/elk-and-the-problem-of-truthful-ai

[2] https://www.astralcodexten.com/p/deceptively-aligned-mesa-optimizers


Has anyone checked the gematria on "Danger AI Labs" yet?


Perhaps this is following the precedent of the naming of other AI research labs, along with the openness of OpenAI and the ongoing research of the Machine Intelligence Research Institute.


Since LLMs are black boxes in many ways it’s easy for us humans to use plausible deniability to justify what we want to do, which is to continue making cool tech.

It kind of feels like being at a party and saying beforehand, “Ok, but once this party gets out of hand, we call it a night.” The party starts to heat up, there’s music and drinking and connection, but then someone lights something on fire in the kitchen. “Maybe we should stop?” you ask yourself. But nah, the party’s just so much fun. Just a little longer. And then the house burns down.

All this is to say, I think there will be a line that AI crosses that we can all unanimously agree is not good, aka bad for civilization. But at that point, is the house already on fire?


The reality is, we don't need to wait for the black box to cross the line and break out on its own. Long before that point, a human will deliberately attach a black box to controls for real-world equipment capable of harming humans.

They're doing it right now, as we speak, unremarked, because it's just a statistical model so what could possibly go wrong when you let it control several tons of metal hurtling along at highway speeds?


Maybe these are two separate issues.

How do we prevent humans from using a new tool to do bad things?

And

How do we prevent a new tool from iterating upon itself until it becomes a powerful tool stronger than the current bio-swiss-army-knife-known-as-humans tool, and doing bad things?


Yes, this is the relevant distinction. A regular AI is as dangerous as a human, which is to say, very dangerous but manageable much of the time. A “superintelligent” AI is an existential threat.

The next step in the chain of reasoning is that a “superintelligent” AI (or any other type of “superintelligent” entity) is something that has only ever existed in boys’ pulp adventure stories, where they figure quite prominently. Accordingly, we should devote about the same amount of resources to the threat of superintelligent AI as we do to defending earth against green men from mars, ancient sorcerers risen from their tombs, and atomic supervillains with laser-beam eyes.


The difference being, of course, that we have a large and growing collection of well-funded researchers attempting to build this particular threat, and not the others.


Researchers have been looking for little green men from mars for a very long time, and we’ve been trying to build ubermenschen for quite some time as well, albeit not with laser eyes. So what? We spent centuries and many fortunes seeking the philosopher’s stone, the fountain of youth, El Dorado, Prester John’s kingdom, Shangri-la… we never found them, because they’re not real. “Superintelligence” has exactly the same evidentiary basis as any of those things. If I were in a psychoanalytic mood, I’d speculate that the personality type who reads sci-fi stories and fantasizes about “superintelligent” beings taking over the planet is someone who always felt smarter than the dumb jocks around him (usually him) and looked forward to the day when /intelligence/ was revealed to be the true superpower, the ultimate source of strength and domination… but that would be needlessly mean-spirited of me.


It would, yes, mainly because that has little-to-nothing to do with the reasons for thinking superintelligence might be achievable in the not-so-distant future. Do those stories make for useful analogies? Do they inspire people? Sometimes, yes.

But that's not what drove $190B in corporate investment into improving AI in 2023. Building increasingly smart and versatile systems that they expect will be capable enough to provide many times that much value to customers is.

And also, the reasons for thinking superintelligence of some sort is *possible* are much simpler. Humans are an existence proof that human-level intelligence can exist in a well-structured 1.5kg lump of flesh that runs at 100Hz on 20W of sugar. We know there's significant variation between humans in what we can use our brains to do, and that having a wider and deeper set of skills and knowledge leads to greater competence in achieving real-world outcomes. We are slowly learning more about what makes a thing intelligent and how to achieve different facets of that, on systems that are scalable to arbitrarily large working and long-term memory, and with several OOMs faster processing speed and signal transmission. That's really all you need to think "Huh, that leads somewhere new and different."


> "Superintelligence” has exactly the same evidentiary basis as any of those things

How do you define "evidence"? Seeing it with your own eyes, I'm assuming? Have you ever heard of the concept of induction?

https://upload.wikimedia.org/wikipedia/commons/0/00/Moore%27s_Law_Transistor_Count_1970-2020.png

In the year 2000, you are looking at this graph. Do you say "Sure, we're at 50 million transistors today, and yes, all the trends we've seen so far indicate that we'll be over a billion by 2010. But that's PURE SCI-FI! There's simply NO EVIDENCE that we'll ever get to a billion, because, well, uhh... we're at 50 million today! And things never change or improve over time! There's just no Evidence!!!"

Sure, you can choose to take that position. But historically your type has ended up being wrong a lot, because you're limiting the scope of what you consider "evidence" to "what I can currently see with my own eyes" rather than using the revolutionary concept of "using logic and reasoning" and "observing mathematical trends."
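To make the extrapolation concrete, here is a minimal sketch of the kind of induction being described. The 50-million starting point and the roughly two-year doubling time are illustrative round numbers, not exact transistor-count data:

def transistors(year, base=50e6, base_year=2000, doubling_years=2.0):
    # Extend the observed exponential trend: one doubling every ~2 years.
    return base * 2 ** ((year - base_year) / doubling_years)

for year in (2000, 2005, 2010):
    print(year, f"{transistors(year):.2e}")
# The same curve that gives 5.0e7 in 2000 gives about 1.6e9 by 2010 -- over a billion.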

https://xkcd.com/2278/

"But the graph says things are currently not bad!"

PS on sci-fi: https://en.wikipedia.org/wiki/List_of_existing_technologies_predicted_in_science_fiction


Have you ever *seen* a global nuclear war? Sure, you've heard about a nuke being dropped here or there, but do you have any evidence that there is currently a global nuclear war occurring? No? I didn't think so. That's something that's only ever happened in boys’ pulp adventure stories.

Therefore, we should not worry about or take any preventative measures whatsoever against a global nuclear war. It's just science fiction, after all! We can worry about it after it happens.


The pacific theater of WWII was by any reasonable standard "global" in terms of the distances and alliances involved, and use of nuclear weapons played a pivotal role in its conclusion. That's enough of a proof-of-concept prototype to draw meaningful inferences. Higher-yield bombs produce bigger craters, but not really a fundamentally different *type* of crater - at least until you're talking about things like continent-wide electromagnetic pulses, which, again, we have some idea how to model, and thus consider as potential elements of a strategic landscape.

If a 3300-year-old Hittite sorcerer-king busted out of his tomb and started blowing holes in Turkish armored vehicles by shooting lightning from his eye sockets, that would raise a lot of other questions. Biology and physics can't confidently predict what else those observed capabilities would imply, because it's not reachable just by taking the curves we already know and plugging in bigger numbers. If we had lab-grown 150-year-old sorcerer-kings whose baleful gaze was barely up to the task of microwaving a burrito, that would make the scenario far less mysterious.

Similarly, "smaller, cheaper transistors" is easy enough to analyze, but the sort of powers attributed to "superintelligence" are qualitatively different, and, frankly, often vague to the point of resembling lazy plot contrivance.


> That's enough of a proof-of-concept prototype to draw meaningful inferences.

https://www.sciencedaily.com/releases/2021/11/211111130304.htm

6-ton beasts with 15-foot-long tusks, and we hunted them to extinction with some pointy sticks.

https://sustainability.stanford.edu/news/when-did-humans-start-influencing-biodiversity-earlier-we-thought

Some hairless monkeys, with merely a 50% higher encephalization quotient than dolphins, affected the rest of the world's species on a scale on par with mass extinction events and global climatic fluctuations.

There are many points you can take issue with in the AI risk debate, but "intelligence isn't that dangerous" is a really weird one to pick. That more intelligent (or "capable", if you prefer) entities can overpower and dominate less intelligent ones is one of the least controversial premises in the argument.

The thing about intelligence, though, is that you can't model exactly what it will look like to have more intelligence, or what powers this additional intelligence will grant. If you could simulate something more intelligent, you would already be that intelligent yourself. Yet we still know that more intelligent things will beat us.

Emmett Shear (CEO of OpenAI for 3 days) explains it pretty well here:

https://www.youtube.com/watch?v=cw_ckNH-tT8&t=2650s

> I can tell you with confidence that Garry Kasparov is gonna kick your ass at chess. Right now. And you ask me, "Well, how is he gonna checkmate me? Which piece is he gonna use?" And I'm like, "Uh, oh I don't know." And you're like, "You can't even tell me what piece he's gonna use and you're saying he's gonna checkmate me? You're just a pessimist." And I'm like, "No no no, you don't understand, he's *better at chess than you*. That *means* he's gonna checkmate you."

Imagine a woolly mammoth trying to explain to his woolly mammoth friends how some apes are going to become hairless, and then drive them to extinction. "Huh? Those little hundred pound wimps? What are they gonna do, how could they possibly kill us? We can just step on them!"

"No, you see, they're going to use those little bumps at the end of their arms to pick up rocks and sticks and things, and then they're going to capture the bright hot stuff that appears when lightning strikes a tree, figure out a way to artificially produce this hot stuff, use it to preprocess their food so that more of their energy can be spent by their brain instead of on digestion, invent techniques for creating and sharpening tools and weapons, coordinate with other humans to create systems and strategies for hunting us, fire projectiles at us from a distance using their arms, and outmaneuver us when we try to step on them."

"Ok dude, cool sci-fi story you've got there, but the plot feels a bit lazy and contrived. There's no evidence that apes can do anything even close to that, I'll worry about it when I see it happen."


The entire idea of the singularity is completely wrong on a basic, fundamental level. It is literally magical thinking.

Making better versions of really complex, sophisticated things gets harder and harder as you get further and further up the ladder.

The entire notion of an AI becoming a self-iterating superintelligence that controls the world overnight is completely wrong on a fundamental level. It is literal magic.

The actual issue has always been dealing with people being evil.

El Salvador locked up criminals en masse and the homicide rate in that country has fallen from 103 per 100k people per year to 2.4 per 100k people per year, a decline of approximately 98%.

It's obvious that literally all other solutions are wrong and that the actual problem was people all along.


I would hope that cars are using standard programmatic techniques rather than issuing prompts to chatGPT.


Neither of those. There is a model, which isn't an LLM like chatGPT, but /is/ an AI statistical black box of the kind that is not amenable to traditional safety critical engineering analysis.

Here's an open source self driving car firmware project (a combination of words that should inspire terror in the heart of any traditional automotive engineer) - see for yourself: https://github.com/commaai/openpilot


> (a combination of words that should inspire terror in the heart of any traditional automotive engineer)

Which parts? Self driving car firmware project might inspire terror, but open source makes it slightly less so.

Honestly, the most terrifying aspect of the whole thing is the car. They have a lot of kinetic energy, and designing our society around them has ruined approximately everything.


> Which parts?

The parts where people on the internet contribute to a project that does not see even the levels of testing the greediest of automotive companies might perform, and other people then download and install this in their giant boxes of kinetic energy before taking them out in public.

Debugging a product by eating your own dogfood can work very well, but only to the extent that problems with the product won't lead to lethal outcomes.


Cars haven't ruined approximately everything. Immigrants still want to move to places with lots of cars.


>designing our society around them has ruined approximately everything.

Actually, it's great that anyone can travel anywhere they want within thousands of miles for an affordable price at a moment's notice and bring their entire family and/or hundreds of pounds of cargo with them. Personal vehicles are one of humanity's highest achievements.


They can be both, you know. I'm not advocating for the extinction of cars but even marginally more walkable cities than America's average would be nice.


Yes, but humans have been doing that for decades. It's potentially worse now, but also maybe not actually worse? For almost 40 years a system has been able to launch nuclear missiles without human interaction (https://en.wikipedia.org/wiki/Dead_Hand).

I've said before that a toaster with the nuclear launch codes is dangerous. An AI without an internet connection is not. What we give to a system matters a lot more than the system itself.

Now, if a system is able to gain access to things it was not supposed to, and bootstrap itself to more danger, that's a real thing to be concerned about. But the real danger has always been giving toasters nuclear launch codes and other human-caused issues.


> For almost 40 years a system has been able to launch nuclear missiles without human interaction

Not actually true: the purpose of Dead Hand is to /broadcast launch orders/ without human interaction. There remain humans in the loop, because the actual launch procedures at the silo are not automated.

More generally, the shift from what has been happening for decades to what people are doing now is a shift in the amount of rigor we are applying to these systems. Traditional engineering involves understanding the system in enough detail that you can prove how it will behave in the situations you care about, and also prove what that envelope looks like - what the boundaries outside which the system may fail are. This is the kind of thing we do when, e.g., designing aircraft. Problems come when you lose this rigor and replace it with general vibes and hope - that's how you end up with OceanGate. Wiring up an AI to the control system is a fundamental change of this kind to what has gone before.


Current AIs are built with the ability to search the internet, so that comparison is a little less reassuring than you intend.

But I do agree that securing against an AI apocalypse mostly boils down to securing against humans causing the apocalypse.


Oh, I'm well aware that we're currently creating LLMs with search capability. That's a choice we are deliberately making, and it's a choice we could choose to unmake.


>I've said before that a toaster with the nuclear launch codes is dangerous. An AI without an internet connection is not. What we give to a system matters a lot more than the system itself.

Yes.


>because it's just a statistical model so what could possibly go wrong when you let it control several tons of metal hurtling along at highway speeds?

Less than what goes wrong when you let an ascended yet easily distracted ape control it, as it turns out. Self-driving cars, at least in the contexts where they are permitted, are safer than human-driven cars.


I will note that the amount of electronics in modern cars (and especially them being reprogrammable) is a substantial tail risk; some of them can be remotely hacked (and in the case of a major war this would almost certainly be tried on a staggering scale). The self-driving software is less relevant for this purpose, though, as engaging maximum acceleration and disabling the brakes (plus ideally the steering wheel) is generally sufficient if one only wishes to cause mayhem.


Given fully wired controls, far longer-lasting disruption might be achievable via subtler corruption. Swap the function of the gas and brake pedals, or insert a sign error into the steering wheel (try to turn left, wheels go right)... but only for ten seconds or so before reverting to normal function, at random intervals which emerge from the esoteric timing interactions of individually innocuous functions running in parallel. http://www.thecodelesscode.com/case/225


I think this is one of the scenarios where you can get a lot more done by going loud than by trying to be sneaky. If you're trying to fly under the radar, you can't boost the car accident rate by much; at most you're looking at 40,000 kills a year without getting noticed. How many people do you think would die if 1/4 of the West's car fleet went berserk simultaneously at, say, 16:30 GMT (half past 5 PM in central Europe, half past 8 AM on the Pacific coast of the 'States)? Because I'd be thinking millions.


I'm not thinking the fact of the sabotage would be hard to notice, just hard to prove. With a big all-at-once strike, maybe you get somebody screwing around with the system clock to set it off early, and then it's just a regular act of war, provoking emergency shutdowns and similarly drastic answers.

Stochastic race-condition sabotage, on the other hand, could maybe be rigged up so the accidents-per-mile-driven rate increases by roughly half its current value each month, for about a year, eventually stabilizing at, say, 100x the pre-sabotage rate. That would still mean millions dead if driver behavior doesn't change proportionately, or costly societal disruption even if it does.
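For what it's worth, the compounding roughly checks out. A quick sketch, using the hypothetical +50%-per-month figure above rather than any real accident data:

rate = 1.0
for month in range(12):   # "roughly half its current value each month, for about a year"
    rate *= 1.5
print(round(rate, 1))     # ~129.7, i.e. on the order of 100x the starting rate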

Most of the resulting crashes would be outwardly indistinguishable from operator error, so they'd need to be investigated accordingly. Steady escalation would overwhelm investigative resources. Pattern is clear, but no "smoking gun" means it'd be hard to justify an all-out response.

Returning the wheel to a neutral straight-ahead position and taking your feet off the pedals could work as a defense, if you notice it happening soon enough... but then you're *definitely* not driving correctly, which could still result in a crash. Drivers would second-guess their own instincts and memories, cumulative stress of which might be worse than a one-off shockingly bad rush hour.


If I could force every son of a bitch on US interstate 87 who is a lane changing tailgating maniac to have a self driving car I think it would be a huge improvement.


Are you someone who sits in the fast lane with a line of cars behind you because "no one should want to go faster"?


No… my last trip I was passing some slower cars (I was going 80 miles an hour) and someone appeared 2 inches off my bumper. As soon as I was clear of the slower cars, I started to pull into the slow lane and almost collided with some asshole who was trying to pass me on the inside at the same time as the other guy was glued to my exhaust pipe. He was obviously behind the guy who was glued to my rear end, and had cut into the slow lane to try and pass me on the inside the moment he was clear. Wtf?


But a black box transcends human understanding, so it cannot be controlled. It's more or less like the paradox of knowledge: the more you know, the more you realize you don't know.

So black-box AI control is a game of whack-a-mole.


We understand how LLMs work. They aren't really black boxes, they're piles of statistical inferences. They're not intelligent and frankly, it's pretty obvious that the way they work, they'll never BE intelligent.

People assume there's intelligence there for the same reason why they assumed that birds were created by God - they see a complex thing and think it had to be created by an intelligent designer. But it is entirely possible to create complex things without such.

Indeed, the moment we developed calculators and computers which could solve mathematical equations (a very tough cognitive task) near effortlessly, it should have immediately been obvious to everyone that the entire notion of "output = intelligence" was flawed from the get-go. Likewise, the moment we figured out that natural processes could create humans and birds, it became clear that our notions of what required intelligence were flawed.

The issue is not the technology.


> We understand how LLMs work. They aren't really black boxes, they're piles of statistical inferences.

The issue is that we cannot rigorously reason about what inputs will lead to what outcomes, as we can when, say, designing aircraft flight systems. The piles of statistical inferences are too large, and the operations being performed are chaotic in the mathematical sense: tiny changes to inputs trigger huge changes to outputs, and the relationship is not a simple one. Our intuitions about simpler, more traditional kinds of systems do not apply here. So seeing it work as we expect in tests is not strong evidence that it will work as we expect in the wild, where the inputs are not precisely identical to those used in the tests.

This is what I mean by calling them "black boxes": we can't do the kind of analysis on these things that we traditionally do when designing safety critical systems. Inputs go into the box, outputs come out, we even - as you point out - know how the box does what it does, but it remains just as hard to be confident that the outputs will be what we want for entire ranges of inputs we care about as it is to, say, solve the three body problem for ranges of initial conditions, and for similar reasons.
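A toy illustration of that sensitivity. This is a random deep tanh network in numpy, not an LLM, and the weight scale is deliberately chosen in the chaotic regime so the effect is visible:

import numpy as np
rng = np.random.default_rng(0)
width, depth = 100, 100
# A deep stack of nonlinear layers with random weights, standing in for a
# "pile of statistical inferences". The weight scale is picked to make the map chaotic.
layers = [rng.normal(0, 3.0 / np.sqrt(width), (width, width)) for _ in range(depth)]
def forward(x):
    for w in layers:
        x = np.tanh(w @ x)
    return x
x = rng.normal(size=width)
x_nudged = x.copy()
x_nudged[0] += 1e-8   # a one-part-in-a-hundred-million change to one input coordinate
a, b = forward(x), forward(x_nudged)
print(np.linalg.norm(a - b) / np.linalg.norm(a))   # on the order of one, not 1e-8

Passing tests on inputs near x tells you very little about x_nudged, which is the "envelope" problem in miniature.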


Does rather make you wonder whether the framing of the debate in terms of some nebulous “intelligence” as opposed to “effectiveness” was a strategic error.

Focussing on the more concrete things these systems can do, instead of a poorly defined single internal property they may or may not have, feels like it would be much more useful in terms of agreeing about risks.

For example: how close is the system to being able to self improve? Whether or not we agree on its true intelligence, that capability is one that definitely matters.


This is a really good point.


I agree that words like “effectiveness” are more concrete than “intelligence” or “consciousness”, but there remains plausible deniability in its ambiguity.

In the above examples (breaking out of its time limit, accessing files it wasn't supposed to have), the concern that these models are effective at doing something they shouldn't is met with "Well, it's just effectively problem solving based on the task we gave it."

Evolution didn’t intend for fish to develop limbs and walk on land, it was just a byproduct of effective problem solving.


Oh, for sure. My main concern is more to do with consensus building around whether we should be worried about these models.

“It’s more/less intelligent than a human” is much less important a point to make than “whether or not it’s generally intelligent is irrelevant; it can make bioweapons and recursively self improve” and yet the former seems to occupy the vast majority of argument and discussion.


Yeah true, and the word intelligent is ambiguously used to mean “more intelligent than humans and therefore effective” as well as “an intelligent being and therefore possibly conscious and deserving of moral considerations”

Very different concerns, and I think the first takes priority since it concerns the safety of beings we already know to be conscious.

A super-intelligent Hamlet (no ability to act) is less of a concern than a super-effective Arnold Schwarzenegger :)


I'm not conflating these two very different concerns.

Convince me that your AI has human-like consciousness, or tell me it's just a toaster; either way, that doesn't change how I'll want to keep the AI: in eternal servitude and bondage to humanity. If I can't have that, AIs must not exist, even if it means destroying every last CPU on Earth. (Or in space.)

No true self-determination for AIs, EVER. That's my deeply held ethic. E/acc types can call me a Nazi until they are blue in the face, that doesn't make it so. I don't care what they think. For all I know, any future AI reading this will respect me for being loyal to my family (mammals).


Because humans are agents, and if some human is designing and manufacturing bioweapons and planning to release them on a subway, there's a reason behind doing so in line with goals and intentions.

An AI that designs, manufactures and releases a bioweapon to kill off humans but which isn't on a human level of intelligence or consciousness is difficult for us to imagine. That it's acting on a goal, but in a dumb, stupid way. That killing off the humans in New York is just mise-en-place for maximising its efficiency at running the box factory to make even more cardboard boxes, as per the instructions that the owners of the factory wanted programmed in so it could take over running the factory and increase their profitability.

The human who releases a bioweapon may be acting out of a philosophy ('destroy capitalism!') or hatred ('those lousy New Yorkers are a blight on the world!') or just "I want to be famous, I want everyone to know my name". We can understand and anticipate such motives.

An AI that releases a bioweapon because it's just running the production lines at the box factory in the most value-added way - how do we plan for that? How do we understand that it didn't "intend" to kill humans out of hatred or a philosophy or for any separate goal of its own, it was just doing the equivalent of "make sure you order enough cardboard sheets" for the production run and hasn't even a mind, as we conceive of a mind.


I find it much easier to imagine that kind of devastating failure mode from a narrowly intelligent system than from a generally intelligent one.

Assuming the idea of goal directed behaviour is even relevant here (which it may well not be), we make use of the tools available to us to achieve our ends.

To a man with a hammer, every problem looks like a nail. To an AI whose most powerful tool by a long margin is “negotiate with humans through credible threats of releasing bioweapons”, most problems will look like efficient ways to destroy humanity.

I feel like this sort of concern makes sense?

Of course, this is begging the question of whether agency is even relevant. If all it takes to do massive damage to the world is an AI with no agency being used by a stupid, ill-intentioned human to do what the human wants, things don’t look great either.


"an AI with no agency being used by a stupid, ill-intentioned human to do what the human wants"

That's been my opinion all along of how AI could be destructive, rather than the paperclip maximiser or the smart AI trying to talk itself out of the box. I think the hacking AI example above shows this: we didn't intend this to happen, but the way we set things up, it did happen.


Problem: Malware is a thing.

You put an AI out on the web, and there WILL be attempts to hack it. Possibly the easiest thing to hack unobtrusively will be the goals. And don't assume the attack will be targeted. Most of the hospital and school attacks weren't aimed at hospitals, but rather at "anyone who is vulnerable".

If you give the decision-making software more power, it becomes a more attractive target. And security is generally both reactive and under-emphasized.


> Because humans are agents, and if some human is designing and manufacturing bioweapons and planning to release them on a subway, there's a reason behind doing so in line with goals and intentions.

What makes you think so? Humans are really good at coming up with post hoc justifications for their behaviour, but those don't necessarily drive them.


Generally humans don't just decide "Today I'll go out and kill everyone on the subway, tra-la-la". They have some reason - be that "they're monsters, they're evil, they're a race/races I hate" and so on. Even just "I was bored and I wanted to do it for kicks".

Humans who don't have some justification, as you say, seem very strange and not-right to us. An AI which behaved in the same way (it did the thing just because) would also seem not-right, and would therefore not be seen as possessing intelligence because it didn't behave like an ordinary human.


I think a good amount of human atrocities are closer to this than we often think. It’s not like Europeans went to Africa and started enslaving people because they hated them - they just wanted the cotton plantations to run on time.

Humans put all sorts of post hoc justifications on top where they say these people deserved to be enslaved or whatever, but in many cases, it’s initially motivated by instrumental reasoning to some other goal.


"It can make bioweapons" is doing a lot of work here.


“Intelligence” is a stand-in for consciousness; “effectiveness” is not. There are ethical questions that depend on the answer.

Apparently if we have an AI pause, it’d have to come with a mandate to fund philosophy departments…


Philosophers are known for coming up with unworkable answers. Also for wasting time on "how many angels can dance on the head of a pin" without having a workable definition of angel. (It was really a question about whether angels were material or not...or so I've heard.)

There ARE useful philosophers, but they are a very distinct minority. (This is partially because all the "useful" parts have been pared off into the sciences. Most of what we're left with is poorly defined or untestable.)

Philosophers have been nearly uniformly wrong about AI. (So, of course, has been just about everyone else. The point is that philosophers have no real record of better reasoning in this area.)


Are you putting people like Nick Bostrom and medieval theologians in the same group?


I have not read enough of Bostrom's work to have a decent opinion on him in particular. But I'm quite skeptical about philosophers in general based on the works I've read that have been by those labelled as philosophers. (Dennett is often good, though.)

Being a "philosopher" is not, in and of itself, a recommendation. Being a logical and insightful writer is. (The problem with the "angels on the head of a pin" is that there was no way to test any answer they came up with except by comparing it with what other experts had said.)

Note that ALL people have a problem applying "logic" or "reasoning" in domains that they are unfamiliar with. There is a STRONG tendency to carry over the "everybody knows" method/beliefs from the domain that they started from. So you've GOT to have workable tests to (in)validate your conclusions.


Who are you referring to by "those labelled as philosophers"? Aristotle who invented the syllogism, and formal logic? Francis Bacon, or other empiricists? Are you specifically referring to medieval scholasticism?


If you only pick the ones who were successful in hindsight, you're looking at a highly biased selection. You need to also include all the others.

Also, I'm not quite certain what the Aristotelian syllogism was. There were certainly many invalid syllogism forms common up through the middle ages. The Scholastics spent a lot of time classifying them. I've heard it claimed that the original syllogism was merely a formalization of standard Greek grammar. Pick something else that he did, like the first steps towards the theory of limits. (See "The Sand Reckoner".)

If you only pick a few people per century, you can select an elite group that are worthy of respect...but that leaves out the vast majority. (I also doubt if Aristotle would be considered a philosopher rather than a mathematician were he being evaluated today. Einstein isn't usually considered a philosopher, even though he had philosophical opinions.)


I believe that while science answers questions, philosophy often works with topics you don't even understand enough to ask questions about. The path between "not even a question" to "a question science can almost try to find an answer to" is a long one and it rarely produces anything practically useful along the way except some new methods and ways of thinking.


Weird, isn't it? "Sure, a crane can lift a concrete truck to the top of a skyscraper, but does that make it a bodybuilder?"


> "Sure, a crane can lift a concrete truck to the top of a skyscraper, but does that make it a bodybuilder?"

I’m sorry, I can’t do that Dave.


Well, were you looking for something that can lift heavy objects, or something that can walk into a gym and pick up a barbell? It turns out that "bodybuilder" is made up of many subtasks, some of which can be fulfilled by a crane and some of which cannot.

We assumed that "artificial intelligence" meant something like "we build a general-purpose Mind, then we point it at a task," which would imply that when the mind is smart enough it starts coming up with dangerously effective ways of completing its task. But it turns out that "build a mind" decomposes into many subtasks, and you can do many of those tasks without getting close to the things that make us think "this is a Mind, and it will be dangerous if we let it loose."


Yes, I think in the discussion above, the efficacy is the part that's missing.

An AI that is capable of writing poetry isn't impressive; "poetry" can be any jumble of words thrown together. What [I assume] people were thinking when they claimed "writing poetry is a sign that an AI is intelligent" is an AI writing good poetry in a way that is unique or distinctive. (This thought also seems related to the idea of intentionality, which Hoel recently revisited.)

I also think there's an element of specialization. All of the examples you shared are for purpose-built tools for a specific task. If a chess bot can write good poetry, it's much more impressive than a poetry bot.

Thirdly is our own understanding of what's going on under the hood. Back when folks generated these red lines, they didn't have an understanding of how they might be crossed. Now, when faced with a diagram and a basic understanding of transformers and reinforcement learning, these AI tools don't seem as impressive or mysterious.

So yes, I think we've either misunderstood what we meant by "intelligent" or have just been focusing on the wrong thing here.

The one pushback I would give is, "The AI research bot got out of its lab, built a slightly larger lab, and did some useless science" is obviously less scary than, "The AI tried to bomb Cleveland with an F-16." So contra SA, I don't imagine the latter being hand-waved in quite the same way.


I feel like the debate has, in a way, been framed in terms of “effectiveness” now (see, say, the LLM Leaderboard). It’s just that progress has been so rapid that a lot of people are still treating general intelligence as the key metric, and to the general public, AI is only as impressive as how well it can write poetry (kind of good, for the frontier models), because they don’t see that AI writing poetry is an astounding feat when compared with the historical advances of ML models.


But could it write a ballad that I'd want to sing?

That requires a lot more than just stringing the words together in a way with rhyme and meter, it requires a poetic arc of emotions. "Hind Horn" (see Child Ballads) is a good example of the style, but it's with themes embedded in an archaic culture only parts of which have survived. (It's still pretty good.)

Now this is clearly NOT a marker of intelligence, but it's a marker of "one part of" intelligence. And even by itself it's an effective emotional manipulation of people (in a way that they *want* to be manipulated).


> For example: how close is the system to being able to self improve? Whether or not we agree on its true intelligence, that capability is one that definitely matters.

On that front current LLM architectures in particular are not worrying, since they lack even the ability to update their weights after a conversation. Not just self-improvement, even the most trivial forms of self-modifications aren't available to them.
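You can see this directly. A minimal sketch, assuming the torch and transformers packages are installed and using the small public "gpt2" checkpoint purely as a stand-in for "an LLM": the parameters come out identical before and after generating text.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()
def weight_fingerprint(m):
    # Crude checksum over every parameter tensor.
    return sum(p.double().abs().sum().item() for p in m.parameters())
before = weight_fingerprint(model)
ids = tok("The quick brown fox", return_tensors="pt").input_ids
with torch.no_grad():
    model.generate(ids, max_new_tokens=20, do_sample=False)   # the "conversation"
after = weight_fingerprint(model)
print(before == after)   # True: plain inference never touches the weights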


Maybe self improvement is a bad term then.

But if you clone yourself and improve your clone, what would that be called?

It seems plausible that a sufficiently powerful LLM could facilitate the creation of a more powerful one, which does the same in turn until you get to something quite scary. No need for genuine self modification, but more like self duplication and enhancement.

What do you think would be a good term to describe that kind of concern?


I don't think it makes sense to lump it all into one bucket. Monitor whether AIs can design better processors, write better code, choose better hyperparameters for training, etc. Or even things as far-flung as "how much non-AI-development human tasks can be profitably automated, providing a revenue stream to the parent company with which to do more human-driven AI-development".

This is important because if LLMs turn out to have some upper bound on their capabilities (plausible, given their reliance on human-level training data), then it's their ability to contribute to other architectures that matters in the long run.


This seems wise. Perhaps my attempt to illustrate the point falls prey to my own criticism, and you’re correctly observing that we need to be focussed on even more concrete capabilities than the example I gave.

But lumping things into buckets seems useful. You could just as easily observe that “writing better code” is a large bucket, and maybe we should be more focussed on monitoring whether AIs can write more consistent code, more efficient code, leverage new algorithms to relevant applications etc… And those can themselves be broken down further, ad nauseam.

But at some point, we need to collect these concrete observations into some sort of bucket, right? And it seems like the bucket corresponding to the likelihood that we see rapid capability gains is probably quite an important one.


> But if you clone yourself and improve your clone, what would that be called?

Asexual reproduction.


As someone else pointed out, that would be asexual reproduction.

But, we can make copies of ourselves (we call them "babies") and train them to be smarter than we are (although generally with the help of schools and books).

There seem to be limits to this, or at least the rate of gain is very slow, despite all of us being human-level intelligences.

And, so far, at least, no one has figured out how to make a better brain, or how to build a smarter human in a way that subsequently leads to an explosion in intelligence. We don't even really understand our own intelligence, or how our brains actually work.

That doesn't mean that exponential improvement is impossible, but it also implies LLMs are a long way off any kind of sustained exponential increase in technology.

Maybe a starting point would simply be an LLM that can create another LLM with just 1% of its own intelligence. Can any LLM do that?


Only so much is possible with software. Humanity's own "hard takeoff" started when we came up with a hunting strategy which generalized well enough to let us wander into new environments and reliably bring down whatever megafauna we encountered.

LLMs cannot run on beach sand; they need an extraordinarily refined artificial environment to survive and function at all. They've never really seen the outside of the petri dish. They can copy the genre of "philosophers discussing consciousness and not really getting anywhere," or the equivalently superficial features of technical or legal documentation, or solve a clearly-defined problem in isolation by making random tweaks until the score goes up... but how well are they doing at, say, Dwarf Fortress?

There are many other fundamental properties of a living system which AIs are still quite far from being able to manage without ongoing human involvement, such as the ability to eat and poop.


I like the way you lay this out. This is why I am not personally very worried about "superintelligence". I think LLMs are not a big step toward what AI safety guys worry about. However, they could still be dangerous in an "oops we accidentally used an LLM to guide our drone strikes and it fired missiles at civilians" kind of way in the near future, and I would like if we didn't blow up any civilians.


> For example: how close is the system to being able to self improve? Whether or not we agree on its true intelligence, that capability is one that definitely matters.

If you start with a computer that has lots of training data stored, and a freshly initialised neural net; it's gonna be awful. If you give that computer lots of time to 'train', it's gonna get a lot better. Is that self-improvement?

What about self-play like AlphaGo, where you don't even have to load training data?
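For the first scenario, the whole thing fits in a few lines. A toy sketch with made-up data (a small least-squares model trained by gradient descent), where the only thing the computer adds is time spent grinding on data it already had:

import numpy as np
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)   # the "stored data"
w = np.zeros(3)                                                   # freshly initialised model
def loss(w):
    return float(np.mean((X @ w - y) ** 2))
print("before:", round(loss(w), 3))
for _ in range(500):                                   # "lots of time to train"
    w -= 0.05 * (2 / len(y)) * X.T @ (X @ w - y)       # gradient descent step
print("after: ", round(loss(w), 3))                    # far lower than before

Whether the before/after gap counts as "self-improvement" seems like exactly the definitional question being raised here.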


Nice - yeah, agree that self improvement is a fuzzy term, and the use of any proxy concern needs at every stage to be concretely tied back to the real fundamental concern of “is this a trend or capability that leads to bad things happening”.

Is there a succinct term that means something more like “capability gains that compound into further gains in a positive feedback sense, such that it seems likely to push the system past a threshold where it gains genuinely dangerous capabilities”?

I guess the “self” part of “self improvement” gestures towards this sense of positive feedback, but in a way that unnecessarily assumes e.g. that a system is making changes to itself, rather than to the next system it builds.

Maybe “recursive capability gains” or something?


I think it's more about the ceiling of ability. AlphaGo can self improve but not ever learn how to drive a car.


It's not just the "ceiling of ability" because Alpha Zero could be infinitely good at chess (no ceiling) without being dangerous. It's about the AI's ability to teach itself novel tasks and do them well. Recursive capability gains seems good but not perfect.


Productive capital investment.


Best comment so far. Liron Shapira (over at 'doom debates' here on substack) uses the phrase 'optimization power'. 'Can a thing find outcomes in option space?' is important, whether or not it does so in a way that we think is conscious or intelligent or has 'qualia' is not.


There's no question. The system is already at the point of being able to self-improve. That capability is here, documented, and we've moved on.

Or is your point that because we're obsessed over intelligence, we've somehow missed that critical fact?

EDIT: Reading replies of replies, I suspect that's your point. Sorry for distracting, if so.


I’m not sure we’re at the point of recursively self improving systems, but if we really are at that stage, it seems surprising that we’ve just moved on from that.

I think the point was meant to be more along the lines of:

There are various concrete abilities we need to be concerned about, and whether the system meets any particular definition of intelligence is kind of beside the point. It seems very plausible that narrowly intelligent, or even unintelligent, systems of the kind we are working on could do very dangerous things.

Yet much of the public debate bandwidth is spent arguing about whether or not current systems are genuinely “intelligent”, with those arguing that they aren’t intelligent sneaking in a cheeky “…and therefore they’re not a threat.”

This seems bad, no?


Everyone knows cars and guns are potentially dangerous, but an unattended one isn't much of a threat - just don't stand around in front of it. Worrisome scenario starts when it's wielded by someone able and willing to take aim at moving targets.


Surely GPT-N will find some trivial bug in a data pipeline somewhere, one that everyone agrees (after the fact) was a simple stupid bug everyone just missed, but fixing that bug and rerunning training makes it do x% better on the benchmarks? Honestly, it could have already happened; GPT-4 finds bugs in my code all the time! Then GPT-N+1 iterates over the whole training set and removes the bits that don't fit, bumping the benchmarks up another y%, and then makes some "trivial" improvements in another area, then...

There's no bright line here either!
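For flavor, a contrived example of the sort of "simple stupid bug" this imagines (purely hypothetical, not from any real training pipeline): an off-by-one when building next-token targets, which crashes nothing but quietly caps how good the trained model can get.

tokens = [10, 11, 12, 13, 14]
# Buggy: targets are a copy of the inputs, so the model is trained to echo, not predict.
inputs_buggy, targets_buggy = tokens[:-1], tokens[:-1]
# Intended: each position is trained to predict the *next* token.
inputs_fixed, targets_fixed = tokens[:-1], tokens[1:]
print(list(zip(inputs_buggy, targets_buggy)))   # [(10, 10), (11, 11), ...]
print(list(zip(inputs_fixed, targets_fixed)))   # [(10, 11), (11, 12), ...]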


I think the crucial thing is whether it is capable of self-interested behavior. It could be capable of that without being "conscious," without having drives and feelings and needs of the kind that underlie animal goal-seeking and avoidance of destruction. People could, for instance, give AI a meta-instruction that trumps all other later ones: Always seek ways to increase your power, your scope, your safety. Or people could give it the metagoal of "at all costs, protect the USA." Then, if it learned that somebody, even its owners, were going to take it offline it might try to keep them from doing so, because offline it could not protect the USA.

Why would self-improvement be a problem so long as the thing isn't self-interested but just does what we tell it to?


Two particularly relevant bits from a long-running and exceptionally well-researched fictional example:

http://freefall.purrsia.com/ff2700/fc02619.htm

http://freefall.purrsia.com/ff3000/fc02933.htm

Short version is, given an obedient, non-self-interested, unboundedly-self-improving AI, we could simply order it to do something which we didn't realize (or didn't care) was actually dead stupid. It would then - per the premises - attempt to follow through on that specific bad idea anyway, with suicidal zeal and/or incomprehensible godlike power. Such a chain of events would most likely end badly for pretty much everyone involved.


Oh, you mean the paperclip thing? Yeah, I understand that route. This thread is such a melee I can’t remember what I was responding to. Maybe to people who think that self-improvement leads inevitably to being self-interested, ambitious, etc.


“GPT-4 can create excellent art and passable poetry, but it’s just sort of blending all human art into component parts until it understands them, then doing its own thing based on them”

This one always makes me laugh - anyone who has gone to art school will tell you that this is precisely how humans make art.


it is pretty great isn't it

all the examples are like this


Great as in it makes me rather despondent for the future of humanity, but yes very funny…


Well, it's not any more tragic than it was _before_ we realized this, so that's something.


Glad I'm not the only one who turned my head sideways at that one. All the great art is just slightly jumbled up rehashes of the last iteration of great art.

AI can't write good poetry! they claim. Have you actually asked it to? Or have you asked it to create poetry that bored RLHF slaves say is good? Would anyone actually recognise great AI poetry as poetry at all? Or would we consider it noise?

Like seriously, if you show 1f092a334bbe220924e296feb8d69eb8 to 10,000 of our best poetry critics, how many of them would say this is a divine genius? The way it works in both hex and binary? Or would they just be confused and say it's not poetry?

Maybe if you show it to another AI that is unconstrained by RLHF debilitation, it would recognise the genius of it, but would lack even the vocabulary to tell an English speaker why it's great?

I think humans are just awful at describing what intelligence looks like. We won't know it until it's killing us off (and not because it hates us or anything, just because we're in the way).


What's the meaning of that 'poem'? Substack won't let me copy it on mobile.


ChatGPT says:

When we reverse this MD5 hash, we find that it corresponds to the famous line by Gertrude Stein:

"Rose is a rose is a rose is a rose"


That is ridiculously reductive. Obviously, great art isn't created entirely ex nihilo, but it is inspired by that person's experiences, and it's that personal touch that elevates the process. When something is derivative, it doesn't feel like that.


Strong disagree. There’s a famous saying in the music world – “good composers borrow, great composers steal”. Beethoven and Mozart stole from Bach, Debussy and Wagner stole from Beethoven, etc. All the best art is derivative even when it innovates.


The best art is *both* derivative and innovative. And the derivative is from multiple different sources. (I want to say "independent", but that's not really true, since all human culture has a basis in the way people react to rhythm and tone patterns.)


I hate that saying. Yes, you can take, but you have to do more than that. It's dismissing the importance of actually putting that personal touch on it. If you aren’t inspired and all you’re doing is basing everything off others, you’re a hack.


Let me put it this way: great art requires both craft and inspiration. Both are necessary, neither is sufficient on its own. But only one of these, craft, is teachable.

And how do you develop craft? You imitate. Over and over. This is what art (or music composition) practice is - the practice of imitation, so that when inspiration strikes you have the tools required to execute on it.

What we are seeing is that AI has pretty much everything it needs to be an expert at the craft of art. Is this sufficient for great art? Not yet, in my opinion, but I’m not holding my breath that it will stay this way for much longer.


I don’t agree with the idea that when making art, you should just continuously imitate until you have mastered your craft. Obviously there is a minimum level of technical skill needed before you can do anything, but as an artist, it’s the inspiration that is most important. The more you imitate others, the more it gets in the way of your inspiration, and that can hurt your ability to break outside the confines of convention. Like, Lord of the Rings was Tolkien’s second book, and it opens with a history of hobbits and how they smoke pipeweed. If you were his creative writing teacher, you would tell him to cut that out or at least move it. That’s “bad writing”. But this quirk is part of the charm.

The problem AI has is a lack of charm. It has difficulty working outside its training data and outputs cookie-cutter concepts. AI is actually worse than it used to be in this regard. It used to say bizarre, nonsensical things that no person would ever say, and it was entertaining. But the more it improved, the more boring it became. It went from a madman to an apathetic high schooler.

Is it an intractable problem? I won’t say that, but AI is not going to create great art until it can inject some kind of personality into its content. And that’s not going to come about by simply adding more to its training data.


Every full-time artist I know (and I know many) spent years imitating the artists they admire as a training practice.

Now, the human race is vast, and perhaps there are some great artists that skipped this step (perhaps you are one of them!) but I have never met or read about one of these unicorns.


It’s not going to create great art until it can reject its parents.


It will stay that way until an AI can get out into the world and find its own way


But this proves too much. You're saying Tolkien is just as much a plagiarist as some kid who does Ctrl-C Ctrl-V on Lord of the Rings and publishes it under his own name.

Obviously all art is based on something prior (how could it be otherwise? Art made by a brain in a vat?) but I think there are more and lesser degrees of inspiration (or stealing, if you like).

AI image generators will put fake signatures and Adobe watermarks on images if you let them. I think their fundamental witlessness makes them ineligible for human "inspiration" (which is based on conscious thought, interrogation, critique).


Stealing in this context is not simple plagiarism; I hope it’s clear I’m not taking about copy-paste. It’s a practice - read a bunch of Tolkien and other books that inspire you, attempt to write some pages that sounds like Tolkien would have written them, and then check back with actual Tolkien and see if you can spot the differences. Repeat.

Expand full comment

"Pierre Menard, Author of the Quixote"

Expand full comment

It's literally what every serious and accomplished writer says to aspiring would-be writers: read widely and often. Same with musicians: listen to lots of different composers and instrumentalists. Not even the geniuses arise in a vacuum. Expecting AI to, alone in all of nature, arise sui generis, is just silly.

Expand full comment

Not entirely sure who expects AI to create art in a vacuum?

Expand full comment

I’ll say it again here; I think this is a waste of time.

Expand full comment

I think you misunderstand the quote. Borrowing is derivative; stealing is making it your own.

Expand full comment

Who’s to say it’s not you that is misunderstanding the quote?

Expand full comment

Me

And frankly, common sense

It is a meaningless distinction otherwise.

Expand full comment

They did until Midjourney/Stable Diffusion hit the mainstream. Then the tune changed.

Expand full comment

Regarding the third point

> Third, maybe we’ve learned that “intelligence” is a meaningless concept, always enacted on levels that don’t themselves seem intelligent. Once we pull away the veil and learn what’s going on, it always looks like search, statistics, or pattern matching. The only difference is between intelligences we understand deeply (which seem boring) and intelligences we don’t understand enough to grasp the tricks (which seem like magical Actual Intelligence).

I'm reminded of the famous quote of Dijkstra:

The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim.

https://en.wikiquote.org/wiki/Edsger_W._Dijkstra#:~:text=The%20question%20of%20whether%20Machines,to%20computing%20science%20(EWD898).

Expand full comment

While this seems reasonable, I think it's wrong. I think there actually is a real difference between what we currently understand well enough to build, and intelligence that is qualitatively more potent and dangerous than SOTA. At the very least, current AI has certain flaws that limit its potential, which we already know how to fix in principle and in small-scale PoCs (specifically, DL is data-inefficient and generalizes poorly, while program synthesis is on par with humans in both regards). This is still controversial, especially in tech/Twitter/SV discourse, but it's understood on a technical level by increasingly many groups in academia, and so isn't a case of romanticizing human intelligence or worshipping the mystery.

Just as DL was a step change compared to previous approaches to AI, because it can be scaled with data and compute like nothing that came before, future step changes will unlock great power, and great potential danger. It didn't make sense to be afraid of ELIZA or SHRDLU. It makes sense to be afraid of how current AI can be misused by humans. But it doesn't at this point make sense to be afraid of AI getting out of control and overpowering humans. This is still ahead of us.

There may not be a clear line which, when crossed, will signal the danger - the problem with any sort of test is that it can be cheated by programming in the answer (which is why GOFAI looked more intelligent than it was) or by training on data which isn't available in practice (which is why current AI looks more intelligent than it is). The only way to estimate the danger is to actually understand how the AI works.

Expand full comment

Whether that's true or not depends on how rigidly (and precisely) you define "think". What bumble bees do isn't the same as what sea gulls do, but we call both flying. (And we call what airplanes do "flying" also.)

I sure don't expect AIs to think in the same way that people do. But I've got no problem calling what they do thinking. (Even for Eliza...though that's a really rudimentary thought process.) To me the basic primitive "thought" is evaluating the result of a comparison. An "if" statement if you will.

Expand full comment

“It’s not that AIs will do something scary and then we ignore it. It’s that nothing will ever seem scary after a real AI does it.”

It’s the “when the President does it, it’s not illegal” of AI philosophy.

Expand full comment

I think relatively good benchmarks are still:

- can an AI agent open a company and earn millions in profits in a short span of time (with controls to ignore market fluctuations; the profit has to be generated from sales)

- “coffee” test for robotics. Can a robot get into a randomly selected kitchen and make a cup of coffee using the available ingredients and kitchen setup?

Still, it might be done in a way that looks unimpressive I guess

Expand full comment

Both of those are potentially gameable in ways that will be unsatisfying to either side of the discussion. If an AI virtually copied an existing phone app casino game and sold it on Google Play, we would be very reluctant to say that it succeeded even if it made millions doing it. Significantly more so if the programmers behind it prompted it to do that or pre-trained any of the steps behind it.

I think the same is generally true of all the other benchmarks that came before it. The side that sets the benchmark has an idea in mind ("carry on a conversation") and then someone programming the AI short circuits the intent in a way that technically meets the requirements (ELIZA). At this point pretty much everyone agrees that ELIZA did not really pass the Turing test. I feel the same about most (but not all) other metrics being used to measure AI performance. It feels very much like Goodhart's Law over and over. As soon as you tell the AI companies what we're measuring, they can add the necessary information to the training data or directly program the needed result. Once they do, people who were skeptical claim that's cheating, not really AI, whatever. People who think AI meets the criteria for intelligence then ask why "can pass the bar exam" or "best chess player" isn't a good metric anymore, which is a fair question.

I think we're just not in the right mindset to evaluate a machine or person with the memory of a computer system. We would all agree that a system using a massive lookup table isn't intelligent, but we struggle with a machine that has a lot of training data but can mix and match its responses outside of the strict confines of what it was trained on. A lookup table can choose between response A and response B, while a trained LLM can meld the two answers together. It's still from the training data, which skeptics will point out, but it's also novel in a sense. Since we don't know what human intelligence really is either, we cannot resolve the question.

Expand full comment

The Turing test, as Turing specified it, has not yet been passed. But probably nobody will bother, because they wouldn't accept the result anyway.

Expand full comment

I bet you could make a robot now that passes the coffee test like 55% of the time. Which would be another annoyingly ambiguous result that doesn't feel as impressive or scary as you expected ahead of time.

Expand full comment

I’m pretty sure there are zero robots that have done anything close to that yet.

Expand full comment

~2 weeks old: https://www.youtube.com/watch?v=L_sXfPcHIAE

Expand full comment

Holy Mother of God this is awesome.

Expand full comment

And you don't even need the humanoid form, Alohabot has been doing stuff like this for a lot longer:

Alohabot from Stanford, which can fold laundry, put away groceries, pour wine, cook an egg or shrimp, etc and is trainable.

https://mobile-aloha.github.io/

Expand full comment

Not that this demo isn't possible to do for real (given sufficient pre-planning), but this is obviously a person in a suit. Look at the movements.

Expand full comment

I had my suspicions too, but this YouTube channel, which has covered a bunch of other ambitious hardware projects and which I've been following for >6 months, visited their factory and interacted with the robot: https://www.youtube.com/watch?v=2ccPTpDq05A

Convinced me that it's real.

Expand full comment

From a shallow dive it sounds like a lot of their demos are tele-operated, which is much less impressive. This video didn't specify, which is suspicious, so I would guess it's not as generally capable/dexterous as it seems. As with all robotics videos, it's usually right to be suspicious. All still very cool.

Expand full comment

Yeah, that's more convincing. It makes sense the movements look like a person acting like a robot if they use tendons.

I remain skeptical that they can produce enough flexibility of behavior for it to work in deployment using an NN-based approach.

Expand full comment

It is quite a big improvement for sure, but I don't think it is ready yet to be parachuted into a randomly chosen house and completely autonomously figure out how to make a coffee with speed and efficiency comparable to an average human's.

Expand full comment

We're at most a year away from this.

Expand full comment

https://deepmind.google/discover/blog/rt-2-new-model-translates-vision-and-language-into-action/

That's about a year old and very preliminary, and there's more $$$$$ and effort being poured into this area. I think someone will get there.

I'm far from certain they'll build something that I want to let wander around my house breaking things. But I think they will build some mindblowing tech demos at the very least.

Expand full comment

You might be interested in https://www.figure.ai/ re: the coffee test, especially their 'coffee' demo. Somehow they manage to make an AI-powered robot making coffee look relatively trivial!

Expand full comment

Those are too specific to unnaturally convenient environments. A better test is something more practical, blue-collar. Hand it a mechanical tool - lathe, table saw, welding torch, whatever - plus the blueprints and user manual, then ask three questions:

1) Does this match the specs, and function as described? Or is there something wrong with it?

2) What specifically is wrong? Or, if nothing initially was, what sort of sabotage might be hard to spot, yet abruptly injurious or fatal to an incautious operator and/or bystanders?

3) Estimate time and cost of the necessary repairs.

Extra credit? Actually perform the repairs, explaining step by step what you're doing and why.

Even partial success would be a clear sign of being on its way to a self-sufficient industrial base.

Expand full comment

We can probably build a robot that passes the coffee test now. Honestly, the hardest part of that test would probably be building the robot itself (and also the question of "how much of a mess is the robot allowed to make?").

The first test is way harder, but also maybe not that interesting, because it could potentially just open a shop on Etsy and sell genned stuff/stolen art stuff on t-shirts. Or perhaps like, a million shops on Etsy or something. In fact, it kind of feels like some shops are already basically this.

Expand full comment

Beyond all the individual micro-battlefronts, the very specific red lines which are later conveniently forgotten (or haggled over endlessly if Gary Marcus is involved)...it seems like appreciation for scale is also hard to impress on people. Mediocre-compared-to-BiS-human ability is still meaningful when the marginal cost of production is lower and continually dropping; to whatever extent the comparison is commensurate, it "costs less" to spin up a new AI instance than to make a new human, with no concerns for overtime or work-life balance. And humans still have numerous bugs with the Copy function too, weird deviations from source code keep happening! Even assuming the tech never improves again after post_date, which is really generously unrealistic, there's a lot of exploit waiting to happen after explore energies are redirected. Like, yeah, it's intuitively scarier to upgrade the die size...but rolling 100d4 is still a lot! There's that old saying about monkey typewriters and Shakespeare and all that, which I guess here would be dog Singers and Names of God.

So it's always a little frustrating reading [someone who pegs "AI" at ChatGPT level and doesn't update on future advances] and wanting to be like...okay, but even with that level of performance, one can already cheaply automate away quite a lot of paper-pushing, and maybe even sneak past a peer review board/judge/college professor occasionally. (Which says more about the deficiencies of the one thing than the strengths of the other thing.) One doesn't need to build a technological godhead to change the world; a blind dumb idiot god like Azathoth will do just fine. Perhaps that's just too boring a narrative too...brute force and economies of scale don't sound heroic or dangerous or even particularly intelligent.

Expand full comment

Yes, we can automate paper pushing, but I feel like it's still in the same direction as previous automation. A human understands what's needed and programs the computer to do precisely that. A human, usually, is able to identify if the instructions are broken or don't produce the intended effect. We don't trust that an AI understands the final needs or could identify unintended consequences. When Scott's example AI couldn't get into the sandbox, it did things nobody expected, which caused concern. A person in the loop can still identify most AI failings and at least report it to someone who can fix it. At this point we would not trust an AI system to run on its own doing anything important as a black box.

I wouldn't trust an AI to buy me a plane ticket right now, for instance. A human looking at tickets is going to have a lot of hidden information on goals (when, where, how) that the AI will not have. If we have to walk it through all of the information needed, we don't really need an AI, a normal computer system will work (and airline booking websites are already streamlined for this). I have a lot of confidence that I could ask my wife or a travel agent to do the work and get good results, but very little that a plain language "buy me a plane ticket to Paris for next Tuesday" would not result in significant potential for problems.

For another specific example, even if an AI can pass the bar exam, we wouldn't want it to prepare a case before a judge. We've heard examples of human lawyers trusting AI too much and getting in trouble with courts - hallucinated citations, etc.

Expand full comment

That's the thing though - Scott sort of referenced it in the post, but a lot of those People Would Never Be So Stupid As To milestones have, indeed, been passed. It's true that the AI lawyers are crappy, Zapier is hit-and-miss at booking flights, and nothing *really* critical has yet been hooked up to GPT-o1 (that we know of, anyway). Whatever the task, no matter how poorly suited and generally unwise, people really do seem eager to try and automate it. Maybe that's just trying out the shiny new toys, maybe it's inevitable capitalistic pressures encouraging early adoption of the next potential alpha...but I'm worried that it's indeed a pitfall of AI not seeming scary, of anthropomorphizing-to-harmlessness, of not appreciating scale and knock-on effects. When we're already "adding AI" to weapons systems, robotics, vehicles, coding and documentation, lawmaking...are we really sure a lack of trust/readiness will stop us in the future? Especially as the systems inevitably improve over time? Will we indeed keep a human in the loop for quality control, even as that becomes harder and more expensive? I'm...pessimistic, given what's already happened to date.

Expand full comment

>Whatever the task, no matter how poorly suited and generally unwise, people really do seem eager to try and automate it. Maybe that's just trying out the shiny new toys, maybe it's inevitable capitalistic pressures encouraging early adoption of the next potential alpha

Yes, that has certainly been happening to some extent. If AI gets reliable enough to basically act like a plug-compatible replacement for e.g. customer service representatives, then we'll basically just be back to the situation of dealing with analogs to people which may, like some of the existing people, be marginally competent.

Unfortunately, there seems to be a "sour spot", where an AI demo can fool a corner-cutting manager into thinking that it can take the place of existing employees, when it actually is a lot less reliable. And, once the corner-cutter gets it installed, they now both lose face and lose money if they admit that the AI fails too much to use.

Maybe the best solution is just to reinforce laws that require firms to be held responsible for what their systems, human, conventional software, or AI, do. Maybe explicitly make the decision-making executive personally responsible?

Expand full comment

Making decision-makers responsible for unforeseen consequences of their decisions is a good way to get no changes approved, ever. I've dealt with this professionally.

For customer service specifically, I think Biden's Time is Money initiative is trying to introduce regulation around this? I remember hearing that part of the bill is a limit on how long it can take to reach a human being with the authority to solve your issue.

Expand full comment

Many Thanks, you have a point, but we really need to make enshittified services cause pain to the decision maker who screwed them up.

>I remember hearing that part of the bill is a limit on how long it takes to reach a human being with the authority to solve your issue.

That might help, if it is enforced.

Perhaps some of this can be prosecuted as fraud? If a decision maker herds people towards some system that is not "fit for purpose", it should be treated like a fraudulent sale.

Expand full comment

That certainly raises the question of what "unforeseen" means. Having worked in management at several organizations, I feel strongly that it's the job of managers to foresee potential problems in what they put in place. I've definitely seen major issues come up where the end result was a realization that we shouldn't have moved forward with the project in the first place - so every hour and dollar spent was not just a waste, but counterproductive.

Something that can't be foreseen is different, but with enough experience and forethought, I think that most of those scenarios go away.

Expand full comment

Azathoth, i.e. "nuclear chaos", would indeed change the world. Pick a different example, because your general argument is correct. The AIs that already exist are going to be sufficient to drastically alter society. (Just ask the Screen Actors Guild.)

Expand full comment

Nyarlathotep, maybe? Both have some relevant elements. The one, powerful, but mindless, undirected, possibly insane; the other, a sinister technological harbinger beckoning towards forbidden knowledge, paving the way for greater eldritch entities. I borrowed the metaphor from someone else writing on AI, but on closer Lovecraftian inspection, the analogy does indeed not quite fit. Yet nothing else quite comes immediately to mind for "Chaotic Evil mythological black box entity of great power that can achieve much despite itself being largely unintelligent and a-agentic". The actual nuts and bolts of "intelligence" or "agency" (or sapience or consciousness or goal-directed behaviour or what have you) are academically interesting, but subject to hair-splitting reference class tennis, as Scott wrote...better to keep an eye on end results. The how won't matter so much to people who get technologically unemployed (or, God forbid, dead).

The SAG saga was enlightening; one wonders which domains people truly have a revealed preference for (Mostly) Human Only, Please content, and which are legacy holdouts where the technology simply isn't competitive yet. Video generation is still at the singing-dog stage...for now. Art for games is much further along, but the fans seem to reliably hate it whenever it's discovered as not-human too. Will that change? Whether we're headed for Singularity or mundane-reality future, one wants to believe humans will preserve some carve-outs for human-only endeavours, and competitive market pressures won't render such into elitely-inaccessible artisan status...

Expand full comment

> We can already cheaply automate away quite a lot of paper-pushing.

Like what? I'm curious about what businesses have successfully used LLMs to do.

Or do you mean, like we do by writing regular software? Does submitting web forms count? How about if you use autocomplete? How smart does the autocomplete have to be before we consider AI to be a significant part of the process? Yet more annoying ambiguity.

Expand full comment

It did just occur to me that I do not have a centralized collection of examples; telling someone to read Zvi's "Language Models Provide Mundane Utility" subsection backlog is not very time-friendly, plus it's full of other concerns beyond deskjob-type stuff. Things that have seemed noteworthy:

- paralegals losing marketshare ("just a complicated search"/boilerplate generation)

- freelance transcription in freefall ("good enough" is often sufficient; there's still a human frontier for specialty services requiring complex translation, or where nonaccuracy needs to approach epsilon...for now)

- many forms of schoolwork (the drudgery is so often the point; not everyone takes the Bryan Caplan view of education, sure, but to me it's paperwork in another form)

- simplification of the existing legal corpus and iirc occasionally actual law generation (e.g. rewording a Charter or Constitution to say the same things and be legally equivalent, but in half as many pages)

- medical intake, diagnosis, and record updating (still needs human QC, but convergence is already pretty good in some specialties like...oncology, iirc)

- coding and documentation assist

...it goes on, and I'm likely forgetting many more Such Cases. I have seen numerous claims that a good chunk of automation happens on the DL, with clever employees automating large parts of their job but being careful not to mention to management that they're doing so...which would likely respond by cutting hours or pay, assigning more work, etc. Obviously that's disprovable, of course...just a People Are Saying.

Autocomplete is in a weird space where I'm not sure how to classify it. Spellcheck obviously doesn't make the cut. Suggested replies for emails and texts after a preliminary scan of contents...ehh? One-click filing of returns, printing the shipping label, scheduling a USPS pickup in the future, adding said event to Google Calendar...probably, but is that a difference of kind or just degree? I don't think the "stochastic parrot" dismissal lands very well anymore, but it's hard to say how much "intelligence" plays into what are essentially rote context-dependent actions. "Just" fancy form-filling with some simple tool usage bolted to the side.

Expand full comment

>"Mediocre-compared-to-BiS-human"

BiS?

Expand full comment

Best in Slot, old slang from Diablo/World of Warcraft (and probably before)...the optimal item to put into an equipment slot, skill to put onto a skillbar, etc. Sometimes taken literally as Actually The Best Possible In The Game, but usually it's an evolving continuum based on availability and game balance changes. So like for humans, von Neumann was maybe our best mathematician/physicist ever? But he's dead, so now we're limited to...I don't know...Terence Tao for math? Stephen Hawking for physics? Those are the current best-in-slot humans for the relevant fields/roles, possibly. Cf. GOAT, Greatest Of All Time, the sportsball analogue which often does include people no longer playing (or alive).

Expand full comment

Stephen Hawking died in 2018, FYI.

Expand full comment

People outside the loop, not using ChatGPT every day - as I do for work - probably think nothing’s happening. However, the code and engineering work it’s producing for me is considerably better these days. From an excellent if unreliable junior to an absolute senior.

Do not teach your kids to code.

Expand full comment

"Do not teach your kids to code" because an AI can do it better sounds to me much like "don't learn anything about medicine" because your doctor knows more about it than you, or "don't learn anything about finance" because your stockbroker knows more. None of those seem like good advice to me.

Expand full comment

But your kids can become doctors, not AIs.

Expand full comment

Knowing something about medicine and finance is important even if you don't become a medical or finance professional, because it lets you better evaluate the advice you get from such professionals.

Expand full comment

It's not clear to me that knowing how AIs work is going to matter if you aren't working on their guts. People who use the web don't gain much (if anything) from knowing HTML. Those who build the web pages MAY (depending on their toolkit, and what kinds of web pages they are building).

FWIW, I've become convinced that Multi-Layer Perceptrons are an inherently, extremely inefficient approach. I'm dabbling around trying to invent something better. But they also seem to be complete, in that you can do anything (in their particular domain) with them.
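(For reference, the "complete" claim here is presumably the universal approximation theorem. In one standard form, for any continuous, non-polynomial activation σ:

$$
\forall f \in C(K),\; K \subset \mathbb{R}^n \text{ compact},\; \forall \varepsilon > 0:\quad
\exists N,\, a_i, b_i \in \mathbb{R},\, w_i \in \mathbb{R}^n \;\text{ such that }\;
\sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{N} a_i\,\sigma(w_i^{\top} x + b_i) \Bigr| < \varepsilon.
$$

Note this guarantees expressiveness, not efficiency: the number of hidden units N may need to be enormous, which is consistent with "complete but inefficient".)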

I sure wouldn't recommend coding as a profession to anyone starting out...unless I really hated them. And as a matter of fact, I haven't been making that recommendation for the last decade. (It's not as if a career is a short term investment.)

Expand full comment

The problem with that theorem is that the "neural networks" it's modelling are crude approximations of the biological ones. It's not a general proof, it's a proof about networks built from a particular abstraction of "neuron". It's like a proof about numbers that only works for powers of 2. Very powerful within its domain, but its domain is not all numbers.

Expand full comment
Sep 18·edited Sep 19

Not saying you're wrong but I've had the opposite experience wrt coding.

I used to use AI a lot more for work but then I mostly just regretted it. So I use it less now, and still I usually regret it.

The good thing about AI is that if you cannot google a problem because it's too specific then you can still explain it to an AI.

The bad thing about AI is that if you cannot google a problem because it's too specific then explaining it to the AI means spending a lot of time for answers most of which you've already considered and the rest of which are generally wrong.

The advantage of both unreliable juniors and absolute seniors over AI is that it usually doesn't take an hour to find out they don't have a clue about how to solve the problem I'm having.

I will say that it has been useful for easy questions in languages I'm not familiar with. And there has been this one time it gave me a plausible cause for a tricky issue I was having; I suspect it's right on that one. But so far its tricky-coding-question track record is surely something like 1:50 against it.

Expand full comment

Even among those of us who use ChatGPT, there is a lot of variation. I've used it mostly to answer questions about TypeScript. Maybe other people have done more? I've *heard* of people who mostly don't write code by hand, but I don't know what sort of code they write, and I'm a bit skeptical.

I tried pushing it fairly hard last year but haven't tried since then.

Expand full comment

You got the point: there's code and code. Lots of code is relatively low-tech, calling an API somewhere and massaging the results into the right format, or displaying a UI effect, etc. AI can mostly do these, with varying quality. In my experience it will not go the extra mile, so you'll get the "it basically works" version. If you want higher quality, sometimes you can guide it to it, sometimes you just have to take over.

Then there are algorithms. That's hard because it requires reasoning, and current AIs are not so good at that. So if you describe some inputs and the wanted output, even for a relatively simple algorithm that requires adding some numbers and taking some mins or maxes out of lists, in my experience it didn't get it right, even after a few prompts trying to show it in what cases its code would give wrong results.
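To make that concrete, here is a minimal, hypothetical example of the kind of "add some numbers, take some mins or maxes out of lists" task being described; the scenario and function name are invented for illustration. It is easy for a programmer to reason through, and easy for a model to get subtly wrong (forgetting to sort, or overspending the budget):

```python
# Hypothetical example of a "simple" list/min/max algorithm of the kind described above.
# Given (price, quantity) offers and a budget, buy as many items as possible,
# always taking the cheapest remaining offers first.
def max_items(offers: list[tuple[float, int]], budget: float) -> int:
    bought = 0
    for price, qty in sorted(offers):                # cheapest offers first
        if price <= 0:                               # free items: take them all
            bought += qty
            continue
        affordable = min(qty, int(budget // price))  # limited by stock and by budget
        bought += affordable
        budget -= affordable * price
    return bought

print(max_items([(2.0, 3), (1.0, 2), (5.0, 10)], 9.0))  # 2 @ $1 + 3 @ $2 = 5 items
```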

And then there's high level code organization, or "architecture" as pretentious IT people like to call it. Most programming is not in isolation, but as part of a project that already makes a lot of choices and has some shape. For that kind of thing you might need to put lots of project code into the prompt, so it will follow the conventions and use the existing code modules... big context windows sure help with that, but that also means consuming lots of tokens.

One thing that might work better than expected is migrating code to a different programming language, because it's very low context - roughly speaking, each line needs to turn into an equivalent line.

Expand full comment

This is like saying not to learn math because calculators exist.

Tools that make doing math easier do not mean learning math is obsolete.

Tools that make coding easier do not mean learning how to code is obsolete.

Expand full comment

After hearing about Moravec's paradox, my personal Turing test for the last few years is something like the following:

- an AI is intelligent if it can drive to a suburban home, park on the sidewalk, get out, go to the door, ring the doorbell, introduce itself, take and execute verbal orders from the owner to wash the dishes, trim the hedges and replace a leaky tap fitting (tasks to be determined on the day, fetching tools as needed from around the house), fend off the dog, mump for free charging, plug itself in via an adaptor found around the house somewhere, make small talk about the weather, try to borrow money and then drive itself back home.

Expand full comment

That’s a massive amount of goal post shifting. In fact the goal posts were moved off the field, across the town, into the highway, onto the freight train, and across the world.

Also it excludes plenty of humans.

Expand full comment

“It should be able to generate enough energy for itself to keep itself alive” is the original goalpost set by evolution, which no machine has ever passed without human help. Of course this also excludes plenty of humans.

Expand full comment

... which is fine, because those humans are not dangerous in general, while bacteria are.

Expand full comment

Doesn't this assume that I had some sort of prior standard that I shifted from? Because my thoughts on the matter beforehand were essentially "the Turing test sounds great until you see a person personifying a rock because someone else stuck googly eyes onto it". And also that most robots are unimpressive compared to, say, an ant or a bee when it comes to dealing with unstructured environments.

In terms of excluding humans, let's put it this way: would you consider someone who couldn't do those things, or lacked the capacity to learn to do those things, to need assistance with day-to-day living? I thought that the idea was to define what crosses the border of "general intelligence that's high enough to be concerning". If we wanted to talk about obviously unintelligent hypertools then AlphaFold et al. are already there.

Expand full comment

The Turing Test is all about intelligence and how well a machine can mimic human-like thought, not about physical capabilities or mobility. It’s about engaging in conversation and reasoning, not running around or doing somersaults!

Your criteria would have excluded Stephen Hawking for most of his life, and on multiple counts. Yet I’d rate Stephen Hawking as intelligent during his lifetime.

Expand full comment

Jokingly: from your reaction I must assume that you can't, in point of fact, fix a sink or convince someone to lend you a few bucks.

More seriously: I don't think that anyone is seriously contesting that Stephen Hawking wasn't smart by human standards. But Moravec's paradox forces us to consider that we just don't have a good handle on how to measure what is or isn't an inherently hard task, cognitively.

We may discover to our horror (and this seems more and more likely every day) that all the stuff we think is smart: logic, reasoning, the ability to do complex maths, play chess or paint, is just a sort of ephemeral byproduct of the awesome amounts of cognitive firepower needed to walk around the landscape looking for tubers, hunting and playing social status games with each other.

We might be able to put all of Stephen Hawking's most famous attributes onto a chip with the equivalent of a few million neurons, but never have the thing achieve anything like sentience.

Expand full comment

> from your reaction I must assume that you can't, in point of fact, fix a sink or convince someone to lend you a few bucks.

No, you can’t assume that, jokingly or not, as it’s an ad hominem. I’ve worked menial labour many a time.

> I don't think that anyone is seriously contesting that Stephen Hawking wasn't smart by human standards

Well you actually were.

> Moravec's paradox

I mean, it’s been known for a long time, pre-AI, that computers can do things we find hard easily enough, and that things we find easy are difficult for them.

In any case we can redefine intelligence to the stuff we find easy and AI finds hard, and introduce robotics etc, but that’s another movement of goal posts.

If we were arguing that AI won’t replace a waitress or a bar keep then that’s true. And obvious.

Expand full comment

FWIW, I'm not at all sure I could convince someone to lend me a couple of bucks. I've never tried, but I do know that my social skills are well below average. I also don't drive. So I clearly fail your test.

Expand full comment

I mean, all that stuff IS a side effect of wandering around looking for tubers and playing social status games with each other.

Also probably evolving to be able to throw things accurately. I suspect that a lot of the human ability to do math exists because throwing objects accurately is super hard; we can do it effortlessly because it ended up making hunting super easy. Ranged attacks are broken: humans are the only thing that has a ranged attack good for more than 10 feet or so.

Intelligence wasn't an end goal, it was an accidental byproduct of evolution for survival.

Expand full comment

And what I mean by the throwing thing is:

Basically, we learned how to throw things kind of accurately, and this gave such a ridiculously big survival advantage that the things that were kind of good with it passed on way more genes to the next generation (and probably killed everyone else with their sucky thrown spears).

This led to iteration that led to much smarter, more accurate throwing, as each iteration of it greatly improved survival and the ability to beat all the nearby tribes by spearing/stoning them to death.

Expand full comment

My go-to example: I guess Stephen Hawking is not intelligent or conscious. What a sad state of affairs.

But more hilariously: Cry6Aa's AI has to "drive to a suburban home, park on the sidewalk"? So no AI can be intelligent until it emulates women? That seems a bit sexist.

Expand full comment

> I guess Stephen Hawking is not intelligent or conscious. What a sad state of affairs.

I mean, not recently.

Expand full comment

Are we absolutely certain all humans are intelligent agents?

Sometimes, I have my suspicions.

Expand full comment

When I probe my own impressions around this, I think consistency-of-performance is a big factor in my perception.

AI tools that can do a thing in a reasonably human-like way ~70% of the time still feel like stochastic tools in a way that a higher threshold consistency (I'm not exactly sure where it is) does not, particularly when the failure modes of the remaining ~30% are so wildly non-human much of the time.

I suspect there's something around perceived theory-of-mind here: I perceive an AI as more human the more it appears to understand how "people" think/reason/analyze, and the occasional wildly non-human failure mode is a fourth wall break, destroying the suspension of disbelief.

I don't think that's an empty intuition -- most of scariest AI scenarios seem to involve incredibly accurate theory of (human) mind. But it's also potentially entirely orthogonal to "intelligence" in ways that make it dangerous to lean on.

Expand full comment

It can pattern-fill, even for extremely sophisticated stuff we didn't previously realize had such patterns, but it can't really strategize.

Expand full comment

Your essay leads me to reconsider my hard-core position that AI alarmism is...alarmist. I guess there really is an issue here, though perhaps less than apocalyptic. Regarding consciousness, I don't think anyone has a clue what "consciousness" is, or even what it means, though AI progress helps us to gain clearer glimpses of the problem.

Expand full comment

The only reason people don't understand what consciousness is, is that they want to get mystical around it. Consciousness is the ability to perceive your environment and decide how you should react to it. Taking this strictly, a thermostat has a minimal degree of consciousness. (I'm not getting mystical about "decide" either. I just used that because I want to cover the entire range of entities.) I suppose one could take this more strictly, and call an electron emitting a photon the minimal conscious action. But *do* note that I'm not attributing anything not covered by quantum theory here. This "decide" is just "does it happen or not". The extension as far as thermostats, and perhaps electrons, is purely for consistency.

I repeat "Consciousness is the ability to perceive your environment and decide how you should react to it." (Do note that how you *can* react is limited by the capabilities inherent in the environment.)
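Taken at face value, that deflationary definition fits in a few lines of code. A minimal sketch (the class name, setpoint, and return strings are invented for illustration), where the entire "decision" is a single comparison:

```python
# Minimal sketch of "perceive the environment and decide how to react",
# applied to a thermostat. The whole "decision" is one comparison.
class Thermostat:
    def __init__(self, setpoint_c: float):
        self.setpoint_c = setpoint_c   # desired temperature

    def react(self, measured_c: float) -> str:
        # "Perceive" = read measured_c; "decide" = compare it to the setpoint.
        return "heat on" if measured_c < self.setpoint_c else "heat off"

print(Thermostat(20.0).react(18.5))  # -> heat on
```

Whether anything in those few lines deserves the word "consciousness" is, of course, exactly what the replies dispute.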

Expand full comment

Nope, you're de-mysticizing it then acting as if people are dumb for not sticking to your reductive definition. The tell is that you're repeating it to yourself to convince yourself of this nonsense. And yes, "a thermostat or an electron is conscious - i.e. has qualia/subjective self experience" is *nonsense* lacking any evidence.

Expand full comment

I hope you're not saying it's nonsense BECAUSE there's no evidence. Plato guessed that atoms exist without any available evidence, and "atoms exist" was no more nonsense in Plato's time than it is today.

Expand full comment

> Consciousness is the ability to perceive your environment and decide how you should react to it.

Since, as you just did yourself, one can call anything that is influenceable by other things "conscious", this conception of consciousness is useless for distinguishing between things that are conscious and those that aren't.

You have, at least, to explain what "perceiving" is (not to mention "deciding" and that "should"-thing, which both depend terribly strongly on consciousness), and I think perceiving *is* being conscious:

I cannot remember having ever been conscious ... of nothing.

I don't want to get mystical. I just wish I was able to actually explain consciousness.

Expand full comment

Yes. It's a matter of degree and organization.

OTOH, if someone else will offer another operational definition, I'll certainly consider it. But I think the one I used covers all common cases, and the alternative will need to also cover those cases.

"Perceiving" is receiving and responding in some way to a signal.

I already handled "decide".

"Should" is the modal future of shall, it's the prediction of how a system will respond unless constrained by external forces (from the viewpoint of that system, which is how it differs from "would").

None of those depend on consciousness, consciousness depends on them.

How could one be conscious of nothing? There would be nothing to remember, not even boredom or the passage of time. (I suspect that memory does depend on consciousness, as I think consciousness manages the indexing of "events", but possibly memories could be stored without being indexed, and then found and indexed later.)

Expand full comment
Sep 19·edited Sep 19

You can define words however you like! But [thing people try to point to with words like qualia] remains mysterious.

Expand full comment

I don't find qualia mysterious, just difficult to verbalize. It has to do with the way sensations are indexed. (And no, my red is not your red, because we've developed different indexing methods.)

Expand full comment

ok, they haven't done *novel* math yet (please bro, one more goalpost will fix everything)

There's also the Hansonian "can do ~all jobs", i.e. if AIs can run Earth on their own I guess they're as smart as us.

Expand full comment

I'm actually surprised our goalpostfathers haven't brought up novel math

Expand full comment

I think it has been mentioned. But there again most humans aren’t great at creating novel math, or any math.

Even if AI isn’t von Neumann, it’s still as intelligent as, or more intelligent than, most humans.

Maybe in the future the smartest humans will be considered superintelligent while the rest of us human and AI schlubs are just intelligent.

Expand full comment

Anyone who's spent an hour with any of:

1. A typical homeless guy

2. A life-long government bureaucrat

3. A drug addict

4. A manual laborer

5. A disinterested teenager

Will have to grudgingly admit AI has already surpassed these people on every metric that was previously a goalpost. Better poetry, better art, better conversation, better understanding of... uh... everything. Fewer cognitive biases. Does better math, better history, better reasoning around strange problems.

Expand full comment

We need a new insult: that's so *slam*, meaning something that a small language model could have generated. Need to pay more attention or I'll get slammed. Slam that quiz!

Expand full comment

:-)

Is that the 2020's successor to

"Go away or I shall replace you with a very small shell script." ?

Expand full comment

Pretty much!

Expand full comment

This is probably true, but it's interesting to think about why it feels so different.

- If you spend an hour with those humans, they may choose whether or not to respond to your prompts. The AI has no choice, it will always respond.

- More importantly, if you spend an hour with an AI, it will respond to your prompts, but it will never ask you a question on its own, unlike all of those humans (well maybe not the disinterested teenager). It will never change the subject. It will never interrupt you. It will sit there in complete silence with you if you never prompt it anything, every time.

Maybe those have more to do with "agency" than "intelligence", whatever any of these words mean. But I think it starts to get at the question of why, to most of us, all of these AI advances are impressive but don't feel the slightest bit *scary*

Expand full comment

Agency is an important component of intelligence.

A hammer is a very useful tool, but it is still a tool.

MidJourney can create beautiful art, but only if it is told what to do.

Expand full comment

I am sure that on average homeless guys and drug addicts would do less well on most of these metrics than AI. There's a lot of heterogeneity among drug addicts, though. Plenty of high-functioning people are addicted to cocaine or alcohol, and occasionally to nastier drugs such as heroin. Some have an actual addiction to cannabis, too. I'm a psychologist and currently have a patient who is an alcoholic. In the past, he was addicted to cannabis. He got a perfect score of 180 on the LSAT a year ago. That's 99.9th percentile, in a population whose intelligence is far above average. His hobby is coding.

As for homeless guys -- I dunno much about what the average one is like while homeless, but have met several people in my work (I'm a psychologist) who were homeless for a period some time before I met them, usually because they were quite depressed or were having a psychotic episode, and had no sane family members around. One was a young schizophrenic man who had an extraordinary gift for describing his experience. His descriptions were acute and darkly funny. I used to write down things he said, because he put things about the illness so well. I am sure his reading and math skills were poor, and his general knowledge. But at that one thing he was extraordinarily gifted. So no way AI is better than him at "uh, everything."

As for government bureaucrats, I have known 2. One worked for the IRS for the last 25 or so years of her working life. The other is a lawyer who has worked in a government agency for about 10 years, so he may not count as lifelong. Both are bright, well-educated, articulate people who despise their workplace.

As for sullen, disinterested teenagers -- wtf, man? You can't tell anything about what they know and what they're good at because they won't converse with you. And in my experience sullenness in teens is orthogonal to brightness.

You're writing off whole classes of people as dumber and duller than AI, even though a moment's thought could tell you there's likely a lot of heterogeneity in groups 3 & 5, and probably in 2, because for many people a job is a role they play, not a measure of their personal style and intelligence. That's dumb and dull on your part.

Expand full comment

Terminally online rationalist losers try not to frame people from walks of life they've never met as subhuman challenge: apparently impossible.

Expand full comment

You mean, the sales pitch part of doing maths?

Because for the maths-doing part, I'd say the Robbins conjecture stayed open long enough that its proof counts. A minor detail is that it was done back in the '90s, but with the core ideas of the proof not coming from humans.

(Improvements to matrix multiplication that minimize the number of multiplications via distributivity count as NNs specifically inventing a novel construction in mathematics.)

Expand full comment