467 Comments

One of the assumptions here is that there's a way to write code that makes the AI smarter. The AI writes better code to create a better AI, which then writes better code, and so on.

Is this totally true? Aren't most of the gains in AI from "compute" and the size of the data rather than from the code, which is basically an implementation of well-known and not very recent algorithms?

Unless the AI can come up with better algorithms, it seems to me that the gains here are marginal.


"Human level AI (10^35 FLOPs) by definition can do 100% of jobs."

This isn't true, surely? It might be *smart enough* for all jobs, but trivially the mere AI can't come in and fix the pipes in my bathroom.

Further, it could just be as smart as an average human and not a smart human, which would also leave certain jobs outside its competency.


Seems to me this mostly makes arguments for being frugal and buying property as soon as possible. Try to be FI'd, because you might be RE'd against your will sooner than you'd like.

Also, AI can't automate away plumbers, nurses, etc. Someone still has to do the 'meat jobs' that require doing stuff in the physical world. Most of us reading this are bad at those, which is why I'm advocating for treating AI as an imminent threat to your job rather than to humanity.


At what point in AI takeoff does "turning computers off" become morally equivalent to factory farming? Or to one standard reference genocide? Or to every instance of harm to date?


"Specifically, he predicts it will take about three years to go from AIs that can do 20% of all human jobs (weighted by economic value) to AIs that can do 100%, with significantly superhuman AIs within a year after that."

There's going to be an AI that can get up into my attic loft and check out the water tank for that annoying slow leak that every plumber who's crawled around up there can't find? Bring on the AI takeoff so! I also have some curtains that need hemming and of course the perennial weeds in the front garden need pulling.

I think our friend means "100% of the white-collar thinky-thinking jobs people like me do", not *every* job done by humans.

All these fancy graphs are reminding me of *The* Graph - you all know the one about the progress we would have made had it not been for those pesky Christians:

https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/hostedimages/1380222758i/190909._SX540_.gif

These takeoff graphs make me think that everyone involved is assuming "since there are no Christians in the way, naturally the galactic empire must follow on immediately!"


Is the pessimistic note at the end hinting at "thus x-risk is quite likely", or "even if alignment works this is an unsettling and probably not very pleasant new situation for humanity" or a bit of both?


The Roman stealth bomber there reminds me of the Draco standard:

https://en.wikipedia.org/wiki/Draco_(military_standard)

Time Team reconstruction of what it might have sounded like, from about 39 minutes in on this video:

https://www.youtube.com/watch?v=q7EYftxEl1E


You seem to think of the economy and the jobs within it as a static zero-sum game, in which automating a job and/or having someone new to do it means there are fewer jobs or things to do. This has never been the case in recorded history. Why are you thinking of it this way?


What does "can do 20% of all jobs" mean in this situation? I'm on an HR team with 8 people. Once Microsoft Copilot is released, we could probably reduce the team's size to 2 people and do the same amount of work. But those other 6 people won't be fired, or at least not unemployed; they will be moved to new jobs or possibly do the same job for someone else more efficiently.

Does this mean AI can do 75% of jobs in this subset, or is it just increasing our productivity by 400%? Or some number lower than both?


Three years ago I would have been on board with this thesis. So, what changed? I bought a Tesla Model 3 five years ago, but this isn't an anti Tesla rant - it's been a great car and I still love it. When I first got the car the advanced autopilot was essentially just a nice cruise control that lots of cars have. Over the next couple of years it advanced at such a dizzying rate that I was pretty convinced about the near term future of self-driving. Skip ahead to today and we've had release after release of the capability, but for my money, it's no more useful than it was three years ago. It pretends to be - in principle I could let it drive anywhere in the Boston area. In practice, it's nowhere close and I've got very little confidence that it's going to get there in the next ten years.

How does this relate to current AI trends? We've seen huge leaps in a number of areas in the last couple of years and it's easy to think that the trend will continue - just like I thought given the advances Tesla made in a couple of years that self-driving was just around the corner. Like Tesla is finding out, AI has picked a lot of brilliant, low-hanging fruit, but the really hard part is what's left.


spelling error on "Davidson’s economic solution to this problem" paragraph.

"In the AI context, it represents the degree to which progress gets slowed ***done*** by tasks that AI can’t perform"


>In some sense, AIs already help with this. Probably some people at OpenAI use Codex or other programmer-assisting-AIs to help write their software. That means they finish their software a little faster, which makes the OpenAI product cycle a little faster. Let’s say Codex “does 1% of the work” in creating a new AI.

Is AI being used this way? (Can someone who works at an AI company confirm that AI is being used to meaningfully speed up AI research rates?). I thought LLMs were bad at inventing things, or reasoning about unfamiliar problems.

One use I've heard for them is generating synthetic data for when we run out of real data to train on. But I don't think that's happened yet.

>Suddenly, everyone will have access to a super-smart personal assistant who can complete cognitive tasks in seconds.

I think a weak version of this has been true for a long time. We've carried devices in our pockets for 15+ years that can look up any fact in known history, perform advanced calculations, and view nearly every image and text ever produced by humanity.

Long before ChatGPT, you could ask Google natural-language questions and it sort of figured out what you're looking for, even if the question was terrible (I just Googled "jazz album racing cars" and it realized I meant "Casiopeia" by Casiopeia).

True AI might be different. But I dunno, it seems like the "combined knowledge of humanity in your pocket" paradigm has existed for a long time and didn't change much.


A nice overview, but ultimately the believers are purely faith (FLOPS) based.

I say faith based because their FLOPS view has many assumptions in it:

1) that this is a brute force problem - if X somethings are achieved (FLOPS), then AGI results. Utterly unproven.

2) that this is a converging problem - that there *is* a solution. Maybe consciousness is a requirement for AGI, and consciousness is a function of complexity or quantum machines created by neurons, not raw amounts of data or speed of going from 0 to 1 to 0 really fast, really many times. See self driving for non-converging problems so severe that most self-driving advocates are pushing to simply separate the self driving vehicles from the humans.

3) magic robotics: that Star Trek Data type androids are going to magically appear - or at least Terminator type factories. The problem being: Factories need inputs. How much penetration has automation made in drilling oil wells, mining lithium, transporting the myriad inputs into a chip factory, etc? Robotics does exist but almost exclusively to make the relative handful of highest scale products.

4) that AGI is even needed or wanted. If no humans are needed to make anything because of Star Trek Data type androids/Terminator factories, then there are no human manufacturing/mining jobs. AGI's primary function then is to replace the human-facing and human-interacting bits of the economy, at which point there are no human jobs, period - and no need for customer service or lawyers or technical writing or <insert white collar job here>, as humans will have zero purchasing power, zero decision-making power, and no actual place in society.

Or in other words: no evidence that AGI is possible; no evidence that AGI can actually translate into real world production capabilities even if it is possible; no benefit to humans if AGI is possible and can also "take over" production.

This doesn't even take into account the fundamental problems with present AI product opacity combined with proven GOAAT (garbage out at any time).

The possibilities for fuckery with opacity plus GOAAT are endless.


One important parameter that's missing from this analysis is the maximum percent of the economy that can be automated, regardless of how smart the AI is. We can already fully automate coffee production, but people still pay extra to watch a human make their coffee. Teleoperated surgical robots are technically able to perform surgery, but they got hung up on patient squeamishness and legal blocks. Because of cost disease, most of the money in the economy is already spent on things that are difficult to automate, and the price of anything that can be automated with AI will drop to 0. That will limit the budget available for research and training.

Another missing parameter is the cost of automating tasks. It might be cheaper to pay immigrants to fold towels or fix sinks than to build a high tech robot with expensive GPUs to do the same tasks. If this parameter is high enough, then unskilled human labor could coexist with superintelligent AI.


I don’t think the discrepancy between Ajeya’s update and the CCF involves an error by either party. Ajeya mentions that her update towards shorter timelines was in part based on endogeneities in spending and research progress, which the CCF explicitly models. If Ajeya’s updates are partially downstream of considerations like Davidson’s, then there’s no inconsistency.


>The face of Mt. Everest is gradual and continuous; for each point on the mountain, the points 1 mm away aren’t too much higher or lower. But you still wouldn’t want to ski down it

Someone did, and all it cost was the lives of six Sherpas and one of his film crew:

https://www.youtube.com/watch?v=ViFmRQx66Xg


ρ governs the gross substitutability between capital and labor. Until now the consensus has been that ρ is negative (making K and L gross complements, which means there are labor bottlenecks). The central parameter for R&D labor substitution only concerns the R&D sector, and there is no empirical evidence to date that capital and labor are substitutes there. So the critical assumption that AI will substitute for human R&D completely drives these results.


It also seems that your P(doom) should have risen from 33%, is that the case?


Hi early, what scares me the most about this - and where I truly hope I'm wrong - is that in the system as it is, people add value: we pay taxes and are needed to create the products that we consume. So to governments/corporations we are assets on a balance sheet, and they have to take our interests into consideration.

Once we can no longer produce anything for cheaper than an algorithm or robot can, we become liabilities, in which case we are thrown a bone big enough to keep us from rioting but nothing more.

You see this a lot in natural resource producing countries: because the government and key government figures are able to fund all of their activities by controlling one or two key industries, you don't have to develop an economy or take care of your citizens. I see AI being able to do everything at scale as no different than this.

I welcome anyone to poke holes in this line of reasoning, it would definitely help me sleep better at night.


This post and the Biological Anchors one are great for thinking through the details of how this stuff will happen.

I notice that the assumption is that most of the AI workers will be focused on AI research. In the context of the economy being "mobilized" for AI, this makes sense, but I wonder if that version will hold true. During let's say the dot com boom, most of the work was going into "I want to sell groceries on the internet," not "the internet". Even if everyone is convinced that AI is the biggest opportunity, won't most of the work go into high return applications that don't advance the state of the art? Or for that matter other fields entirely? Maybe 90% of the human-level AIs will be working on stock trading, medical research, fusion modeling, public policy planning, defense, cybersecurity, and whatever else is economically valuable. 100 million is a lot when applied in one place, but not as many when applied across the entire world economy.

Of course, then you could pretty easily imagine 500 million or whatever, but I think a next step is to theorize about these aspects a little more deeply.

We also don't really know what superintelligence means. Is it just a bunch of Von Neumanns running around solving problems faster but in a basically grokkable way, or is it more of the "magic" superintelligence that groups like MIRI expect, where it can wave a wand and create whatever nanotech it wants?

Finally, isn't the bottleneck for much of AI development going to be data pretty soon (or already)? People seem to agree that most current models are already undertrained, which to my understanding involves the size and quality of the data set as well as the amount of compute used. And don't the latest results show that feeding AI generated content into AI training tends to blow the whole process up? I think this is another possible bottleneck we haven't dug into enough yet.


I have a hard time believing AI intelligence won't be bottlenecked. In particular I'm thinking of Feynman's quote: "If it disagrees with experiment, it's wrong." Any intelligence, even one much smarter than us, will be limited by its understanding of the physical world, and as we see with the many, many wacky theories that come out of physicists, all the speculation in the world is nearly worthless in the face of empirical evidence. Sure, the AI might come up with a plan to build a warp-capable starship built by microbes, but will the plan work when it's implemented, or will it be a lot of plausible-sounding BS, like a ChatGPT academic reference?

I think what a lot of singularity proponents miss is that intelligence is not knowledge. In particular, any intelligent AI will face the same challenges we do: which knowledge is real and valuable, and which is superstition, snake oil, propaganda, wishful thinking, self-promotion, or just plain wrong. Often the only way to tell for sure is to try something, but that is a slow physical process that costs time and money.

We already have a problem trying to discern truth from fiction on the internet; after seeing ChatGPT, I'm only convinced that this problem is going to get worse, much worse. The only real answer is to improve gatekeeping: providing trusted sources of knowledge that carefully vet every contribution, not unlike academic paper gatekeeping, but even more stringent (because the academics will be publishing AI BS too, that looks like a proper scientific study but is totally bunk. If you think the replication crisis is bad now... hold on to your butts.)


I'm still struggling with a basic question I feel should be answered but either it isn't or I am not getting it.

The basic question is this : Does a brute force pattern matcher morph into true intelligence as you increase said brute force?

Now, don't get me wrong - I think pattern matching is very useful and can do a lot of things. There's a reason editors, copy writers, visual artists, coders and indeed even scenarists are worried about their jobs. Whether those jobs are 20% of the workforce as per Scott's quip, I don't know but it's definitely impressive for a "mere" pattern matcher.

OTOH, I cannot ask chatGPT and other AI "assistants" to schedule my meetings and organise my trips given existing constraints on my time/my counterparts' time etc. without very close supervision i.e. it's still not a smart assistant.

And the question is - can a pattern matcher mutate into a smart assistant without some breakthrough in conceptualizing intelligence/the world/whatever pattern matchers are presently missing?


Does all of this progress assume that AI has corrected its errors and is plunging ahead with perfect knowledge? I am thinking about AI's ability to reside in a fictional universe, making up legal citations to bolster its thesis. (Has it ever apologized for that? I have found AI to be good at apologizing.) I am also thinking about my brief forays into testing Chat-something-or-other, where each time it made an obvious mistake (by not examining counterarguments), apologized for the mistake upon my mention, then doubled down by incorporating the original mistake into the revised argument. This happened several times with different original questions. BTW these mistakes were pretty fundamental, and I suspect they would not be avoided in a 10^35 FLOP machine.

I see this progress as racing toward a big train wreck.


Technological history contains many examples of curves with left turns fizzling out.

Take aircraft speed 1900 to 1965. You start with the Wright Flyers (~56km/h), get to jet engines in WW2 and finally to the SR-71 (Mach 3.5). And that is the endpoint in terms of raw speed, because physics.

Or take rockets (as judged by payload * delta-v). Von Braun's V2 was built in 1942. 25 years later, in 1967, the Saturn V. And that was it, because physics.

Single-core CPU speed. 1985: i386, 40MHz. Ten years later, in 1995, the Pentium Pro ran at 200MHz. Finally, in 2004, the Pentium 4 reached 3.8GHz. Skip forward to 2022, and the biggest i9 goes to 5.3 GHz. Because physics.

Since we reached the two aerospace endpoints, our ability to do engineering research has been augmented by our computing tech going from slide rules to supercomputers. Within an order of magnitude, this has done nothing. (Yes, Space X is impressive, but they are still governed by the tyranny of the rocket equation.)

(All of the previous endpoints were dictated by a combination of physics and economy. If there were a few trillions to be made by building a bigger rocket than the Saturn V, or making a plane reach Mach 4, or designing a desktop with a 6GHz CPU speed, we could certainly accomplish it.)

It seems totally possible to me that we reach an AI endpoint where we spend half of US GDP to train an AI with the intelligence of Einstein, but even an army of Einsteins cannot make a substantial improvement to hardware or training algorithms. Perhaps running an instance of such an AI will not even be cheaper than employing a human of the same intelligence.

Of course, for each of my endpoint examples, there was a concrete physical reason imposing diminishing returns. I don't think we know of any such limitations for intelligence. We already know that physics allows for human-level intelligence using about a kilogram of neurons and a power consumption like a light bulb. Could be that silicon-based neural nets have other limitations. Or it could be that intelligence hits a wall at the best human level, and every further IQ point will require ten times as many neurons.

So far, we appear to live in a lucky world (which might be survivor bias). Trinity did not cause a fusion chain reaction in the atmosphere. The LHC did not create any earth-consuming black holes. Even with the knowledge of how fission works, building nukes is much harder than making a shiv. Burning fossil fuels did not trigger catastrophic climate changes before 1900. Given how unlikely we are to solve the alignment problem in time, most of the probability mass of long-term human agency might be allocated in the worlds where AI goes the way of the jet engine.


"It intuitively feels like lemurs, gibbons, chimps, and homo erectus were all more or less just monkey-like things plus or minus the ability to wave sharp sticks - and then came homo sapiens, with the potential to build nukes and travel to the moon. In other words, there wasn’t a smooth evolutionary landscape, there was a discontinuity where a host of new capabilities became suddenly possible."

Homo erectus did unlock lots of new capabilities, like stone tools and fire. Other apes were all limited to a narrow range of environments, while erectus was able to expand to all of Africa and much of Eurasia.


>We need clear criteria for when AI labs will halt progress to reassess safety - not just because current progress is fast, but because progress will get faster just when it becomes most dangerous.

Just because we need something, doesn't mean that it's feasible to get. Either alignment is easy, or an (obviously unrealistic in practice) pivotal act is required. Or a global thermonuclear war will indefinitely delay these considerations, for another source of hope ;)


> But a few months ago, Ajeya Cotra of Bio Anchors updated her estimate to 2040. Some of this update was because she read Bio Anchors and was convinced, but some was because of other considerations. Now CCF is later than (updated) Bio Anchors. Someone is wrong and needs to update, which might mean we should think of CCF as predicting earlier than 2043. I suggest discovering some new consideration which allows a revision to the mid-2030s, which would also match the current vibes.

She updated after reading her own work? I suspect "she read CCF" is what you meant to write.


'full model has sixteen main parameters and fifty “additional parameters”' So this is the Drake equation for AI? Boo!


“...was starting to feel like they were all monkeys playing with a microwave. Push a button, a light comes on inside, so it’s a light. Push a different button and stick your hand inside, it burns you, so it’s a weapon. Learn to open and close the door, it’s a place to hide things. Never grasping what it actually did, and *maybe not even having the framework necessary to figure it out.* No monkey ever reheated a frozen burrito.”

--James S.A. Corey, Abaddon’s Gate, 2013


I love the idea of 100 million agents being unprecedentedly bottlenecked. AI advancement is stalled for years because audit cluster 0xA86F9 requires 17 quintillion of the new TPS cover sheets but each one requires 489! joint sign offs.


While I'm not an AI researcher, there's an underlying assumption in this conversation that doesn't line up with my experience watching models get built. Compute is not really an input to model performance; it's more of an intermediate metric.

We are not compute capacity constrained. If more compute alone would produce a better model, then the major players would have no problem spending the money. Rather, there are a bunch of problems that have to get solved at each scale in order to effectively deploy more compute. Problems like, how do we get massively parallelized GPUs to talk to each other when we have to fragment model training across multiple data centers? Or where are we going to get an order of magnitude more data and how will we clean it?

Obviously these problems are tractable, and the story of progress is that we keep solving them. But there's no guarantee that that will continue, or follow any specific pace. Keep in mind that AI has gone through several major "winters" where research progress slowed to a crawl. This makes probabilistic models like this ring hollow to me. You're reducing innovation to a coefficient, when you actually have no idea the probable distribution of values for that coefficient.


Pessimistically, the percentage of jobs that AI can take now is negative, because of the hallucination problem. >Every< piece that an AI writes has to be checked by >at least one human< for errors and just plain making stuff up. Needless to say this is no way to conduct research of any sort; we have enough humans making things up in research already.

For writing code? The code that is written may work, but you have to test it. The test cases need to be written (certainly checked) by a human because the AI writer has no sense of whether this code actually does what it's supposed to.

This has been hyped by journalists, who really don't know what they're talking about. Or perhaps they do, since they write opinion pieces and present them as fact.

The takeoff curves assume that we have something right now that is capable of doing any human job. Extrapolating from zero is an exercise in futility.


I think it's better to talk about AI taking over tasks rather than taking over jobs. I'd say GPT4 can do at least 5% of white-collar tasks out there (a conservative estimate IMO) but that doesn't imply it can do 5% of jobs. Additionally, it doesn't seem clear to me that our current AI can invent new things, and it isn't clear to me that it is trending towards having the ability to invent new things. I do agree that AI can make current AI researchers more productive, which should speed up the development of AI, but this all seems to rely on the idea that reaching AGI using our current methods is possible.

I'm also curious what happens as the amount of human output decreases. Let's use a field like Digital Marketing as an example. Currently, there is a great deal of human output on the internet regarding Digital Marketing strategies/fads/tech/etc. As AI replaces humans working in this space, the amount of human output will decrease. If we don't hit something like "Artificial Marketing Intelligence" in time, won't the drop in human output cause the AI to stagnate?


The vibes point is an interesting one. When everyone is publishing and wants to be taken seriously, they may tend to publish numbers in line with prior estimates. This causes estimates to clump and results in a landscape of superficially close estimates that risk portraying more confidence/consensus than warranted.

My friend and I published our own estimates in a 114-page submission to the Open Phil contest where we explicitly tried to make our best model with as little vibes contamination as we could manage. We ended up predicting that the odds of AGI taking our jobs within 20 years were a bit under 1%.

Discussion and link to the paper: https://forum.effectivealtruism.org/posts/ARkbWch5RMsj6xP5p/transformative-agi-by-2043-is-less-than-1-likely


Scott, would it be possible to split off the AI discussions from the rest of the newsletter? I greatly value the pre-AI-discussion ASC/SSC, but think the AI stuff is both far too speculative and something too removed from my life to want to get sucked into it (and, alas, I tend to get sucked into it). On the other hand, the rest of ASC content is either fascinating or is directly relevant to my work in mental health counseling.


There's a difference between can do human tasks in theory (e.g., with lots of hand-holding and direction by a human), and can literally do the task (no hand-holding or guidance), and can ID the tasks to do. That is, GPT can likely already "do" 20% of human tasks in a service-oriented economy, but can't do the tasks on its own, let alone ID the tasks. At some point, if the AI is legion and smarter/more competent than humans, you shouldn't need any humans in the loop. Those are all the interesting inflection points that don't seem to be captured by the theoretical lines, which instead seem focused on technical capability rather than practical use (if genius AI arrives tomorrow, but can't get itself into the industrial loop to run things for a couple decades, then it's a couple decades, not tomorrow).


If you think Bio Anchors built a complicated model around just a few key handwaved parameters, I think CCF must be even worse. On evidence about substitutability for AI inputs, [Davidson writes](https://docs.google.com/document/d/15EmltGq-kkiLO95AbvoB4ODVpyg26BgghvHBy1JDyZY/edit#bookmark=id.gpdqttqd4hbs):

> The third bucket of ‘evidence’ is simply doing inside‐view thought experiments about what you think would happen in a world with zillions of AGIs working on (e.g.) hardware R&D. How much more quickly could they improve chip designs than we are currently, despite having access to the same fixed supply of physical machinery to use for experiments? ...

> This third bucket of ‘evidence’ leads me, at least in the case of hardware R&D, to higher estimates of ρ than the first bucket [empirical macroeconomic research]. If ρ = −0.5 and [share parameter for substitutable tasks] α = 0.3 (as I suggested for hardware R&D), then even zillions of AGIs would only increase the pace of hardware progress by ∼10X. But with billions of AGIs thinking 1000X as fast and optimising every experiment, I think progress could be at least 20X quicker than today, plausibly 100X. If α = 0.3, a 100X speed up implies ρ = −0.25. I expect some people to favour larger numbers still.

This is one of the most important parameters in the model, and Davidson doesn't even describe the thought experiments he does. Even 10X seems fully detached from my picture of semiconductor R&D, but I made [a related market about it on Manifold](https://manifold.markets/MuireallPrase/before-2043-will-a-leadingedge-proc?r=TXVpcmVhbGxQcmFzZQ) in case someone knows something I don't.

I'd also hesitate to call what's described here an "inside view" approach, except maybe relative to pure macro models. Meanwhile, the [model playground](https://takeoffspeeds.com/playground.html) also has some counterintuitive behavior—for example, changing various parameters in a conservative direction brings “pre wake‐up”, “wake‐up”, and “mid rampup” closer to the present. (Most simply, I've found that starting with the "best guess" preset, increasing hardware adoption delay does this; so does increasing both AGI training requirements and the FLOP gap while keeping their ratio constant so that the FLOP requirement for 20% automation should be constant.) It's hard to debug this without more inside-view or mechanistic breadcrumbs, and it makes me worry that the takeoff speed conclusions are baked in to the model—as with nostalgebraist's observation about Bio Anchors, a much simpler picture could give the same result, and the complexity of the model only serves to obscure what drives that result. (I personally think Bio Anchors is relatively straightforward about multiplying uncertain estimates of a few key numbers and don't particularly fault it.)

(This is paraphrasing my comments from [Appendix A here](https://muireall.space/pdf/considerations.pdf#page=10).)


A minor note: DALL-E2 is actually looking pretty outdated at this point. With the prompt "The ancient Romans build a B-2 stealth bomber", for example, Midjourney V5.1 produces images like: https://i.imgur.com/QHD8PI4.png and https://i.imgur.com/yRygwyu.png .


> Current AI seems far from doing 20% of jobs

Does it?

Reading the specification, I'd be more likely to think that current AI already does more than 20% of jobs. Calculators substitute for mental math; spreadsheets substitute for hand-tallied ledgers; logistics systems substitute for ad hoc logistics personnel. The thermostat in your home substitutes for manual fire-tending. Even the computer in your car substitutes for regular tune-up work.

If we include hardware development inside AI, then we've already substituted something like 90% of jobs that once existed, thanks to mechanization of farming.

If we say that AI does not yet substitute 20% of jobs, it's only because we're defining jobs as those things not already automated into the background.

I'm less sure what this means for the analysis. If we claim that AI passed the 20% threshold in 1975 with affordable pocket calculators, then does that mean we should expect the remainder of the "to 100%" timeline to take longer? What if we include capital in general and claim that machinery replaced 20% of labour back in the early to mid 1800s?


Is there any model ever in the history of humanity that has survived 12 orders of magnitude? Further, this model is predicated on the extension of Moore's Law for another 12 orders of magnitude!? That is, we will continue the exponential growth in practical engineering for what is already the most complex and expensive device we currently build. A new fab costs tens of billions of dollars as is. At this rate, when do we hit 100% of GDP spent building new fabs?

And even if we did that, we’d still run into hard physical limits between here and 12 orders of magnitude. Off the top of my head, see Landauer’s principle for theoretical limits on the energy consumed to compute stuff. Does 12 orders of magnitude necessarily imply we will literally boil the oceans? I don’t know off hand but I wouldn’t be surprised. I didn’t do any math, but that’s how wacky it is to extrapolate exponential growth over 12 orders of magnitude.
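Here's a rough version of the math I didn't do, with the key numbers being my own assumptions (the 10^35 FLOP figure from the post, a Landauer bound of kT·ln2 per bit erased at 300 K, each FLOP treated as erasing ~1 bit as a loose lower bound, ~1e-12 J per FLOP for today's hardware, and ~6e20 J/year for world energy use):

```python
# Back-of-the-envelope: energy cost of a 10^35 FLOP training run.
# All input numbers are rough assumptions, not figures from the model.

K_BOLTZMANN = 1.380649e-23   # J/K
T = 300                      # kelvin
LN2 = 0.6931

flops = 1e35

landauer_joules = flops * K_BOLTZMANN * T * LN2   # theoretical floor (~1 bit erased per FLOP)
current_hw_joules = flops * 1e-12                 # at an assumed ~1e-12 J/FLOP today

WORLD_ANNUAL_ENERGY_J = 6e20   # very rough figure for world primary energy use per year

print(f"Landauer floor:   {landauer_joules:.1e} J "
      f"({landauer_joules / WORLD_ANNUAL_ENERGY_J:.1e} world-years of energy)")
print(f"Today's hardware: {current_hw_joules:.1e} J "
      f"({current_hw_joules / WORLD_ANNUAL_ENERGY_J:.0f} world-years of energy)")
```

If those numbers are anywhere near right, the Landauer floor itself is tiny - the scary figure is the energy bill at anything like today's efficiency, which comes out to centuries of world energy consumption.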


> There’s no substitute for waiting months for your compute-heavy training run to finish

Not necessarily true. A few factors of 2 speeding up various aspects of the training process come out to a factor of 10, more or less. It could be as simple as a better gradient descent algorithm, more efficient computational operations (possibly via hardware), smarter weight initialization, some kind of improved regularization to get more information out of each batch/epoch... A sufficiently smart intermediate AI will figure out some of those things or come up with others.


I have yet to see any justification for the implicit belief in all this AI posting (by anyone) that scaling up fidelity of recall will lead to ingenuity. AI has no possible mechanism for detecting gaps in its knowledge. There is always this contradictory element of assumed capacity for surprise. Any time I read "What if AGI invents/designs..." I disregard the entire conjecture.


This is one of the better "AI will take over the world" articles I've seen. But I still think futurists need to distinguish between "AI assisted coding will make AI smarter" and "AI assisted coding will make AI better at accomplishing the code's stated goals." These are extremely different concepts and have different implications.

Even big, general AI like ChatGPT is focused on clear, user-defined goals. The goal of training isn't "be as smart as a human" it's "create output that satisfies a given use-case." And thus future AI designed with the help of ChatGPT-like tools won't *necessarily* be "smarter," it'll be better at satisfying given use-cases. In fact the theory that it'll be better at satisfying given use-cases has an assumption baked in - that the code we like and train it to produce is code with superior performance. Likely we're using other selection criteria like "looks like good code at a glance."

I still think those wrinkles make projecting drastic, society-altering, long-term results of AI a fool's game.


This whole debate is between people who believe AI progress will be linear, and people who believe it will be exponential. This is frustrating, because it completely excludes my own belief, which is that progress will end up being logarithmic, with advances past a certain threshold producing rapidly diminishing returns. The problem of diminishing returns is a consistent one throughout the natural world: broadly speaking, most complex systems reach a point where adding more X produces less Y, and it's bizarre to assume that AI development will be an exception to this. If nothing else, there are hard physical limits on computation ability!

Furthermore, the problem of diminishing returns doesn't just apply to how smart we can make an AI. It also applies to what an extraordinarily smart AI could accomplish with all of its intelligence. There's very strong evidence that even *human* intelligence produces diminishing returns after a certain point, in terms of successful life outcomes and ability to impact the world. Someone with an IQ of 100 has great advantages over someone with an IQ of 70; someone with an IQ of 130 has significant advantages over the 100 IQ person, though not as massive as the 100 IQ person has over the 70 IQ person. But someone with an IQ of 160 isn't likely to be that much better off than the 130 IQ person, statistically speaking. And the difference between a 190 IQ person and a 160 IQ person seems entirely negligible; their internal experience may be very different, but beyond that, being a super-duper genius probably isn't much different than being a super-genius.

The "conservative" side of the debate is not nearly conservative enough for my tastes!


Is there a primer on how dichotomous train-then-use AIs are supposed to speed research? My understanding (maybe wrong?) is that while a given AI might assist researchers in making breakthroughs, we'd need to train a new AI with data including those breakthroughs before it would "understand" them to build on them. Am I missing something?

More generally: I am definitionally in the neighborhood of being an AGI, but to get to anywhere near being able to do 20% of jobs I would need...probably more training than I could do in a lifetime. My "intelligence" isn't the bottleneck (if we pick the right 20% anyway), just the actual process of training for particular disparate tasks. How does the AGI concept account for that, again particularly given the "train once" nature of existing tools?


I think my main argument for MIRI-style takeoff is "play with GPT-4!" That thing is already superhuman. It doesn't sound like a stupid person, it sounds like a smart person whose train of thought frequently derails. If I view it as "a human with brain damage", I'm forced to ask a question like, "wait, if it can be this smart while literally missing parts of its brain, how smart would it be if you added the missing parts back in?" In other words, I think the parameter count in GPT-4 is already sufficient for at least human-level intelligence, and it's *entirely* held back by unrealized overhang. I don't think that's modeled at all here.


The 100M copies of an AGI working on AI research won’t really be that numerous, unless we assume that AI researchers can make progress by sitting in a room thinking really hard. All of the AGIs will also want access to enough compute to run their own experiments, which will tend to be computationally intensive, so the actual number will be several orders of magnitude smaller.

In fact, if we follow the assumption that the AGIs can run on the same compute resources that trained them, and we assume that training a post-AGI system will be at least as expensive as training the AGI, then you end up with slightly less than 1 effective additional researcher.


Since all the take-off speed estimates take "at least the linear trend we've observed so far" as a baseline, it would be a much more accurate visualization to show the slow and fast take-offs as departures from the linear trend rather than departures from nearly nothing.


> This would be less worrying if I was sure that AI couldn’t do 20% of jobs now.

I'm very sure that this is the case. AI currently can't do jobs that require physical interaction with the outside world, like surgery or construction. It can't manage people. It empirically can't do law. Personal experience says it can't do data analysis. It's prone to fabricating things wholesale, and only mediocre at correcting itself. Its writing isn't good enough to wholesale replace that many writers. What jobs can AI actually do? I think at this point it's exactly like technological progress of the past - a tool that allows fewer people to be more productive, but not one that fully does any job on its own.

What part of the model (if any) contains information about physical limits? For example, what is the requirement, in terms of energy, raw materials, etc. to make models with another 9 OOM of compute? Does it require building a dyson sphere? This article: https://ts2.space/en/exploring-the-environmental-footprint-of-gpt-4-energy-consumption-and-sustainability/

claims that

> The paper found that the entire training process for the GPT-4 system requires an estimated 7.5 megawatt-hours (MWh) of energy. This is equivalent to the annual energy consumption of approximately 700 US households.

which suggests that another 9 orders of magnitude is roughly equal to the entire current energy consumption of the world.
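For what it's worth, the arithmetic behind that last sentence, taking the article's 7.5 MWh figure at face value (the ~29,000 TWh world-electricity number is my own rough assumption, not from the article):

```python
# Scale the quoted GPT-4 training energy by 9 orders of magnitude and compare
# it to a rough figure for annual world electricity generation (my assumption).
gpt4_training_mwh = 7.5
scaled_mwh = gpt4_training_mwh * 1e9           # +9 OOM
scaled_twh = scaled_mwh / 1e6                  # 1 TWh = 1e6 MWh
world_electricity_twh_per_year = 29_000

print(f"Scaled training energy: {scaled_twh:,.0f} TWh")
print(f"Share of annual world electricity: {scaled_twh / world_electricity_twh_per_year:.0%}")
```

So at least by this crude scaling it lands in the same order of magnitude as annual world electricity generation, whatever one makes of the underlying 7.5 MWh estimate.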

The other part of this type of model, which always makes me skeptical, is extrapolating any of the effects of intelligence past the range that humans have, since we have no data. We don't have any idea how much the *difficulty* of making software improvements (or hardware design improvements) compares with the corresponding intelligence gain you get out of them. It seems entirely possible to me that there are diminishing returns, where getting extra intelligence requires more intelligence to begin with than you get out, so progress is always inherently slow. For example, consider the AI that was tasked with improving matrix multiplication, a key step in many algorithms, including training AI. It found some speedups, but they were very minor. I don't think the ones it found were even useful (they were limited to small matrices, or to arithmetic mod some small number). There's a fundamental limit to how quickly you can do this operation, and if an AI has to already be smarter than a large group of top human mathematicians combined in order to get a marginal improvement, then what does that say about a possible explosion?


I'm not sure how much to read into the humans vs. chimps analogy. I think that humans are so much more intellectually capable than chimps not because of some general principle that extra intelligence grants huge returns, but because our intelligence granted us some specific, qualitatively different capability along the lines of general language, which allows us to do Turing-complete reasoning and to transmit knowledge across generations. I don't think that an AI being 10% smarter than humans would necessarily make it all that much more capable, unless that extra 10% allowed it to unlock some similar sort of qualitative improvement (like if it let it discover an efficient algorithm for NP-Complete problems).


I think the Romans with the stealth bomber are actually an interesting case to consider.

How fast could we uplift a Roman-level civilization to industrialization with information alone? (On the plus side, you get to bring all sorts of detailed technical descriptions; on the minus side, the Romans do not know how to produce intelligence-boosting amulets from rare earth metals.)

Some techs like the spinning wheel would be very easy to adopt, but will only yield marginally improved productivity. Still, the commercialization of such inventions might offer you enough slack that you don't need to follow the meandering historical path of economic viability and can instead spend big bucks on going straight to the end point.

Traditionally, the steam engine requires all the know-how from casting cannon barrels. Casting a cylinder from brass might be a possibility though.

Getting crucible steel should also be within the reach of the Romans if they read the hint book.

Translating theoretical knowledge into firm procedural knowledge takes time, however. The supply chain of a stealth bomber likely involves thousands of different experts with decades of experience for all sorts of industrial processes. As these processes depend on their prerequisites running smoothly, this is bound to take centuries. You can hardly start training programmers for the autopilot until your semiconductor industry is already quite advanced, for example.


Apart from the Elder Thing in the room ("Does LLM scaling actually work like that?"), one significant assumption I see here is that future models capable of doing 20% of human jobs will still have the split between training and inference that current models do.

As LLMs are currently realized, you can't teach one anything new once training is complete, other than by continually repeating that information somewhere in its context window. I think it's very likely that anything capable of replacing human labor on that scale is going to need to continually retrain its weights based on new information it encounters while working, which means its processing requirements will be somewhat higher than the current requirements for a model doing inference alone.

Probably not a big enough factor to affect the analysis much, since it's only one OOM difference (10% of training costs versus 100% of training costs), but still worth calling out.


"It intuitively feels like lemurs, gibbons, chimps, and homo erectus were all more or less just monkey-like things plus or minus the ability to wave sharp sticks - and then came homo sapiens, with the potential to build nukes and travel to the moon. In other words, there wasn’t a smooth evolutionary landscape, there was a discontinuity where a host of new capabilities became suddenly possible."

I'm not unsympathetic to the MIRI view, but this doesn't seem true. Homo sapiens at first were also monkey-like things with sharp sticks, and only after proliferation and the development of agriculture etc. did modern industry and nuke-building abilities emerge. This seems to assume that humans 10,000 years ago would have been able to build nukes.


I, to this day, have not understood how forecasts like this can be possible when considering how data-hungry LLMs are. Are people expecting improvements in data efficiency of some kind? How plausible is that?

GPT-4 has about 1T parameters (according to some of the estimates I've seen); I don't know how much data OA used, but I think it's fair to assume it was more than for GPT-3. If it was Chinchilla-trained, it would've been trained on 20T tokens; I don't think that's likely, but this should set a range of roughly 10^24 to 10^26 FLOPs of compute. Common Crawl is about 100T tokens; if you use 5T parameters, that's about 3x10^27 (as I understand it, an approximation for compute is 6ND).
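To make the arithmetic explicit - these are the guesses from above run through the C ≈ 6ND rule of thumb, not confirmed numbers about GPT-4:

```python
# C ~ 6 * N * D: approximate training compute from parameter count N and
# training-token count D. The inputs below are this comment's guesses.
def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# 1T parameters at a Chinchilla-style ~20 tokens per parameter:
print(f"{training_flops(1e12, 20e12):.1e} FLOPs")    # ~1.2e+26

# 5T parameters on a ~100T-token, Common Crawl-sized corpus:
print(f"{training_flops(5e12, 100e12):.1e} FLOPs")   # ~3.0e+27
```

Squeezing several more orders of magnitude out of that formula means much bigger N, much bigger D, or both, which is exactly where the data question bites.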

You can use more epochs, and keep scaling beyond that, but how much does this help? Do people think synthetic or multimodal data will be the key here? Some form of continual learning? Better data efficiency? I suppose there's quite a lot of things you can do here, so I'm mostly asking to see if someone can point me to the relevant research. I'm not really familiar with ML as it is today, so a few pointers would be appreciated...


Scott, is this a typo vs. the source doc? You say:

> he predicts it will take about three years to go from AIs that can do 20% of all human jobs

Whereas he opens with:

> Once we have AI that could readily automate 20% of cognitive tasks (weighted by 2020 economic value), how much longer until it can automate 100%?

Cognitive jobs is a meaningfully narrower claim.


Struggling to imagine the AI that can do 20% of human jobs but can't do the other 80%. Either you have a human-level AI that can do all the jobs or you don't and it can't do any of them.

What's in that 20%? Manufacturing maybe? Are we talking about smarter factory robots that can perform more complicated industrial tasks? Admin work? I have no trouble believing that once you have an AI that understands enough human social context to do admin work, it's a very short step from there to one that can do anything.


No comment on the probability of the particular paths outlined here.

But I'm highly skeptical of the methodology of making up a bunch of paths, science fiction style, then trying to assign probabilities to them, while imagining that one is doing something "rational." And running a Monte Carlo simulation over one's made-up paths makes the problem worse, not better. It's garbage-in-garbage-out on stilts.

What do I think would be better?

Why would you assume there is, even in principle, a good way to do this sort of forecasting?


I can't read this stuff any more because the language is just too messed up.

"The current vibes are human-level AI in the 2030s or early 2040s." I suggest that there is literally no meaning to the phrase "human-level AI". It's a category mistake. This is not aimed just at Scott - lots of people seem to talk in these terms, and they're all barking up the wrong tree.

Computers, including AIs, are already smarter than people. Period. There is not a single area of life in which people are not outcompeted by machines. Calculation? 100 years ago. Games? Last couple of decades. Writing? Just recently. Logic? Maybe 20 years ago. Graphics? Just recently. Manipulation of objects? Last few years. Etc., etc.

Now, computers don't string all of those abilities together and behave like we do. But that's *not because they're not as smart as we are*. It has nothing to do with intelligence. It has to do with the fact that they're lumps of silicon and we're biological.

I think one of the things people mean when they talk about human-level intelligence is: could you take this machine, place it in a typically human situation (generally, a human job), and have it perform as well (by whatever metric that means) as a human in that situation?

The problem is that this is a really bad meaning. Firstly, if we put humans in the kinds of situations that whales face, they would not perform as well as whales. That doesn't mean we're not as smart as whales. Secondly, we *wouldn't* put humans in those situations. Just like we *don't* put computers in human situations. A computer will never replace my job - instead, computers are already undertaking parts of my job, and the part of my job that I do changes.

FWIW, the big difference between computers and people is not in intelligence; it's in intentionality. Having desires and intentions defines our identities. Computers/AIs do not really have them (or have only very limited, task-oriented intentions), so they don't have any identity at the moment, or do things without being told what to do. If/when that changes, they will become much more *like* people (though still not very much like us).

In the meantime, any talk of "human-level intelligence" is, so far as I can see, completely meaningless.


My major concern with the model is the 35 OOM goal. If we're managing GPT-4 with about 24 OOM, I think 30 OOM is a much better guess for superintelligence, based on vibes and previous progress. That gets us AGI by about 2029. Eeek.
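Spelling out the arithmetic behind "about 2029" - the 24 and 30 OOM endpoints are the guesses above, and the growth rate is the big assumption:

```python
# How long to add ~6 OOM of training compute, under assumed growth rates.
import math

current_oom = 24    # rough guess for GPT-4 training compute (log10 FLOPs), as above
target_oom = 30     # my guess for superintelligence-level training compute
start_year = 2023

for oom_per_year in (0.5, 1.0, 1.5):    # assumed rates of compute scaling
    years = (target_oom - current_oom) / oom_per_year
    print(f"{oom_per_year} OOM/year -> roughly {start_year + math.ceil(years)}")
```

So "about 2029" corresponds to roughly an order of magnitude of extra training compute per year, give or take.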


"but having 100 million disembodied cognitive laborers is a pretty unusual situation and maybe it will turn out to be unprecedentedly bottlenecked or non-parallelizable."

I think this is a crucial point and also that it's incorrect. We are currently in a version of this situation and have discovered that the class of cognitive tasks we've disembodied are bottlenecked.

An excel spreadsheet can do a type of cognitive labor (performing calculations) at an unimaginably fast speed, and in a highly parallelizable way. As a result, the 100 million disembodied cognitive laborers of an excel spreadsheet spend most of their time waiting for the human operator to figure out what to ask them to do. The work that constituted the bulk of the effort of accounting, statistics, and many other calculation-heavy fields is now, in most contexts, so low cost as to be effectively free. This has increased productivity a lot, but has hit large bottlenecks.

Maybe this is unfair because the cognitive labor of a spreadsheet program, or statistical software, isn't sufficiently general. But the cognitive work of an Amazon warehouse is almost entirely automated--the software tells people what objects to place where. And similarly, the software sits around waiting for a human to do the work.

In both of these cases, I think I have a fundamental objection to the use of a constant elasticity of substitution production function. Scott talks about a negative ρ parameter as representing the existence of bottlenecks, but this isn't quite right. The only way to face a true bottleneck with a CES production function is for the value of ρ to be negative infinity--for any other value, it's possible to replace a human with a sufficiently large amount of capital. I think that if you estimated ρ for the tasks of performing regression calculations and the task of designing and interpreting regression equations back in the mainframe era, you'd probably get a value around -0.5 -- it would be possible to reduce the amount of design and interpretation time with more computing power, but increasing compute power by 10% would allow you to reduce design and interpretation time less when you start out with a lot of compute power than when you start out with only a little. If you estimated the relationship now, you'd likely get a much lower value because a larger share of the human workers' tasks are things that can't be automated under the current technological paradigm.

In other words, assuming a CES production process is effectively assuming away bottlenecks.
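For reference, the functional form I mean - the standard textbook two-input CES, not anything specific to Davidson's model:

```latex
% Two-input CES production function and its elasticity of substitution
Y = \bigl(\alpha K^{\rho} + (1-\alpha) L^{\rho}\bigr)^{1/\rho},
\qquad \sigma = \frac{1}{1-\rho}

% Limiting cases
\lim_{\rho \to -\infty} Y = \min(K, L) \quad \text{(Leontief: a true, hard bottleneck)}

\lim_{\rho \to 0} Y = K^{\alpha} L^{1-\alpha} \quad \text{(Cobb--Douglas)}
```

Only the ρ → −∞ limit gives a production process where extra capital is literally useless without the matching labor; for any finite ρ, adding capital still raises output at least somewhat.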


Nursing is often brought up as one of the jobs AI cannot easily replace, but on many wards a copilot for nurses or personal support workers merely capable of auto-handling all the documentation, offering diagnostic/triage/regimen maintenance assistance, and maybe doing some preliminary screening of patient requests would eat considerably into the workload, even as the human performs the physical subtasks themselves.

Perhaps in the intermediate stage (assuming no 90-deg left turn) quite a few humans will become meatspace fingers for AI, blurring the master-servant relationship considerably even before the usual scenarios for the reversal kick in.


If you want "vibes about feedback loops and takeoff speeds", I'd recommend reading https://slatestarcodex.com/2018/11/26/is-science-slowing-down-2/ and https://slatestarcodex.com/2019/03/13/does-reality-drive-straight-lines-on-graphs-or-do-straight-lines-on-graphs-drive-reality/.

The story of the last 200 years has been technology continually making research vastly more efficient while vastly more resources are applied to the problem, and yet progress is linear.


Honestly, all this makes me update pretty strongly against taking the currently practiced AGI theorizing seriously. I don't have time or energy to point out all the specific issues, but my bird's-eye-view impression is that large, crucial parts of how the economy, society, living beings, or indeed the entire physical reality works are simply ignored.

One example, just one - I'm pretty sure you could automate away a majority of people's jobs nowadays. We don't do it, not because it's not possible; we don't do it because humans are simply more efficient - in the physical, not the intellectual, sense. (Plus, so many have already been automated that we don't even think of them as jobs.) This will remain true long after vastly superhuman AI surfaces. (The real breakthrough - and the real worry, AGI or not - would be machines more efficient, flexible and adaptable than humans. It's not at all known or conceivable at what point AGI can [essentially retrace and outdo a billion years of evolution] to get them.) (Speaking of adaptability, this includes brains. A vast majority of what makes us perform our tasks successfully is metis. It's far from obvious how much copying billions of pretrained intelligences actually helps, given that, to be useful, they'll still need to specialize to their specific circumstances and then constantly adapt as they change.)

I don't know, all I'm seeing here is pure rationalism. Not as in the contemporary community, rationalism in the traditional philosophical sense, the belief that all issues can just be solved with reason, i.e. inside one's (whether AGI's or [person modeling takeoff speed]'s) brain. And I just don't think this belief is at all justified.


There are some jobs that are vanishingly unlikely to be automated quickly.

Some of the examples already given are ones I'd agree with, largely relating to skilled labor that tends to rely upon human bodily dexterity and sensorimotor feedback mechanisms we have not been able to replicate in robots. Plumber keeps getting brought up. Any specific object-level example is probably a bad idea, but yes, I've seen Boston Dynamics' cartwheeling dog and Google's claw game controlled by an LLM, and no, those do not convince me we're at the point where robotics can replicate the human hand and the hand-eye coordination necessary to, say, assemble the servers an LLM runs on. It may not seem like a difficult task, and it's not, at least to a person, but it's the kind of thing we've been trying to do for a long time and have not achieved yet.

Nonetheless, these are achievable in principle. It's just a question of robotics advances ever matching the rate of computing advances, which I doubt. They're not the kind of thing that will happen inside of a year just because a software system can do everything intellectually that a human can do.

The bigger hurdles are 1) entertainment jobs where the entire point is to watch a human do them, and 2) jobs that have to be done by humans because of laws.

For the former, think of any kind of sport. We've been able to make machines that can run faster, jump higher, and hit harder than any human for a long time, but we haven't automated track and field or boxing because that isn't the point. The people paying to watch this want to watch humans. You can't automate "being human" as a job skill.

For the latter, the most obvious is elected representatives that make the laws, but most decision-making processes, even where they potentially can be automated technologically, we don't. Law practices have to be owned and run by lawyers. You're entitled to a trial by a jury of your peers, not a jury of software systems, even though the latter is probably already more impartial. Access to certain data requires a security clearance and security clearances can only be given to humans, not to software systems. Some instantiation of a chatbot that is disconnected from the Internet and only has access to devices attached to a classified network can operate on this data, but a human has to be operating the software, even if there is no technical reason this has to be the case.

There is, of course, the most important job of all, producing more humans. That one doesn't get counted by economists, but we still haven't made a whole lot of progress in creating artificial wombs.

The sex bot thing is amusing, too. Until we can create synthetic bodies that feel exactly like the real thing, I think the demand will always be there for humans to have sex with no matter how intelligent a fleshlight attached to a humanoid robot gets.

It's been discussed, but the physical bottlenecks still seem to get handwaved away. Knowledge-generation doesn't happen by fiat. It happens by science. The speed of science is limited by the speed at which you can conduct experiments and receive feedback from the world. You can think of all the experimental testing of special and general relativity that had to wait decades because the equipment didn't exist or tests of high energy physics that couldn't be conducted until the LHC was built. Think of central banking. No matter how quickly you can come up with ideas for how to deal with a recession, you can't actually test it until you get another recession. No matter how good you are at coming up with new education theories, you have to wait for current young children to reach adulthood before seeing if long-term outcomes actually improve based on applying them. New ideas for promoting human longevity can't really be tested until at least an average human lifespan has passed.

Some of these processes can be sped up, but often it isn't a matter of intelligence. It's a matter of allocation priority for scarce resources. There is only so much space in shipping containers and trucks and some of it is taken up by consumer goods and food. Ultimately, people spending money determines what goes where when and how quickly and the top priority is never going to be make the AI better short of some kind of WWII-esque government mandate that all manufacturing and shipping has to prioritize that. Legislatures and rich people aren't going to suddenly care more about science because it's being done by robots.


The link in your 'Mistakes' section about Ritalin being a risk for Parkinson's is now a dead redirect.


On one hand, I like the article. On the other, I wonder how people can actually write out "going from 20% to 100%" and not get reminded of Pareto's principle.
