A.I. and Architecture: the Future is Faster than You Think
A.I. Experts Revise All Their Estimates for the Arrival of Human-Level A.I.; Architects, Pop Stars to Face Challenges.
I try to refrain from posting updates on every paper, but damn, this recent paper by researchers at UC Berkeley and elsewhere updates us all on how the real A.I. experts see the A.I. takeover approaching. It’s a healthy 31 pages, but don’t worry: I read it so you don’t have to.1
The Big Deal:
The study, conducted by AIImpacts.org, surveyed 2,778 A.I. experts who collectively revised their estimates about the timeline for the arrival of high-level A.I. by years, and in some cases, decades. They have moved up the schedule on A.I. generally, but also on some specific outcomes, such as predicting that there’s at least a 50% chance that A.I. will be able to do the following by 2028:
Play Angry Birds at a superhuman level
Fake a new song by a specific artist (this is great - I can finally get more Johnny Cash songs)
Win the World Series of Poker (this is not great - I’ve been practicing hard and 2028 was supposed to be my year).
Aside from buoying my music collection and ruining my poker aspirations, what are the big implications? Let’s explore, but before we get too deep, there are a few definitions to know:
A Few Definitions to Know:
To understand the impact of the study, we have to understand a few terms:
Aggregate Forecast for 50th percentile arrival time:
This means that, in aggregate, the experts believe there is at least a 50% chance of the technology arriving by that date. Some experts will say sooner, and some will say later, and who you listen to probably says more about you than it does about them.
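If you want a feel for the mechanics, here’s a minimal, made-up sketch in Python. (To be clear, this is not the paper’s actual method, which fits probability distributions to each respondent’s answers; this toy version just takes the median of some hypothetical experts’ 50%-chance years.)

```python
import statistics

# Hypothetical experts and the year each one thinks HLMI has a
# 50% chance of arriving by. These numbers are invented for
# illustration; they are not from the survey.
expert_estimates = {
    "expert_a": 2032,
    "expert_b": 2040,
    "expert_c": 2047,
    "expert_d": 2060,
    "expert_e": 2105,
}

# Loosely speaking, the "aggregate 50th percentile arrival time" is
# the middle of these opinions: half the experts say sooner, half later.
aggregate = statistics.median(expert_estimates.values())
print(f"Aggregate 50th-percentile arrival year: {aggregate}")  # -> 2047
```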
High-Level Machine Intelligence (HLMI):
The authors defined it thusly:
“High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption.”
Full Automation of Human Labor (FAOL):
The authors defined it thusly:
“Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption.”
What the Survey Forecasts:
Although I’m looking forward to generating my own original Taylor Swift songs, HLMI and FAOL represent the truly jaw-dropping dimensions of the study:
HLMI:
The Aggregate 2023 Forecast for HLMI dropped from 2060 to 2047, a drop of 13 years. What’s really wild about that is that between 2016 and 2022, the forecast had barely budged, moving from 2061 to 2060. Whatever happened in 2023, it was enough to get these A.I. experts to really shake up their estimates. Moreover, the aggregate forecast for a 10% chance of HLMI moved up to 2027. So, at least a handful of experts think that we’ll get it quite soon.
FAOL:
The Aggregate 2023 Forecast for FAOL dropped from 2164 to 2116, a drop of 48 years. In a sense, the future just got 48 years closer.
Legitimately, making a forecast for 2116 is kind of a cheat. We’ll be dead, and so will all of our grandchildren. There are a million things that could wipe out the human species between now and then, or save it. It’s no big risk, reputationally speaking, to predict something happening in 2116. So maybe the change we’re seeing is just some experts revising their forecast from ‘fuck it, I dunno’ to ‘yeah, 100 years, maybe?’ But even that speaks volumes. It means that the idea of machines replacing all human labor just migrated from ‘nope, never’ to ‘ehhhh, maybe’ in the minds of a lot of experts.
What the Survey Means:
In my disaster work, I was often confronted by disaster skeptics who wanted to know when. When is the earthquake going to happen? When is the next storm? When will you shut up about disaster, Eric?
When facing something potentially catastrophic, we want to know when, because it fulfills several psychological imperatives:
It allows us to excuse inaction in the present. If the earthquake isn’t happening for 20 years, then I don’t have to deal with it now, obviously.
It allows us to contextualize any preparation or resilience action in priority with other things. If the next big storm isn’t coming for another three years, then I’m going to deal with this other crisis over here, because that’s a problem today.
I always resisted answering that question for two reasons:
I didn’t know the answer. No one can predict disaster. We can only recognize when the conditions for disaster have been set, and advocate for action.
Even if I had known the answer, it would have been an unhelpful one. Humans are natural procrastinators, and handing someone an excuse to avoid or delay action (which is exactly what naming a date does, when you speak as an expert) legitimately puts lives at risk.
That being said, there is some utility in knowing how long we have until the clock strikes midnight. And that’s part of what’s been so frustrating about the whole A.I. conversation: some people want to tell you that A.I. will never change architecture, ever, and that it’s just a fad. Others will tell you that A.I. is going to replace every human architect next week, and that you should start looking for another career.
I personally believe the latter, but when it comes true matters. If it’s not true for another 200 years, then why would anyone alive today be concerned about it? For that matter, why am I writing this post?
To be perfectly clear: the A.I. experts in this study don’t know when the changes are coming, either. Even when one takes the average, one finds enormous disparity and diversity in how different A.I. experts are reading the future. There’s no inherent reason we should believe the ‘average’ opinion as opposed to the outliers. Consensuses are wrong all the time.
What is true, and clear, is that the opinions have shifted. The overwhelming majority of experts have moved their estimated arrival dates for A.I. closer, suggesting that they, like the rest of us, were so moved by what they saw in 2023 that they had to revisit their assumptions. And that change of heart was dramatically larger than anything the survey had recorded before. Maybe we understand Bill Gates2 a little better when he wrote:
“In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary.” (GUIs being No. 1 and ChatGPT being No. 2)
If you’re anything like me, you’ll read this paper and immediately think, ‘I can’t wait to see how far they move up the deadlines next year!’ Will they revise again? Will the future keep speeding up? I dunno, maybe. Actually, probably. But if you design today for the earthquake, then it really doesn’t matter whether the earthquake happens tomorrow, next week, or years from now. You’ll be ready.
https://www.gatesnotes.com/The-Age-of-AI-Has-Begun