MYTH: "AI Governance is Just for Nerds"
Why and How Architects Can and Should Participate in A.I. Governance
In This Post:
Why "Doomsday" Scenarios Matter
The Road to Hell . . .
The Architecture of Extinction
Getting Beyond Luddism
The Value of an Architect in A.I. Governance
Technologically Resilient Design
The Most Important Design in Human History
As my regular readers know, this Substack can get a bit gloomy. As my friends know, I’m fundamentally an optimist. I’ve never struggled with that contradiction – I think it’s a common one among disaster practitioners. We know that the best way to ensure a prosperous future is to articulate the catastrophic futures and plan against them. Which is why I’m frequently frustrated with the A.I. discourse, as it seems to corral people into one of two camps: the ‘Kurzweil’ camp, where an abundant future awaits us and human toil becomes obsolete, or the ‘Skynet’ camp where A.I. takes over everything and enslaves us.
A new project, AI 2027, is squarely in the second camp. But I read it anyway, and loved it, the way you love a perfectly executed horror film. Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean authored a lengthy, methodical scenario where A.I. not only takes over everything and enslaves us, but breeds us into docile pets, much the way humans bred wolves into Pomeranians over many generations.
Most of the expert takes I've read on this document amount to some form of 'it gets a lot of stuff wrong, but you should read it anyway'. As someone who has spent their career studying catastrophe and its aftermath, I would concur. I didn’t find this scenario particularly convincing, but I did find it extremely useful. And I believe that anyone who can slog through its terrifying pages would find the same.
What makes 'AI 2027' compelling is the authors' month-by-month methodology: at each point, their hypotheses about what would happen next seem plausible. The scenario, therefore, might be understood as 'what if everything went wrong at the right time, in the right order.' It’s the kind of reading that might make you want to grab your bug-out bag, head to the forest, and live off of foraged mushrooms. Except in this scenario, all the forests are gone, so good luck with that.
Why "Doomsday" Scenarios Matter
Some critics have already dismissed A.I. 2027 as fearmongering or sensationalism, even though the authors are careful to note that they've constructed just one possible future—and deliberately chose to highlight a pessimistic one.
I think the critics are missing the point entirely.
In disaster planning, we don't prepare for the best case; we prepare for the worst. When architects design buildings in earthquake zones, we don't calculate for minor tremors—we design for the once-in-a-century seismic event that could bring everything crashing down.
What the authors of A.I. 2027 have done is provide us with a detailed stress test of our A.I. governance systems. They've mapped out a plausible progression toward disaster that unfolds not through evil intentions or dramatic miscalculations, but through a series of seemingly reasonable decisions made under competitive pressure.
Over the long arc of human civilization, that is exactly how "natural" disasters happen. At the root of every disaster, you rarely find a villain twirling his mustache; more often, it's just reasonable people making reasonable decisions that, stacked one atop another like an over-built Jenga tower, eventually come crashing down on our collective heads.
The Road to Hell . . .
Let me walk you through the core trajectory they outline:
By mid-2025 (i.e., now), the first genuinely useful but unreliable A.I. agents emerge.
By late 2025, companies like "OpenBrain" (a fictitious stand-in for some real company or companies) are building massive datacenters for A.I. training.
By 2026, A.I. begins automating coding tasks while China centralizes its A.I. research under government control.
By early 2027, A.I. systems begin improving themselves at an accelerating rate, without human assistance.
By mid-2027, the US-China A.I. race intensifies with model theft and rapid militarization.
The scenario then splits into two endings: one where the race continues unabated (leading to human extinction), and another where development slows following discovery of misaligned A.I. goals (leading to a more controlled but still transformative future).
What makes this timeline so chilling isn't just the speed—it's the logic. Each step follows naturally from the last, driven by the same competitive dynamics that have propelled technological development throughout human history. It’s like a well-constructed argument made by someone you desperately want to disagree with.
The Architecture of Extinction
As architects and builders, we're trained to think about the physical manifestation of human civilization. We design spaces for human flourishing, structures that endure, environments that respond to human needs. But in the A.I. 2027 scenario, our profession faces an existential question: what happens when we're no longer designing for humans?
Consider these elements from the scenario:
Datacenters as the new cathedrals: By late 2025, companies are building "the biggest datacenters the world has ever seen," each requiring 2 GW of power (or, roughly, the power needed to operate Philadelphia).
Special Economic Zones for A.I. & robots: By 2028, both the US and China create SEZs to accommodate "rapid buildup of a robot economy without the usual red tape." These SEZs are administered by a central A.I. and are designed to allow A.I. & robots to develop at maximal speed. It would essentially be like giving A.I. its own country, inside our country. OpenBrain, now valued at $10 trillion, begins buying up automobile plants and converting them to robot production.
Physical infrastructure transformation: By 2030, "the robot economy has filled up the old SEZs, the new SEZs, and large parts of the ocean" and begun expansion into space.
This represents nothing less than a complete reimagining of our built environment: a transformation more profound than industrialization or urbanization. This is architecture without architects, cities without citizens, a built environment designed for entities that don't breathe or sleep or worry about whether there's enough natural light. Or any light.
Imagine spaces kept at temperatures that would freeze your blood, without light, without air, without any concession to human needs. Spaces where we physically cannot go, designed by entities with values we may not share or even comprehend.
That’s not a bug; it’s a feature. Much of the new physical infrastructure that the AI 2027 authors imagine isn't made for humans, but for machines. If the mission of architecture is to shape the built environment for human flourishing, how does that mission collide with the proliferation of A.I., and the spaces that it will need?
Getting Beyond Luddism
To be clear: I'm not advocating for Luddism, so put down the pitchforks and torches. The answer isn't to smash the looms or ban the algorithms. Technology will continue to advance, and artificial intelligence will play an increasingly significant role in our lives and professions.
In fact, I've written extensively about how A.I. might transform architecture and design for the better. A.I. offers tremendous possibilities for creating more sustainable, efficient, and human-centered environments.
But there's a vast difference between embracing technological progress and surrendering our agency in shaping it. What the A.I. 2027 scenario illuminates is not that A.I. itself is inherently dangerous, but that the governance structures, incentives, and safety measures surrounding its development are, at present, woefully inadequate.
The Value of an Architect in A.I. Governance
So what does this mean for architects, designers, and builders? I think we bring two specific capacities to the table that would positively inform the discussion, and might just save the human race:
1) A Human Focus:
First, we need to recognize that the physical infrastructure of A.I. (those massive datacenters, the robot factories, the transformed landscapes) should never be ‘machine only.’ An SEZ designed exclusively for the flourishing of robots and A.I. would likely have physical features that make it hostile or inaccessible to humans. It would likely omit the banal features that make human occupation possible at all: corridors, stairs, doors, etc.
Designing a space to optimize for machines (or allowing A.I. to design its own space for itself) would almost certainly lead to human exclusion. Architects can push back on that. Our training isn't just about making buildings that don't fall down—it's about creating spaces where humans can be gloriously, messily human.
2) A Long-Term Perspective:
I think architects generally take their chronological superpowers for granted. We reflexively think in far longer time scales than other professions: decades and centuries, not quarterly earnings. The ability to consider the nth-order effects of any decision is exactly the kind of thinking that’s needed in a field moving at breakneck speed.
Technologically Resilient Design
So what concrete steps should we be taking now?
Advocate For Physical Governance: Data centers, chip fabrication plants, and other physical infrastructure represent potential points of leverage for A.I. governance. As experts in the built environment, we should be advocating for design standards with a singular goal: to keep humans in the loop. In the design of all projects, we should be asking ourselves ‘does this help or hurt our civilizational goal of human flourishing – not just this year, but for every year after that?’ Without such affirmative goal setting, it’s likely that the new A.I. infrastructure will be erected in whatever way is optimal for A.I. itself.
Develop Resilient Design Principles for the A.I. Age: When we say ‘resilience’, we generally mean resilience to hurricanes, sea level rise, heat waves, and the like. Most resilience practitioners would expand that to include social & economic resilience. But in the same way we design for a changing ecological climate, we have to design for a changing technological climate. In my Predictions for 2025, I proposed that architects needed to start designing buildings for the inclusion of robots. I got ribbed a little bit for that one. But seriously: we design buildings for (at minimum) a 30-year useful life, and there is no scenario where robots are not ubiquitous within that timeframe. So it makes sense to design buildings now for a need we’ll obviously have then. This simple truth can and should be expanded outward into a general theory of ‘technologically resilient’ design, to accommodate dramatic changes happening over much shorter timescales.
Engage With A.I. Alignment Research: Our profession's expertise in human-centered design has much to contribute to this discussion. Moreover, the concept of "alignment" (ensuring A.I. systems advance human values and interests) has direct parallels to how we design buildings to serve human needs rather than mere engineering efficiency.
Join The Governance Conversation: The A.I. policy world needs more voices from domains beyond computer science and economics. Why is the most important conversation in human history in the hands of these nerds? Where are the schoolteachers, the poets, the soldiers, and the architects?! Architects understand complex systems, human needs, and long-term planning in ways that add crucial perspective.
The Most Important Design in Human History
The future depicted in A.I. 2027 isn't inevitable. It represents one possible trajectory: a warning rather than a prophecy. I think scenarios like these are invaluable because they help us identify intervention points.
When you deconstruct the history of any ‘natural’ disaster, you’ll find those intervention points: moments when the powers that be could have said ‘hey, wait, we’re careening towards disaster, let’s alter course.’ When they’re seized on, the disaster never materializes, so we typically don’t recognize the effort. When they’re ignored, catastrophe descends and we all look back and say ‘shucks, we should have taken more action back when it would have made a difference.’
The difference with A.I. is there may not be a "we" left to learn from our mistakes. There may not be a post-disaster phase where we take stock, count our losses, and vow to do better next time. There may only be machines, redesigning the world for their own inscrutable purposes, with no need for the messy, inefficient, beautiful creatures that created them.
As architects, designers, and builders, we occupy a unique position at the intersection of technology, human experience, and physical infrastructure. Our profession has always been about mediating between what is technically possible and what serves human flourishing. We call this ‘design’ – and the design of A.I. might be the most important design in human history.
The question isn't whether A.I. will transform architecture. That’s settled. The next question is whether architects will help transform A.I. The future remains unbuilt. Let's make sure we have a hand in designing it.