How A.I. Will Change the Way We Feel about A.I. in 2025
Design in ‘25: The Near Future of Architecture & A.I., Pt. 6
In This Post:
Unpacking 2024 . . .
In 2025 . . .
Networks of Agents, Agents for Rent, Mercenary Agents, the Agent Economy
Agent Manager Becomes a Thing
A.I.’s Get Emotional
‘Human’ gets Boutique
A.I. Fiction Gets More Nuanced
Unpacking 2024 . . .
In 2024, I predicted a few different ways that A.I. would change the way we feel about A.I. – A.I. and humans are in a co-evolution, and every new development in A.I. changes our own feelings about it. I proposed a few specifics: that A.I. would suddenly be both more, and less, visible, that A.I. Agents would take over in multiple forms, and that multimodal A.I. would just become ‘AI.’
‘More and less visible’ is a tough prediction to pin down, but one of my favorite examples came out of my former employer:
UC Berkeley just announced plans to record all future lectures by default, in each of its 182 general assignment classrooms enabled with Course Capture. Pretty wild that the place where the free speech movement originated kinda crab-walked into a surveillance state.
I'm sure he means well, but when Erfan Mojaddam, RTL deputy chief academic technology officer and director of learning technologies and spaces, opines that the move:
". . . removes an additional burden of remembering to manually start and stop a recording for each session and uploading it . . ."
It kinda sounds like every other time a group was told 'nothing to see here' while some other group took away a little bit more of their freedom and/or privacy. He also seems to think that faculty will consider this a fair trade: having every word and movement recorded, to avoid the terrible burden of pressing ‘record.’
As an educator, this scares the shit out of me. It seems like only a matter of time until there is so much recorded content out there that training an A.I. to teach this class, or that class, will be a no-brainer for institutions trying to cut costs.
The A.I. Agent takeover definitely occurred, and is just getting started. Honestly, I didn't foresee just how fast the Agent takeover would happen . . . and assumed that 'agents' would come in the form of 3rd party applications that governed your various applications, rather than being a part of them. But major tech players like Microsoft and Google have now fully baked agents into their software. Microsoft 365 Copilot, for instance, lets you build your own agents, and also offers a growing library of pre-built agents for enterprise use.
Finally, multimodality has definitively become the norm. So much so that the idea of an A.I. that's just a chatbot seems quaint. What would you do with it, lol?
In 2025 . . .
Networks of Agents, Agents for Rent, Mercenary Agents, the Agent Economy
In 2025, look for the A.I. Agent takeover to continue and evolve into a full-blown ‘Agent Economy.’ Inevitably, we’ll see the rise of Mercenary Agents, or Agents for Rent.
When you need the office copier fixed, you call someone with expertise in fixing copiers. You effectively ‘rent’ that person for a couple of hours, rather than buy a new copier or hire a full-time repairman.
If someone develops an agent that is particularly good at handling a specific type of problem, there’s an opportunity to rent out that agent to anyone who’s got the same problem, but isn’t interested in investing their own capital to develop their own agent.
We will all have general-purpose agents to do general-purpose things, and many of those agents will come pre-baked into applications and devices that we already use (again, see Microsoft’s burgeoning library of agents). But for special use cases, it will pay to have that A.I. Agent that knows exactly what it’s doing. And building that agent might be prohibitively expensive. Renting that agent for a limited engagement seems like the optimal solution.
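To make the economics concrete, here is a minimal sketch of what "renting" a specialist agent might look like, assuming a simple per-call pricing model. Every name here (`AgentMarketplace`, `RentedAgent`, `rate_per_call`) is hypothetical – no real agent platform exposes this exact API.

```python
# Hypothetical "agent for rent" marketplace with per-call metering.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class RentedAgent:
    name: str
    handler: Callable[[str], str]   # the specialist skill being rented (stubbed here)
    rate_per_call: float            # cost charged to the renter per request
    calls: int = 0

    def run(self, task: str) -> str:
        self.calls += 1
        return self.handler(task)

    @property
    def bill(self) -> float:
        return self.calls * self.rate_per_call

@dataclass
class AgentMarketplace:
    listings: Dict[str, RentedAgent] = field(default_factory=dict)

    def list_agent(self, agent: RentedAgent) -> None:
        self.listings[agent.name] = agent

    def rent(self, name: str) -> RentedAgent:
        return self.listings[name]

# Usage: rent a (stubbed) code-review specialist instead of building your own.
market = AgentMarketplace()
market.list_agent(RentedAgent("code-review", lambda t: f"reviewed: {t}", 0.50))

reviewer = market.rent("code-review")
print(reviewer.run("module_a.py"))    # reviewed: module_a.py
print(f"owed: ${reviewer.bill:.2f}")  # owed: $0.50
```

The point of the sketch is the billing line: the renter pays per engagement, never for the (expensive) development of the specialist itself.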
Agent Manager Becomes a Thing
I also think a new role will start to emerge: Agent Manager. In the same way that a human manager manages human employees, we’ll need someone in the office (yes, architecture firms too) to coordinate the efforts of different agents.
My personal philosophy on managing people has always been simple: articulate the vision, hire amazing people, get out of their way, only intervene when they start to get in each other’s way. I reckon it’ll be the same with an agent manager. You’ll build, or contract for, multiple agents with specific roles: one to do the accounting, one to do the new business development, one to do code review, etc. When the efforts of these agents start to collide in unproductive ways, someone will have to step in and do something.
This won’t be a ‘code’ or ‘techie’ or ‘nerd’ position. An agent manager will have to have technical expertise, certainly. But they’ll also have to have all the traditional skills of the manager: the ability to zoom out and see the wider vision, the ability to communicate that vision to employees (agents), the ability to weigh conflicting, non-measurable criteria for success.
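The "intervene only on collision" management style described above can be sketched in a few lines: each agent claims the resources it needs, and the manager only has to step in when two agents claim the same one. This is a toy model under my own assumptions, not how any real orchestration framework works.

```python
# Toy model of an agent manager who intervenes only when agents collide.
from typing import Dict, List, Tuple

def run_agents(claims: Dict[str, List[str]]) -> Tuple[List[str], List[str]]:
    """Return (agents approved to proceed, conflicts needing a human manager)."""
    owners: Dict[str, str] = {}   # resource -> agent that claimed it first
    approved: List[str] = []
    conflicts: List[str] = []
    for agent, resources in claims.items():
        clash = [r for r in resources if r in owners]
        if clash:
            # Agents in each other's way: escalate instead of auto-resolving.
            conflicts.append(f"{agent} vs {owners[clash[0]]} over {clash[0]}")
        else:
            for r in resources:
                owners[r] = agent
            approved.append(agent)
    return approved, conflicts

# Accounting and biz-dev stay out of each other's way; code review collides
# with accounting over the ledger, so the manager has to arbitrate.
approved, conflicts = run_agents({
    "accounting": ["ledger"],
    "new-business": ["crm"],
    "code-review": ["repo", "ledger"],
})
```

Note what the code does *not* do: it never decides who wins a conflict. That judgment call – weighing non-measurable criteria against the wider vision – is exactly the part that stays with the human agent manager.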
A.I.’s Get Emotional
In 2025, we’ll collectively confront the idea of A.I.’s being emotional. Not acting emotional. But having actual emotional responses to events, outcomes and requests. The latest research on frontier models points to LLMs’ growing ability to ‘comprehend, mimic, and convey emotion.’ Early reactions from the general public will be of disgust – people are generally repulsed by the idea that a machine could have genuine emotions. But that misses the point. Whether the emotions are real or fake, the fact that they can be simulated convincingly dramatically expands their usefulness in critical situations. They can be used in medical or therapeutic practices, to provide company to the lonely, the isolated, or the aging. They can support children when parents are away. Having A.I.’s that ‘get emotional’ will eventually be understood as a good thing.
‘Human’ gets Boutique
As the glut of A.I.-driven content explodes, there’ll be a new call for ‘human-derived content’ akin to the ‘reality-show’ craze of 1990-Present. We embraced reality shows because sitcoms were too scripted. And then we found out reality shows were scripted, so now we follow ‘influencers’ as they go through their days. Most of those are scripted, too, but that’s also beside the point. There is no need for traditional human influencers anymore – not when an AI-generated influencer can pull in $11,000/month, and attract 3M followers. If you want to make money being an ‘influencer’, or you want to use someone’s influence to sell products, why not spin up 100,000 influencers and just see which ones take off in popularity?
Concurrently and paradoxically, a human influencer’s future popularity may depend on their ability to project authenticity – to prove their humanness. The same will apply to products and services. “Made by Humans” will be a tag people look for. It won’t necessarily convey that something was made better, or more cheaply. It will appeal to a growing zeitgeist – a widespread sentiment that just longs for something human-made.
A.I. Fiction Gets More Nuanced
AI, in fiction, has typically been fantastical, and largely negative. We’re imprinted with ideas about A.I.’s that enslave or kill humanity (e.g. HAL, Skynet, etc.). Positive, heroic examples, like Iron Man’s Jarvis, are few and far between. But even Jarvis was powered by a mystical space rock, and in my mind leans more towards fantasy than science.
Contemporary fiction seldom portrays the nuances and complexities of rising A.I. well, but I think that’s changing. Her (2013) did a pretty good job, especially considering it was made over a decade ago. A 2024 film ‘Afraid’, starring John Cho and Katherine Waterston, did better. [SPOILER ALERT] Despite a few obvious horror movie tropes, I thought the film did an excellent job of describing just how easily A.I. enters our lives. In the film, a normal family is gifted an advanced A.I.-powered assistant (think Alexa, but way more powerful). It gradually takes over their lives, not through brute force, but because they want it to. The AI, named ‘AIA’ (pronounced Aya, lol), helps the children with their homework, helps the father get a promotion, and helps the mother finish her doctoral thesis. AIA even helps the mother reconnect with her deceased father by using all his online lectures and correspondence to create a live, simulated avatar of him. Whatever reservations the family initially had are rapidly wiped away by the seduction of convenience.
The integration of A.I. into human existence is going to be complicated, and fraught with nuanced moral dilemmas. I think that in 2025, you’ll see fiction start to explore those challenges in a more careful way, hearkening back to sci-fi pioneers like Asimov and Philip K. Dick.
If you enjoyed that, be sure to check out Part 7 of this series: 'How A.I. Will Change the Conversation in 2025’ and subscribe below for all future updates.


