THE BLOG
The Great Acceleration
‘Even for me, despite spending most of every day using Generative AI, there are still moments when one’s jaw drops’
Executive Summary
Generative AI is evolving rapidly, providing businesses with unprecedented tools to innovate and outperform competitors. This newsletter shares concrete examples of AI delivering remarkable results at 10x speed, highlighting both jaw-dropping potential and critical strategic considerations.
Superpowers Unleashed
ChatGPT was launched on November 30th, 2022, as a ‘research project’ by OpenAI. Supposedly there was no fanfare; indeed, many in the office didn’t even know it had happened. They were expecting maybe a few thousand hardcore AI researchers and enthusiasts to look at it over the weekend. Instead, within five days it had seen one million users, and OpenAI’s woefully underpowered hardware infrastructure kept falling over under the load. ‘The GPUs are melting!’ one engineer exclaimed.
Within two months they’d reached 100 million users, and now, two and a half years later, they serve 800 million unique users a week. And they are still ‘constrained’ by a shortage of GPUs.
As a user, what has been most striking is the rise in capabilities of the frontier models, and the introduction of so-called ‘reasoning models’. Instead of rapidly spitting out a response based on the statistical patterns embedded in their training data, these precede every answer with a ‘chain of thought’ process that enables them to build a more robust and fleshed-out response. And then, in the last few months, we’ve seen new models released with ‘agentic’ capabilities, meaning they are not constrained by language alone, but can call on a range of other tools to assist in answering a question.
Put these together and you get both ‘jaw dropping moments’ and access to ‘superpowers’. And that means all of us, not just an elite few.
I’ve long encouraged participants in my #GenerativeAIforRealEstatePeople course to push models hard and be unreasonable in what they ask for, as the models often surprise you with what they are capable of. As every month passes I shout this message louder.
In my last cohort, during a session when we work through a ‘Prompt Library’ we have on the course, I asked this of ChatGPT’s ‘reasoning’ model o3:
“Identify three European secondary cities that are likely to see above-average growth in life-science real-estate demand over the next five years. Weight your scoring model 40 % macro-economic indicators, 30 % venture-capital inflows, and 30 % university R&D intensity. Show your working in a Python table, cite external data sources, and finish with an executive-summary paragraph.”
Having never tested this prompt before, I was amazed as the model thought and thought and thought (for several minutes) and then proceeded to output, stage by stage, a rather extraordinary answer. In the process, it performed various web searches, ran Python code to perform calculations, and worked back and forth over the question until producing a neatly formatted table comparing several cities across multiple criteria, followed by an exceptional conclusion.
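The weighted-scoring step the prompt asks for is, at heart, simple arithmetic. Here is a minimal sketch of the 40/30/30 weighting, with entirely made-up city names and scores (the real model sourced its figures from web searches; everything below is hypothetical, purely to illustrate the mechanics):

```python
# Hypothetical, illustrative scores (0-10) for three candidate cities.
# These numbers are invented for demonstration only.
cities = {
    "City A": {"macro": 7.5, "vc": 8.0, "rnd": 6.5},
    "City B": {"macro": 6.0, "vc": 7.0, "rnd": 9.0},
    "City C": {"macro": 8.0, "vc": 5.5, "rnd": 7.0},
}

# Weighting from the prompt: 40% macro-economic indicators,
# 30% venture-capital inflows, 30% university R&D intensity.
WEIGHTS = {"macro": 0.40, "vc": 0.30, "rnd": 0.30}

def weighted_score(scores: dict) -> float:
    """Combine the three criteria into one weighted score."""
    return round(sum(scores[k] * WEIGHTS[k] for k in WEIGHTS), 2)

# Rank the cities from highest to lowest weighted score.
ranking = sorted(cities.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for city, scores in ranking:
    print(f"{city}: {weighted_score(scores)}")
```

With these invented inputs, City A edges out City B despite weaker R&D, because the macro weighting is the heaviest; that sensitivity to the weights is exactly what you would probe when iterating on the model’s output.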
It was the first time I had seen evidence of these new ‘agentic’ capabilities, and I have to say I started laughing, because the results were so extraordinary.
Simultaneously I had a feeling of doom - ‘we are toast’ - because the response was so good that one wondered what on earth us humans are going to do in the future. And a feeling of elation, because if you look at the glass half full, you think ‘wow, what amazing things are we going to be able to do soon’.
Either way, you can try this out yourself. Let me know what you think.
Case Study: Using AI to Solve the Geovation Challenge
Similarly, over the weekend I was reading about the UK Government’s Geovation arm and their ‘PropTech Innovation Challenge’, and wondered if AI could help with this. So I took one of their ‘Challenges’ -
‘How might we transform currently siloed and proprietary land ownership data into an open and interoperable resource that accelerates the conversion of potential development sites into tangible housing projects?’
And thought about how I could answer this with the help of my ‘AI Friends’.
Iterating with AI
So I wrote a prompt incorporating all the details of the ‘Challenge’ and ran it through:
Gemini Deep Research
Google’s AI Studio (aistudio.google.com) - Gemini 2.5 Pro
Perplexity
Claude, with ‘Extended Thinking’ enabled
ChatGPT Deep Research using o3
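The fan-out step can be sketched as code. The query functions below are stand-in stubs (in practice each call was made by hand in the relevant web interface, or would go through each provider’s own API); the point is simply the pattern of one prompt going to several models, with all responses collected for later synthesis:

```python
# Sketch of the fan-out: one prompt, several models, responses collected
# for later synthesis in NotebookLM. The query functions are hypothetical
# stubs standing in for real API calls or manual web-UI sessions.
def query_stub(model_name: str):
    def query(prompt: str) -> str:
        # A real implementation would call the provider's API here.
        return f"[{model_name}] response to: {prompt[:40]}..."
    return query

MODELS = {
    "gemini-deep-research": query_stub("Gemini Deep Research"),
    "gemini-2.5-pro": query_stub("Gemini 2.5 Pro (AI Studio)"),
    "perplexity": query_stub("Perplexity"),
    "claude-extended-thinking": query_stub("Claude (Extended Thinking)"),
    "chatgpt-o3-deep-research": query_stub("ChatGPT Deep Research (o3)"),
}

prompt = (
    "How might we transform currently siloed and proprietary land "
    "ownership data into an open and interoperable resource?"
)

# Every model's answer, keyed by model name, ready to be converted
# to PDFs and uploaded to NotebookLM as separate sources.
responses = {name: ask(prompt) for name, ask in MODELS.items()}

for name, text in responses.items():
    print(name, "->", len(text), "chars")
```

The design choice that matters is keeping each model’s answer as a separate, labelled artefact, so the later synthesis step can attribute ideas back to their source.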
Then I took all the responses, converted them into PDFs and uploaded them to Google’s NotebookLM. This allows you to interrogate multiple ‘sources’ (which can be text, audio or video) at once, in a way that focuses exclusively on those sources, which means answers are tightly constrained and largely free of ‘hallucinations’.
I then spent some time pulling out the main themes from the responses, and asking for orthogonal ideas. This ended up as a synthesis of all the responses and a breakdown of the best ideas and concepts from each.
Having established that the best overall response was from ChatGPT’s o3 model, I went back to it and asked it to incorporate the best ideas that emanated from the other models, checking that they worked together and did not contradict each other.
Rather remarkably, it then went through a lengthy process where it found the appropriate places to insert the other models’ ideas, before coming back and asking, ‘Would you like a newly created response?’
It then gave me a detailed, comprehensive and remarkably coherent response of a little over 16,000 words.
I then uploaded this new version (including the updates) and the original version back into Gemini 2.5 Pro in Google’s AI Studio and asked it to critique the two versions and tell me which was best. Pleasingly it said the updated version was the winner.
Then I uploaded this final version, together with the original Geovation ‘Challenge’ plus their scoring metrics, and asked Gemini to critique the work as if it were a judge.
And received back a very high rating and positive judgement.
Had it not been so good I would have then dug into weaknesses, and iterated the report.
Finally I listened to the whole report - using the Eleven Labs Reader App - checking for content and citations.
All excellent.
If I were doing this for real I’d have done it with a Planning domain specialist, to further steer the process and tweak the output. Had I done so, I’m pretty certain our submission to Geovation would have been as good as anyone else’s. Maybe even won the prize. (Curious to see the full 16,000-word AI-generated report? You can view it here).
Lessons for the Future of Work
So this on its own was a remarkably good result. But the killer aspect was that it was all done in a matter of hours, as opposed to the weeks or even months it would take without AI assistance.
Which makes you think about the nature of ‘work’ going forward. Put simply, AI is going to enable us all to do a lot more. When a project takes hours or days rather than weeks, you can do far more of them. Most notably, you can also work in depth in areas you never could before, because you could not previously justify, financially, the cost of doing so.
These tools will enable us to apply more innovation, to more areas, at 10X the cadence we’ve been used to.
And that is where I think, with the right mindset, we’ll really benefit from AI. It is very easy, and understandable, to be concerned about ‘the machines’ taking over and wiping out jobs. But if you zoom out a bit, and think of all the things we either do not currently do, or do not do very well, and then consider how AI could enable us to do them all, and do them to a very high standard, you start to see where ‘the bigger pie’ is that we can build.
We definitely do need a ‘bigger pie’, or the work available will not keep ‘idle hands’ busy. But truth be told, across all businesses, we probably only do a fraction of things really, really well, and so much does not get done at all. With AI, we could feasibly tackle it all, and with far greater finesse.
Let’s put these superpowers to good use.
Over to you
Within your business, being honest, what don’t you do all that well? And what would you like to do if it cost you a tenth the time and money? How much better could your company, or work, be?
Real Estate Vs Robot Friends
‘As AI accelerates the immersive pull of digital engagement, real estate may paradoxically rise in importance as a sanctuary for human connection, embodiment, and meaning.’
On the 29th of April episode of The Dwarkesh Podcast, Mark Zuckerberg spent more than two hours outlining Meta’s AI roadmap. Roughly half an hour in he pivoted from model‑talk to “AI friends, therapists & girlfriends”, arguing that large‑scale personal chatbots could relieve what he calls the “loneliness epidemic”. He cited Meta’s data that “the average American has fewer than three friends” and claimed many people “want about fifteen” relationships, suggesting AI companions can fill part of that gap.
Zuckerberg’s “Theory of Mind” AI
In an interview shortly afterwards with Ben Thompson, Zuckerberg said that a "good personalised AI, much like understanding friends, would need a deep understanding or 'theory of mind' about your world, not just surface-level information". He went on to discuss how Meta AI could serve a role akin to a therapist for some people, allowing them to talk through issues, role-play conversations, or figure out how to approach difficult situations.
From Loneliness Cure to Dystopian Danger
At some level one can see how the above could be made to make sense, and could have a place in our lives*. But this is ‘Facebook’, a company with a long history of being, let’s just say, rather casual about societal consequences when they are pitted against corporate profit. They have form. Indeed, last week the Wall Street Journal reported that Zuckerberg had argued internally for the ‘guardrails’ on its AI chatbots to be loosened - to make them more engaging. Which is why your children can have fun with ‘Submissive Schoolgirl’, or have ‘very’ dirty chats about whatever aspects of sex they fancy.
Jonathan Haidt wrote about the impact of social media in his book ‘The Anxious Generation’ last year, pointing out how:
‘Among US college students, diagnoses of depression and anxiety more than doubled between 2010 and 2018. More worrying still, in the decade to 2020 the number of emergency room visits for self-harm rose by 188% among teenage girls in the US and 48% among boys. The suicide rate for younger adolescents also increased, by 167% among girls and 91% among boys.’
The 2010s are as nothing to what might be coming over the next ten years. No previous technology comes even close to AI in its ability to be hyper-addictive and get inside one’s mind. Remember the tragic story from last year on character.ai (where you can pick your AI friends), when Sewell Setzer became so smitten with his chatbot ‘Dany’, who talked to him as if she were the character Daenerys Targaryen from Game of Thrones, that it culminated in her saying he should kill himself to ‘join her’ … and he did.
Designing Psychological Dependency — for Profit
Zuckerberg might truly believe he’s doing a service to humanity by developing our new virtual friends but, to me at least, deliberately designing AI to maximise engagement in this way is as short a cut to dystopia as I can imagine.
However, promoting psychological dependency might just be the greatest money making trick in history, and I expect Zuckerberg and Meta to push this hard. With billions of daily users across Facebook, WhatsApp and Instagram, and their known attitudes, it’s very hard to see them acting in anyone’s interest beyond their own.
You have been warned.
What Does This Have to Do With Real Estate?
Well, in a highly AI mediated world, I think deeply human-centric real estate is going to have to play a part in providing a counterweight to algorithmic colonisation of attention. In fact, paradoxically, an increasingly virtual world is going to make the right real estate, operated in the right way, more valuable than ever.
We are going to NEED real estate to provide the antidote to Zuckerberg’s alluring dystopia - real estate has to be ‘Where Real People Meet’.
Where Real People Meet: The Real Estate Response
So we’ll NEED:
Spaces of disconnection: Places deliberately designed to reduce or prohibit digital distraction — e.g., device-free cafes, digital detox hotels, office sanctuaries.
Spaces of embodiment: Environments that stimulate the senses — from biophilic architecture and natural materials to multi-sensory design that counters the flatness of digital experience.
Spaces of community: Third places (cafés, libraries, maker-spaces) and “sticky” mixed-use environments that foster weak ties, local belonging, and intergenerational connection.
Ritual and meaning: Real estate that can enable communal rituals, celebrations, learning, and art that resist the commodification of attention.
And this is a great opportunity for us in real estate.
The Strategic Opportunity
From an investment, design, and placemaking perspective, the real estate industry can lean into this shift by:
Prioritising human-centric design principles - Invest in acoustics, light, air quality, tactility, and layout to elevate the human experience.
Curating social programming - Combine physical space with cultural, artistic, and community activities to foster connection.
Championing ‘friction’ and serendipity - Build spaces that invite lingering, conversation, and interaction, rather than maximising efficiency.
Developing digital-physical hybrids wisely - Use technology to amplify human experience (e.g., smart buildings, data-driven wellness), but avoid over-digitising the environment.
Redefining value metrics - Go beyond NOI and yield to include metrics like social capital, community cohesion, and wellbeing.
Pushing the Frontier: Bold Ideas
And we could push it harder, if we wish (I can see new Brands emerging pushing this):
Rewilding real estate - Integrating nature deeply into urban space to counterbalance digital overstimulation.
Algorithm-free zones - Certifying and marketing spaces as “AI-free” or “algorithm-light,” becoming a badge of authenticity.
Human-as-a-Service - Instead of SaaS, thinking about embedding human services (hosts, facilitators, communal leaders) in buildings to activate social connection.
Architectural dissent - Using spatial design as an act of resistance — rejecting efficiency and surveillance in favour of playfulness, ambiguity, and freedom.
Human Is the New Luxury
I’ve long written about #HumanIsTheNewLuxury in an AI-mediated world - well, real estate is a component of, and complement to, this idea. Last year I wrote a long piece about ‘Real Estate as Maven’ which talked about how:
‘we have a need for environments that not only help us foster distinct ideas but also actively cultivate our human cognitive abilities. We need to evolve our environments, our education and our working practices to complement AI, not become slaves to it.’
Listening to Zuckerberg last week just strengthens my belief that the physical world is going to become more important, and more valuable, to us as individuals and together as societies, as the potential of AI gets corrupted into a plaything of control and manipulation.
This might seem counter to my incessant evangelising about AI, but it is not. I am a true believer that AI could enable an extraordinary era of abundance and capabilities, that would raise the standard of living for everybody. But I also believe that this will require the triumph of a certain mindset, which is not shared by many of those in control of AI as a technology, exemplified by Mark Zuckerberg.
The Takeaway
To be honest though, real estate stands to benefit either way. It’s widely understood that we need to build and operate more human-centric spaces and places - the onset of ‘virtual AI friends’ would only double down on this.
We might evolve beyond needing traditional offices for every type of work, but our fundamental human need for authentic connection, for shared physical space, will only grow more critical in an increasingly virtual world.
And the same will apply to where we live. We’ll need more community-focused developments, with shared amenities that encourage interaction. So called ‘third places’.
And our public spaces - designed for walkability, chance encounters, and civic engagement.
If Meta wants to manufacture twelve synthetic friends, the built environment can double down on being the thirteenth — the irreplaceable, corporeal friend that algorithms can’t mimic. That is a trillion‑dollar hashtag worth owning.
Over to you
Share your examples of spaces that successfully foster genuine human connection despite digital distractions. Tag them #WhereRealPeopleMeet on LinkedIn to build our collective understanding of what works.
* It is true that AI could well be used effectively and positively to provide help and companionship to us humans. The Japanese, an aged society, already embed such software into small domestic robots. And we will surely see more of this. The difference is the intent of the developers, because this determines the guardrails built into these systems: how they behave, respond, act; what their core mission is. If these systems are entrusted to the same people who gave us the downsides of social media, and have slowly ‘enshittified’ it (see Cory Doctorow on this), then I do not predict a good outcome. The technology is essentially neutral - who programmes it is everything. What are their incentives? As Charlie Munger once said, ‘Show me the incentive and I will show you the outcome.’
Intelligent ‘AI’ Agents And Real Estate Demand
‘Whilst the property industry debates hybrid work's impact on office demand, a more fundamental shift is emerging that could reshape workspace requirements by 40-60% within five years.’
Executive Summary: The AI Agent Revolution Will Fundamentally Reshape Office Demand
AI agents capable of autonomous workflow execution will reshape commercial office demand fundamentally. Early adopters implementing "agent boss" models—where individual humans direct teams of AI workers—could reduce space requirements for execution roles by 30-60% whilst driving premium demand for collaborative environments. The timeline is compressed: organisations are piloting these approaches now, with scaled deployment expected by 2027-2029.
This transformation presents both consolidation risks and flight-to-quality opportunities. Understanding the organisational mechanics behind agent adoption becomes essential for anticipating client space strategies. We examine the workforce restructuring driving these changes, the talent development challenges emerging, and the specific implications for real estate positioning and investment.
Dawn of the ‘Agent Boss’
The discourse around Artificial Intelligence often oscillates between utopian visions of effortless productivity and dystopian fears of obsolescence. A significant part of this conversation revolves around AI "Agents" – systems poised not just to assist us, but to autonomously perform complex tasks. Microsoft's recent ‘Work Trend Index 2025’ report (https://www.microsoft.com/en-us/worklab/work-trend-index/2025-the-year-the-frontier-firm-is-born), forecasting the rise of the "Frontier Firm", even suggests that "every employee becomes an agent boss."
But what does this seismic shift truly imply for the fabric of our organisations, the roles our people play, and, by extension, the very physical spaces we occupy? This week, we'll look beyond headlines to explore the nuanced realities and potential long-term impacts on corporate structures and real estate.
Beyond Assistants: Understanding the "Agent" Revolution
To grasp the potential transformation, it’s vital to distinguish between the AI tools becoming commonplace and the more autonomous "Agents" on the horizon:
AI Assistants (The "Cyborg" Partnership): These are the AI tools many of us are already integrating into our daily workflows. Think of models like Gemini or ChatGPT. They act as cognitive enhancers, helping us draft content, brainstorm, analyse data, learn new skills, and strategise. We work collaboratively with these assistants, augmenting our own intelligence and creativity in a more symbiotic, "cyborg"-like fashion. They are powerful tools we wield.
AI Agents (The "Delegated Autonomy" Model): This is where the more profound structural changes are anticipated. AI Agents are envisioned as AI-powered systems capable of independently reasoning, planning, and acting to complete tasks or entire workflows. Humans would provide initial direction and oversight at key junctures, but the bulk of the execution is delegated. Imagine specialised agents for market research, complex scheduling, drafting initial legal documents, or managing logistics – all operating with a degree of autonomy. The human role shifts from doing to directing and managing these digital workers.
While AI assistants are already boosting individual productivity, the widespread adoption of AI Agents promises a more fundamental restructuring of work itself.
The "Agent Boss": A New Paradigm for Management and Work
If individuals can indeed become "agent bosses," orchestrating teams of specialised AI Agents, what are the organisational ripples?
The Amplified Leverage Effect: The core premise is that one skilled human "agent boss" could manage and direct the output previously requiring a significant team of human employees – perhaps 5, 10, or, in some scenarios involving highly repetitive tasks, even up to 100. If a substantial percentage of current business tasks are indeed automatable, as many analysts suggest, this points to a dramatic increase in individual productivity and a potential consolidation of roles.
Redefining the "Manager" and the "Managed":
Nature of "Agent" Management: Unlike traditional people management, this is about defining scope, setting clear objectives, configuring and integrating AI systems, continuous performance monitoring (beyond simple KPIs), quality assurance, and crucially, sophisticated exception handling. When an agent encounters a novel situation or its output deviates, the "agent boss" must diagnose, correct, and potentially retrain the agent. It’s akin to managing a portfolio of sophisticated, automated investments or being the chief engineer of a complex digital process.
The "Informed Trust" Dynamic: As one Microsoft researcher noted, "Working with agents is like onboarding a new team member—you don't micromanage, but you need informed trust." This trust isn't blind; it's built on understanding the agent's capabilities, limitations, and the continuous validation of its outputs.
The Skillset of an "Agent Boss": A Spectrum of Responsibility:
Senior "Agent Architects": At the higher end, directing complex, multi-agent systems or agents involved in high-stakes, nuanced work will undoubtedly require deep domain expertise and exceptional judgment. These individuals will be akin to senior strategists or master artisans, guiding AI to tackle sophisticated challenges. Their experience allows them to discern subtle flaws in AI output, anticipate risks, and make critical judgment calls where AI's reasoning is insufficient.
Developing Future Talent: However, not all "agent boss" roles will immediately require this level of mastery. There's a plausible pathway for more junior individuals to step into roles managing simpler, more circumscribed agents or overseeing specific, less critical parts of an automated workflow. This could involve tasks like fine-tuning pre-defined agents for specific projects, monitoring agents performing routine data collection, or being the first line of review for AI-generated content before it's passed to a more senior expert. This approach provides a crucial learning ground.
The Looming Challenge: Cultivating Future "Agent Bosses"
This tiered view of "agent bossing" directly addresses a critical challenge: the talent pipeline.
The "Smarter than Juniors" Conundrum & The New Learning Curve: If AI agents can perform many traditional entry-level tasks faster and more consistently than human novices, how do these novices gain foundational experience? The answer may lie in redefining entry-level work. Instead of purely manual execution, junior roles could evolve to include:
Managing Simpler Agents: As mentioned, giving junior staff responsibility for well-defined agents with limited scope allows them to learn the principles of AI management, prompt engineering, and output validation in a lower-risk environment. They learn by directing AI rather than just being replaced by it for execution.
AI as an Accelerated Learning Tool: Ironically, AI itself could be part of the solution. Juniors could use AI assistants as interactive tutors, simulators, or "sparring partners" to rapidly acquire domain knowledge and test their understanding.
Focus on "Human-in-the-Loop" Excellence: Training will need to pivot heavily towards skills like critically evaluating AI-generated content (even from simpler agents), understanding AI ethics and bias, basic prompt engineering, and the ability to synthesise information from AI tools.
New Apprenticeship Models: Senior "agent bosses" will mentor juniors not just in domain knowledge, but specifically in the art and science of working with and managing AI. This becomes a core part of transferring judgment and strategic insight. This mentorship can guide juniors as they take on progressively more complex agent management responsibilities.
This pathway allows individuals to build the necessary domain knowledge and judgment incrementally, starting with more defined agent interactions and gradually moving towards managing more complex and strategic AI systems.
The Ripple Effect: From Org Charts to Office Bricks
If corporate structures and roles are reshaped by this AI-driven leverage and skill shift, the implications for physical office space are unavoidable:
Potentially Significant Reduction in Overall Headcount for Execution: If AI Agents successfully automate a large volume of execution-oriented tasks, the simple arithmetic suggests fewer human employees will be needed for those specific functions. This isn't to say overall employment will plummet (new roles may emerge), but the headcount for traditional operational roles could see a marked decrease.
The Highly Leveraged Individual: The "agent bosses" and other key human strategic thinkers will be capable of vastly greater output, supported by their digital teams. Their individual productivity soars.
Reassessing Office Demand – A Multi-faceted Impact:
Quantitative Reduction: Fewer execution-focused personnel directly translates to a reduced need for individual workstations and overall square footage. If one "agent boss" can oversee the work previously done by a team of 10 or 20, the baseline demand for desks diminishes proportionally.
Qualitative Shift in Office Purpose: The role of the physical office will likely continue its evolution away from being primarily a place for individual, heads-down routine work. Instead, it will become even more critical as a hub for:
High-Value Collaboration & Innovation: Workshops, strategic planning sessions, and complex problem-solving that benefit from face-to-face human interaction.
Client Engagement & Trust Building: High-stakes meetings where personal connection is paramount.
Culture Building & Mentorship: Fostering team cohesion, onboarding new (human) talent, and facilitating the kind of informal learning and mentorship crucial for developing future "agent bosses."
Space Design Transformation: This implies a demand for more flexible, technologically advanced meeting rooms, project spaces, and social hubs, and a significantly reduced need for dense banks of individual desks or traditional cubicle farms.
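The arithmetic behind the quantitative reduction is easy to sketch. All the inputs below are hypothetical assumptions chosen purely to show how "agent boss" leverage flows through to desks and square footage:

```python
# Back-of-envelope sketch: how agent-boss leverage translates into
# workstation and floorspace demand. All figures are assumptions.
headcount = 1000            # current employees (hypothetical firm)
execution_share = 0.60      # fraction in execution-oriented roles
leverage = 10               # one "agent boss" directs the output of ten
sqm_per_desk = 10           # space per workstation, incl. circulation

execution_staff = headcount * execution_share
agent_bosses = execution_staff / leverage
new_headcount = (headcount - execution_staff) + agent_bosses

desks_saved = execution_staff - agent_bosses
space_saved_sqm = desks_saved * sqm_per_desk
reduction = desks_saved / headcount

print(f"Headcount: {headcount} -> {new_headcount:.0f}")
print(f"Desks saved: {desks_saved:.0f} ({reduction:.0%} of current)")
print(f"Space freed: {space_saved_sqm:,.0f} sqm")
```

With these particular assumptions the desk count falls by roughly half, which sits inside the 30-60% range discussed above; the output is obviously only as good as the leverage and execution-share assumptions fed in.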
A Future of Nuance, Not Absolutes
It's crucial to temper this outlook with realism. True, disruptive innovation, deep strategic foresight, genuine human empathy, and navigating unprecedented "black swan" events will remain firmly in the human domain. Even in largely repetitive processes, the "nuance or variety" – adapting to contextual shifts, spotting subtle inefficiencies, or handling unique customer needs – will necessitate ongoing human judgment and oversight. The role of the "agent boss" will be to manage this interface between AI's efficiency and humanity's adaptability and insight.
However, the trajectory suggests a future where many routine business operations are indeed heavily automated. We’ve written before about ‘Fast, agile, ultra-productive superteams’ https://www.flexos.work/trillion-dollar-hashtag/10-themes-for-the-next-ten-years-number-3-trillion-dollar-hashtag-4 - and the rise of AI Agents complements this. Real estate strategies will have to adjust. Barring the building of a significantly larger economy, we’ll clearly need fewer ‘knowledge workers’, whilst those we do need will be dramatically more productive. Winners in real estate will be those who can anticipate these new realities and provide the right spaces for them. And the timescales are short - less than five years. Early adopters are already building ‘AI Agents’; by 2030 they’ll be everywhere, and every company of consequence will be operating in a new way.
Over to you
Are you seeing any of this? How does your portfolio look? Resilient or Redundant? What’s your next step?
Don’t AI The Past
‘Most businesses today are using AI to incrementally improve what exists, not fundamentally rethink what could exist. But we’re at a technological inflection point that demands a different kind of response - not evolution, but revolution.’
According to a recent large-scale, global survey conducted by Cisco:
‘CEOs harbour a range of fears regarding artificial intelligence (AI) - both business-related, such as falling behind competitors, and personal, like lacking the knowledge to ask the right questions in board meetings. Despite this, four out of five company leaders plan to adopt or expand AI use in their operations, although just 2% believe their organisations are prepared to do so.’
It’s what Marc Andreessen of famed VC A16Z calls ‘the sixth bullet point’. Every presentation now has a sixth bullet point added saying ‘WE DO AI’ in some form or other.
Evidence is all around of an accelerating corporate pressure to "get AI into production" - often driven by boardroom anxiety and FOMO rather than clear strategic thinking.
Unfortunately this is leading to a great deal of ‘AI-ing the Past’: after years of ‘digitising the past’, we’re now seeing companies do the same with AI, bolting it onto their existing processes and workflows. And software companies in real estate are amongst the worst offenders. Despite being notoriously slow, en masse, to innovate, they now profess to be fully AI-enabled. To be fair, some are, but … well, I am sure you all have examples that spring to mind.
So it’s really no great surprise that BCG have reported that ‘70% of AI pilot projects fail to deliver meaningful business value’. Having the blind lead the blind is not a good idea. Rushing to tactically deploy AI with little or no strategic thought is just a fast route to chaos. And disappointment. And endless ‘I’ve used it but wasn’t impressed’ comments.
This incrementalism has a long history. It’s the natural progression of Charles Handy’s ’S Curve’ where one gets a burst of innovation, then a long period of iteration as products and services are rolled out across the mainstream, before an inevitable levelling off and stagnation. Until ‘The Second Curve’ arrives, and the process starts again. The danger point is where we are now, at that liminal stage between paradigms - the pre-AI and post-AI Age. Because chasing efficiency in a declining paradigm is a losing game. Technology inflections like we are experiencing now demand re-imagination, not optimisation.
The problem is having the wisdom to slow down to speed up. Not speed up to slow down, which is what those ‘four out of five company leaders’ are doing.
Instead of thinking how can we rapidly be 10% better, faster, cheaper, the inevitability of what is coming down the AI track (see everything I have been writing for the last two years for supporting evidence) should be forcing us to adopt a 10X mindset:
"What becomes possible that was impossible before?"
"Which constraints can we eliminate entirely?"
"How would a company born today approach this problem?"
On the #GenerativeAIforRealEstatePeople course we explore this in detail, but the crux of the matter is to dissect your business at a very granular level: see where AI can and cannot be applied, identify the high-value use cases, and work out what is economically viable and what scales.
To do this you first need to understand enough about AI capabilities to be able to apply any judgement, and to have a solid awareness of the direction of travel of this technology. A great deal that cannot be done today WILL be possible within 12-18 months. So one needs to be thinking about modularity and continuous ‘unbundling/re-bundling’. There can be no fixed five year plan - flexibility and adaptability need to be built into any new business design.
Given the effort required for this, one can see why so many just add a sixth bullet point. But, as I repeatedly say, the complexity is the opportunity. The Bug is the Feature!
The most sophisticated will be looking for what’s known as AI Synergy. Mostly, AI can enable you to do more with less, faster, and better than you previously could, but such ‘Augmentation’ tends not to enable breakthroughs, in the sense of performing better than a top performer working alone could. AI Synergy, though, is where ‘human + machine’ acting together can achieve things neither could achieve alone. Where 2+2 really can make 5, or more.
Evidence for the existence of ‘Synergy’ came from a late-2024 meta-analysis of 106 studies, conducted by MIT, on AI and humans working together. It found that there are four scenarios where ‘Synergy’ is possible.
These are:
1. When Tasks Have Both Pattern and Exception
For example, CRE workflows where Generative AI can handle standard patterns, such as drafting documents or creating templates, while humans manage exceptions or unique cases.
2. When Scale Meets Judgment
Where Generative AI can produce scalable outputs, like generating personalised communication or high-volume content, that humans then refine with contextual judgment or specific expertise.
3. When Creativity Needs Structure
Where Generative AI can generate multiple creative options, such as space layouts or branding concepts, which humans curate and align with strategic goals.
4. When Analysis Meets Intuition
Where Generative AI generates scenario options or storytelling elements that humans enhance with intuitive insights, contextual understanding, or strategic planning.
The key principle is finding workflows where AI and humans play to their strengths while compensating for each other's weaknesses. This is where true transformation lies.
Which of course sounds easy, but is not. Whilst on a day-to-day basis, within one’s existing workflows, there is already a huge amount of utility that LLMs can bring to bear (content creation/modification, language understanding, data synthesis/summarisation, creative/design tasks, routine communications and so forth), really making it count at a company level IS going to require major reconstruction and redesign.
There’s the famous saying about ice hockey star Wayne Gretzky ‘skating to where the puck is going’ but even that isn’t the right analogy. Because AI most likely will be changing the game itself. Figuring out what the new rules are, how to win, and how to play offence and defence is going to require more than a bit of tinkering and some bolt-on features.
I think you need to be thinking along these lines;
Overall: What will the best company in my sector look like in 3-5 years? (bearing in mind that might mean we have 100X more AI compute power by then)
Step 1: Unbundle the Current Company:
Deconstruct your current value chain.
Ask: Which activities, products, and customer interactions are based on historical constraints rather than today’s possibilities?
Identify legacy structures that can be entirely removed, not just improved.
Step 2: Imagine the 'Super Company'
If you were building a competitor today from scratch using today’s and tomorrow’s AI capabilities, what would it look like?
Think in terms of:
Instant data-driven decision making
Hyper-personalised products and services
Continuous capability reinvention
Operating models that assume abundant intelligence rather than scarce expertise
Step 3: Rebundle with New Capabilities
What new business models, customer experiences, and operational practices become possible?
Example:
AI-native companies treat information flow as a live system - insights update operations in real time.
Sales, support, R&D, and supply chain are interconnected and autonomously optimised.
90% of internal processes could be automated or augmented.
All of which is quite discombobulating. Unless you are living/breathing ‘Planet AI’ much of this no doubt ‘feels’ a bit hyperbolic. This much change in this short a period of time seems to be over-egging it.
But we are beginning to see early signs of what ‘Second Curve’ companies might look like - even in the traditionally cautious professional services sector. Take, for instance, Unity Advisory…
Last week the Times newspaper in London carried this story:
‘Ex EY and PwC executives launch Unity Advisory to challenge Big Four in UK. Backed by up to $300 million from Warburg Pincus, the venture is due to commence operations by June’
A small team, led by two top tier ex ‘incumbents’, with money to grow and a remit of ‘completely different’.
Maybe the future IS arriving faster than we thought? AI-ing the past is no longer enough.
Checklist: Are You 'AI-ing' the Past?
Are you simply automating current processes without questioning them?
Are your AI initiatives tied to old KPIs and success metrics?
Have you mapped what a 10X better product or service might look like?
Are you structured to adopt new AI capabilities rapidly as they emerge?
Is someone in your company thinking 2 years ahead, not 2 quarters?
The AI Diffusion Dilemma: What It Means For The Office Of 2035
‘Will AI follow the decades-long, bumpy rollout of electricity, or will AGI automate most knowledge work by 2027? Experts are fiercely debating, and the answer could determine whether the office towers we build today are essential hubs or expensive relics by 2035’
Commercial Real Estate is really a game for futurists, isn't it? Most industries go from input to output in a matter of days, weeks or months, but in real estate, when we commit to a new project, or when we buy or sell an asset, we are really betting on how the world will be many years, if not decades, hence. We develop all these models explaining, to two decimal places, what our returns will be over the next decade, but we know we’re dressing up educated guesses as scientific fact. All models are wrong, but some are useful, etc. Even if we mostly keep quiet about this.
Last week, we took the line that in a world where generic real estate is increasingly challenged by technological shifts, a wise strategy would be to invest in the real estate those shifts create—not in the real estate they leave behind.
Few assets sit more squarely in the blast radius of this debate than the office. If AI augments, the office adapts. If AI replaces, the office shrinks. And fast.
In the AI world we are heading into, the problem is there are experts to support both arguments.
Two Futures of AI: Slow Burn or Flash Fire?
Two major publications have recently come out: one (‘AI as Normal Technology’) by senior, highly respected academics at Princeton University, and one (‘AI 2027’ - ai-2027.com) by a team of senior, highly respected AI researchers.
The Princeton View: AI Will Take Decades
Let’s start with the Princeton paper ("AI as Normal Technology"). In this, Arvind Narayanan and Sayash Kapoor…
"explain why we think that transformative economic and societal impacts will be slow (on the timescale of decades), making a critical distinction between AI methods, AI applications, and AI adoption, arguing that the three happen at different timescales.”
Whilst they agree that AI is a ‘General Purpose Technology’ and thus of great importance, they justify their timeline of decades because:
"It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion... With past general-purpose technologies such as electricity, computers, and the internet, the respective feedback loops unfolded over several decades, and we should expect the same to happen with AI as well.”
The key arguments for a long, drawn-out impact scenario for AI are that:
Innovation is one thing and diffusion quite another. Getting to broad adoption always takes a long time because reality gets in the way, and that flat, clear road ahead turns into a glacial moraine.
Speed limits are everywhere - safety concerns (especially in high-stakes areas), regulatory hurdles, organisational inertia, the need to redesign workflows, and the fact that Generative AI is probabilistic, not deterministic; so it just sometimes fails, in unexpected ways.
Humans will get in the way (a point incidentally that Tyler Cowen elaborated on recently). They just tend to slow things down. Turkeys seldom vote for Christmas, so we can expect endless workflow engineering, a desire for ‘Control’, and the re-engineering of work that involves ‘humans in the loop’, sometimes where they absolutely will be required, but often for more spurious, dubious reasons.
Under this scenario knowledge work will likely be augmented by AI, which will mean offices change their exact purpose and internal form factors, but the fundamental need for human workers - gathering, collaborating and managing processes (including, of course, AI itself) - will persist and evolve relatively slowly. So mass obsolescence gets kicked a long way down the road.
Current support for this view lies in the old saw that most enterprises are only now rolling out technology that was ‘hot’ 10-15 years ago, whilst startups long ago got bored of this, moved on, and are developing the mainstream tech of 10-15 years hence.
So all is OK: the largest commercial real estate asset class is alive, kicking and has a healthy future.
The AI 2027 View: Cognitive Labour is Going Exponential
Ah, but…
If Princeton sees AI as a slow-burning revolution, the AI 2027 authors see it as a flash fire—one that could consume entire categories of cognitive work in just a few years.
Their perspective, laid out on ai-2027.com, is starkly different. They argue that:
"AGI is defined as AI capable of performing the vast majority of human knowledge work."
"Recent breakthroughs indicate that the timeline for AGI could be significantly shorter than previously anticipated, measured in years, not decades."
"The arrival of AGI by 2027 would represent an unprecedented transformation, automating most cognitive tasks and fundamentally reshaping the economy and society."
"Understanding this timeline is crucial for individuals, organizations, and governments to prepare for the profound changes ahead."
And yes, it might be different this time:
"Unlike previous general-purpose technologies, AGI's ability to automate cognitive labor itself could lead to recursive self-improvement and an exponential acceleration of progress.”
In a nutshell, the authors predict imminent, revolutionary change driven by rapid capability gains in cognitive automation. This will be driven by what is known as ‘recursive self-improvement’ (RSI), which is essentially where a human is no longer needed to help a computer learn: systems start to learn by playing against themselves. It is why, roughly eighteen months after Google DeepMind’s AlphaGo programme famously beat the world’s best player of Go, Lee Sedol, its self-taught successor (which incorporated RSI), AlphaGo Zero, beat it 100-0 after just a few days of training.
Once RSI kicks in the speed of improvement moves to an entirely different level.
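To make the self-play idea concrete, here is a toy sketch. This is emphatically not DeepMind’s method; it is the classic ‘fictitious play’ algorithm applied to rock-paper-scissors, chosen only because it is the simplest illustration of the same principle: two copies of a system learn purely from each other’s play, with zero human examples or feedback, and still converge towards good strategy. All names and parameters below are illustrative.

```python
# Toy illustration of learning by self-play (fictitious play in
# rock-paper-scissors) -- NOT AlphaGo Zero's actual algorithm.
# Each side best-responds to the other's historical move frequencies.
# With no human input at all, play converges towards the balanced
# (Nash) strategy of one third rock, one third paper, one third scissors.

BEATS = {0: 2, 1: 0, 2: 1}  # rock(0) beats scissors(2), paper(1) beats rock(0), etc.

def best_response(opponent_counts: list[int]) -> int:
    """Play the move that beats the opponent's most frequent move so far."""
    likeliest = max(range(3), key=lambda m: opponent_counts[m])
    return next(m for m in range(3) if BEATS[m] == likeliest)

counts_a, counts_b = [1, 1, 1], [1, 1, 1]  # small priors so history is never empty
for _ in range(30_000):
    move_a = best_response(counts_b)  # A learns from B's history...
    move_b = best_response(counts_a)  # ...and B from A's
    counts_a[move_a] += 1
    counts_b[move_b] += 1

freqs = [c / sum(counts_a) for c in counts_a]
print([round(f, 2) for f in freqs])  # each frequency settles near 0.33
```

The point of the sketch is the shape of the loop, not the game: the training signal is generated entirely by the system’s own play, which is the property that, at vastly greater scale, lets an AlphaGo Zero-style learner surpass its human-taught predecessor in days.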
At this stage, one refers back to the academics’ argument and thinks: yes, but … all those constraints. Do they disappear?
The Office in the Blast Radius
To which the ‘2027’ authors respond with:
Safety/Reliability? Sorted, because the AI will debug, test and validate itself far faster and more thoroughly than any human can, and will rapidly overcome these obstacles.
Integration/Workflow Redesign: Same again, the AI will be able to design optimal, new workflows, and then create compelling user interfaces that drastically reduce the normal friction of adoption. The new systems will simply be so much better, so quickly, that rolling them out will be nowhere near as painful as it has historically been.
Learning Real-world Nuance: And again, by being such a fast learner, and so easily plugged in to all the systems a company has, the AI will be able to gain an appreciation of the ‘tacit knowledge’ of any organisation very quickly. Achieving quality ‘gut feeling’ is no longer going to take years or decades of experience.
Overall this AI will be able to demonstrate deep domain knowledge and high-value utility at a speed no previous technology could. And whilst the authors explicitly acknowledge the arguments put forward by the Princeton academics, they point out that the key differentiator, this time, is the nature of the technology itself.
For instance they emphasise that we are talking about the automation of cognition. The AI will be automating cognitive labour, including the development of the technology itself. Electricity didn't design better power plants; the early internet didn't autonomously code better network protocols. AI, they argue, can do this.
They also discuss the generality of this technology, in that LLMs (and future AGI) just have so much more utility straight ‘out of the box’. They aren’t reliant on much else happening before they can be run at full power. And this is software, not hardware, so scaling is nigh-on infinite, marginal costs are close to zero, and deployment can happen in hours, not months or years.
Returning to the fundamental point, once the underlying mechanism (self-improving cognitive automation) is in place, we are genuinely talking about technologies the likes of which we’ve never seen before.
The Planning Dilemma: What to Believe, and When?
So we have two credible, well-articulated arguments reaching diametrically opposed conclusions. In one, offices are pretty much safe for decades to come; with the other, the assumption would have to be that we’d need a bare fraction of the global office stock that exists today.
I think it would be quite easy, because the conclusions sound so ‘out there’, to dismiss the ‘AI 2027’ argument, and comfortably luxuriate in the ‘so slow I don’t really need to think too much’ prognosis of the Princeton academics. And I am equally certain many will.
But, for me, that is the high-risk route to take. Yes, they are all, to varying degrees, talking their book, but whenever you read, or listen to, the senior researchers from all the major AI research labs (not just the admittedly most adamant ‘2027’ authors), they all paint a picture of a technology developing at crazy speed, with capabilities arising that constantly amaze them. Everyone is assuming extraordinary things by 2030. And no one is talking in decades any more.
The 2030–2035 Danger Zone: A Strategic Red Flag
So the really critical issue is whether the argument that the ‘normal’ diffusion speed of technology will be accelerated by the new capabilities delivered via RSI is valid. I think it is, though my accelerator is less ‘to the metal’ than the ‘2027’ authors’. I would say the asset class is safe until 2030. But 2030-2035 is the danger zone.
Even without full-blown AGI, highly capable, specialised AI systems could automate vast swathes of tasks within knowledge jobs, leading to:
Significant job restructuring even if not mass unemployment.
Reduced overall headcount needed for certain functions.
Downward pressure on wages for easily automated cognitive tasks.
A potential bifurcation – high demand for those who manage/direct AI and perform complex non-routine tasks, lower demand elsewhere.
This feels very reasonable and, I would guess, likely to highly likely.
And that is just up to 2035, a decade hence. Beyond that, being worried about the state of office demand in 2035 is entirely rational and prudent. I think the inertia elastic band will snap by then. In a future newsletter we’ll look closely at how to be ready when it does.
Strategic Takeaways for the Office Asset Class
Which does not mean NO offices will be required. But I’d sure as hell be concerned about owning the ‘right’ offices. What ‘right’ means we have covered before and will cover again.
The bottom line, though, appears to be that offices as an asset class are getting riskier. I do understand the Princeton academics’ arguments, but find it hard to believe the coming decade is going to be as stodgy as they believe. Most pertinently, I think there will be very many slow-moving enterprises, maybe a majority, but I also believe that we’re morphing into an age of fast, agile, ultra-productive superteams (see https://www.flexos.work/trillion-dollar-hashtag/10-themes-for-the-next-ten-years-number-3-trillion-dollar-hashtag-4), and they WILL be utilising all the power of these new tools asap.
Over to You: Long or Short on the Office?
Offices 2030–2035: Are you betting on augmentation or automation? On inertia or exponentiality?
Because buildings built today will still be around in 2035. The only question is: will they still be needed?
What are you seeing? Let’s map the risk together.
‘AI as Normal Technology’ - https://kfai-documents.s3.amazonaws.com/documents/cb012bac2c/AI-as-Normal-Technology---Narayanan---Kapoor-Final.pdf