Antony Slumbers

The End of Architecture as We Know It

AI and the Built Environment — an honest reckoning

This week we have a special newsletter. Together with Sandeep Ahuja - co-founder of AI-native architecture practice cove - we've written up our thoughts on how the architecture profession needs to think about AI.

Every profession, when threatened by technology, reaches for the same two arguments. First: this tool cannot do what we do — our work is too complex, too contextual, too human. Second: even if it can, we will always be needed. Architecture has been reaching for both. Neither is quite wrong. Neither is anywhere near enough.

The honest conversation — the one that is harder to have — is not about whether architects survive. It is about what the profession actually is when the production layer underneath it becomes cheap, fast, and largely automated. What is left when the work that consumed most of the hours is no longer where the value lives? What does the architect become, and what does the built environment gain or lose in the process?

Between us, we have been watching this intersection of technology and real estate for a long time — one from the vantage of three decades in PropTech and real estate strategy, the other from building AI systems that are actively redesigning how architecture gets done. We do not agree on every detail of how this plays out. But we agree that the profession deserves a more honest account of what is coming than either its defenders or its critics have been willing to give it.

The wrong question

The industry keeps asking whether AI will replace architects. That question generates more heat than light. A more useful one is: what share of what architecture currently charges for will, within five years, not be worth paying for at current rates — and what becomes newly possible and newly valuable in its place?

When you ask it that way, the answers get uncomfortable fast. AI does not replace the architect. But it replaces, comprehensively and at accelerating speed, the work that the majority of the profession’s workforce has historically been paid to do: translating concepts into documentation, applying codes to floor plans, generating massing options, iterating through compliance across building types and jurisdictions. For most firms doing volume residential and commercial work, this is not the margin of the profession. It is the core.

Dario Amodei, CEO of Anthropic, argued in his January 2026 essay on AI risk that we are compressing what would historically be a century of economic evolution into five to ten years. His warning was about white-collar work broadly. It applies with specific force here. The institutional protections architecture has always relied on — licensure, liability, the genuine complexity of construction — protect the licensed professional at the moment of signing. They do not protect the entire workforce that existed to support that professional, and they do not protect the billing model that assumed that workforce would always be necessary.

What the licensed professional actually becomes

Here is where the conversation in the profession tends to go wrong. The assumption, often unstated, is that as AI handles more of the production, the architect’s role shrinks toward oversight — a reviewer of machine output, a checker, a last pair of eyes before the stamp. That is not only insufficient as a vision. It is the wrong model entirely.

The licensed professional in an AI-native practice is not a quality controller. They are the director. They set the vision, define the constraints that matter, make the judgment calls that cannot be encoded — and then deploy AI to execute against those decisions with a speed and thoroughness that no traditional team could match. The work expands in ambition because the cost of iteration collapses. A principal who once had to ration design exploration because every option cost weeks of human time can now ask genuinely harder questions: What does this site want to be? What does this community actually need? What happens if we challenge the brief entirely? The machine runs the scenarios. The architect decides what they mean.

And this is where something that has always been central to great architecture — but rarely named directly — becomes the defining professional asset. Taste. When AI can generate a hundred viable options where a conventional process produced three, the critical act is no longer generation. It is selection and direction. It is knowing which of those options is not just technically correct or financially optimised, but genuinely right — for the place, for the people, for the cultural moment. Steve Jobs once said the problem with certain technology companies was that they had no taste — that they could execute without any sense of what was worth executing. Architecture has always depended on taste at the highest level. AI does not diminish that. It raises the stakes for it, because taste is now the thing that cannot be automated and increasingly cannot be hidden behind production.

This also changes what the stamp represents. When AI produces the documentation, the architect who signs it is asserting professional judgment over a system they directed and a process they are accountable for understanding. That is a more demanding standard of accountability, not a lighter one. The firms that grasp this are building rigorous human-in-the-loop workflows that reflect it. The ones that do not are the ones who will find the liability they thought AI was absorbing lands squarely back on them.

“When AI can generate a hundred options, the critical act is no longer generation. It is knowing which one is genuinely right. That is taste - and it is profoundly, irreducibly human.”

Where the value rebundles

AI removes scarcity of time and cognitive bandwidth. The analysis that took weeks takes minutes. The iteration that required a team of four requires one. But it introduces new constraints in place of the ones it removes: verifiability, governance, accountability for decisions made at machine speed. The contested urban site, the complex adaptive reuse, the project where a misjudgement is fixed in concrete for fifty years — these demand human direction not because AI lacks capability, but because the consequences are irreversible and the judgment calls genuinely cannot be automated. That is where the profession’s value has always lived at its most serious. AI does not erode it. It clarifies it, by removing everything else.

What this also does, on the engineering side, is worth naming clearly. A large share of structural and MEP work is structured, rule-governed production: modelling, detailing, documentation, coordination. That layer is largely automatable. What remains is systems thinking — understanding how one design decision constrains every other intertwined system, and making judgment calls when they conflict. That is a smaller slice of current engineering hours than the profession would like to admit. It is also the highest-value slice, and it is squarely human.

The firms that will navigate this are already examining their workflows with enough rigour to understand what belongs to each category. The ones that are not are making a strategic choice by default — and the window to make it deliberately is narrowing.

The question the profession is not asking

There is a significant upside to this shift that tends to get buried under the anxiety, but it comes with an honest question attached.

Most of the built environment has never been able to afford serious architectural services. The small commercial conversion, the affordable housing development, the institutional building sitting on obsolete space that no developer will touch because the economics do not pencil — these projects exist in a world where design quality has always been rationed by cost. AI does not just compress margins in the existing market. It makes it economically viable to bring genuine design intelligence — and genuine taste — to clients and communities that could never access it before.

But the harder version of this question is: when design costs fall significantly, who captures the savings? If the answer is primarily developers optimising yield, the built environment gets faster and cheaper without necessarily getting better for the people who occupy it. The profession has an opportunity here that it will only realise if it is deliberate — using the efficiency gains to take on more complex, more community-oriented, more ambitious work, rather than simply racing to the bottom on fee. That choice does not make itself.

“AI makes it viable to bring real design intelligence to places that could never afford it. Whether that benefit reaches communities or just margins is a choice the profession still gets to make.”

What to do with the uncertainty

Amodei has also said: “We, as the producers of this technology, have a duty and an obligation to be honest about what is coming.” We think that obligation extends to anyone with a clear view into what is changing in the built environment. The profession deserves honesty, not comfort.

The profession does not end. But the profession as it has existed — with its particular workforce, its particular economic logic, its particular model of where value sits — is being restructured around a different centre of gravity. The architect as production manager gives way to the architect as director: smaller teams, higher ambition, clearer accountability, and for the first time the computational firepower to actually match the scale of what the built environment needs.

That is not a diminished profession. It is, if the transition is navigated with honesty and intention, a more powerful one — defined not by how much it can produce, but by how well it can choose.

Antony Slumbers

The Dashboard Made Them Blind

Why workflow-level decomposition is the difference between AI that transforms and AI that quietly degrades

Most of what companies call an AI strategy is actually just a shopping list. Tools to evaluate, pilots to run, use cases to ‘explore.’ Which is fine, as far as it goes. It just does not get you very far. Most strategies quietly die in the gap between “we should use AI for asset management” and knowing what to build, where the human remains vital, and how the two combine to produce something genuinely better. This piece tries to close that gap.

EXECUTIVE SUMMARY

A credible AI roadmap starts by decomposing workflows to the task level. Each task then needs to be assessed through four questions: what constraints AI removes, what new ones it introduces, where value emerges across three horizons, and how the work should be redesigned so human and machine compound rather than compete. This piece applies that process to a single workflow: threat identification in an office REIT’s asset management cycle. The findings challenge assumptions: the same task can sit in different places on the CRE Automation Matrix depending on jurisdiction and data quality. Automating the measurable can degrade your capability in the unmeasurable. And the genuinely transformative horizon (H3) isn’t a better dashboard: it’s a different operating model for the function itself.
_______________
This essay pulls together several frameworks I’ve unpacked in recent newsletters - particularly RIRA, the CRE Automation Matrix, and the Prompting Framework - and applies them in detail to a single real estate workflow.
_______________

THE WORKFLOW

Over the past few months I’ve introduced a framework stack for creating value with AI in CRE: RIRA for strategy, the CRE Automation Matrix for analysis, and the Prompting Framework for execution. Together they answer three questions: how are we creating value, what kind of work is this, and how do we get it done?

This piece takes those frameworks off the shelf and applies them to a real workflow at the level of granularity a roadmap actually requires. The aim is not to be cheaper or faster, though you will be both. The aim is to be demonstrably, verifiably better: to produce a quality of output the traditional operating model cannot match. That requires surgical decomposition - seeing exactly where to push hard on the AI and where to optimise the human. Neither happens without genuine system redesign.

Take a hypothetical mid-cap European office REIT. 45 assets across the UK, Germany, and the Netherlands. Property management outsourced. Asset management team of twelve.

The asset manager’s recurring job is to answer four questions continuously:

How is income performing versus plan? 
What is threatening that income? 
What interventions will protect or grow it? 
What needs to happen next?


We’re going to take the second, threat identification, and apply the RIRA process. I’ve chosen this because it’s where the Automation Matrix classification is most contested, and where getting the human-machine boundary wrong creates real financial risk rather than just inefficiency.

Here’s what every asset manager knows but rarely says out loud: they are doing seven fundamentally different jobs simultaneously when they identify threats to income. And what RIRA reveals, when you decompose properly, is that those seven jobs need seven different human-machine configurations.

SEVEN JOBS, SEVEN DIFFERENT PROBLEMS

1. Lease event surveillance. 300+ breaks, expiries, rent reviews, and options in a rolling 24-month window. Knowing not just when they are, but which ones bite: a small lease that’s individually immaterial but leaves an entire floor vacant.

2. Tenant financial monitoring. Payment patterns, public financials, sector stress, balance sheet signals. Ranges from the trivially observable to the analytically complex.

3. Relationship intelligence. Tenant conversations, building walkthroughs, broker networks, the PM’s building manager mentioning something over coffee. None of this is in any database. For experienced asset managers, it is frequently the earliest threat signal.

4. Market threat assessment. Supply pipeline, rental direction, demand shifts, regulatory risk. The data exists; the interpretation of what it means for this specific building does not.

5. Physical and regulatory risk. EPC deadlines are rules-based and trackable. Building condition data is fragmented across PDFs, BMS systems, and within property managers’ heads.

6. Valuation trajectory. Forming a view on whether the external valuer will write an asset down. Which requires understanding not just the comparables but how the specific valuer thinks.

7. Synthesis. Pulling 1–6 together, reading the interactions between them, deciding what to escalate. This is where the experienced asset manager earns their salary.


The instinct, the one most AI strategies follow, is to scan this list and classify: automate surveillance, partially automate credit monitoring, leave valuation judgement alone. That classification is right only at a level of abstraction so crude that it becomes dangerous.

RELEASE: WHAT CHANGES AND WHAT GETS HARDER
Before deciding what to automate, you need to understand both the constraints AI removes and the ones it introduces.

RIRA starts with Release: map the constraints AI removes and the new ones it introduces. This is where the strategic picture forms.

What opens up:

The surveillance bandwidth constraint disappears. An asset manager with five multi-let buildings triages by rent quantum. Big tenants get attention; small ones get reviewed when something goes wrong. AI removes this: every event monitored continuously, regardless of size. This matters because the damaging event isn’t always the biggest tenant’s break. Sometimes it’s three small tenants all exercising in the same quarter, triggered by the same market dynamic, that nobody spotted as a pattern because each was individually immaterial.

Evidence assembly collapses from hours to minutes. Portfolio-level pattern recognition becomes possible for the first time: correlated threats across assets that no individual has the bandwidth to see. Multi-scenario analysis per asset per quarter becomes routine instead of exceptional.

What gets harder:

The false negative problem. This deserves to be treated as a first-order design constraint, not a risk to manage after the fact. Automated surveillance monitors what it’s configured to monitor. New threat types - a planning authority changing designations, a shift in corporate occupier strategy, a regulatory change not yet enacted - won’t trigger alerts because nobody anticipated them when the rules were written.

Consider what has actually destroyed significant value in institutional office portfolios recently: COVID, the energy price spike, the acceleration of distributed working, regulatory tightening on Energy Star or EPCs. A monitoring system designed the year before any of these would not have caught them. A good asset manager, reading the market and talking to tenants, might have.

Then there’s automation-induced complacency. Almost every technology tool introduced into asset management over the past 15 years has exhibited the same pattern: the tool replaces attention rather than supplementing it, because people are busy and the tool gives them permission to stop doing the effortful work. Automated rent tracking was introduced; people stopped walking floors. Dashboards appeared; people stopped reading detailed PM reports.

The uncomfortable implication of this is that the better automated surveillance works for known threats, the more it can degrade your capability for unknown ones. Not fixable by better AI. This is a human behaviour problem that the Redesign phase of RIRA has to address directly.

Add the verification gap - who checks the AI-assembled threat picture when the asset manager’s implicit quality control came from assembling it themselves? And the PM data governance question - were those outsourced PM contracts even written to contemplate automated analysis? With second-order consequences like these, the constraints-introduced side of the ledger is at least as consequential as the constraints-removed side.

THE AUTOMATION MATRIX: WHERE CLASSIFICATION GETS INTERESTING
The mistake is to treat a workflow as one kind of work. It rarely is.

With constraints mapped, you classify each task on the CRE Automation Matrix: what kind of work is this, and how verifiable is it? The generic classification - lease surveillance is Quadrant A, tenant credit is Quadrant B, valuation is Quadrant D - looks clean.

But it’s wrong in three ways that matter.

The same task sits in a different quadrant depending on jurisdiction. Lease event surveillance for well-abstracted UK leases with verified data is Quadrant A: rules-based, testable, automatable. But German commercial leases operate under BGB provisions with fundamentally different mechanics, from how break rights and rent adjustments work to how formal documentation requirements apply. Even after the 2025 reform that relaxed the old Schriftform written-form standard, new complexities have emerged around documentation and the legal status of informal agreements. Dutch office leases under Article 7:230a of the Civil Code have their own distinct framework again: different renewal and termination mechanics, a critical classification question about which statutory regime applies, and materially different consequences depending on the answer.

One workflow. Three markets. Three different quadrant positions. Build a system for the UK classification and deploy it uniformly: reliable for English leases, potentially dangerous elsewhere. The point generalises beyond these three jurisdictions: wherever you operate across borders, the automation profile is jurisdiction-specific. Local knowledge isn’t a nice-to-have. It’s a structural requirement.

The same task shifts quadrant depending on data quality. Tenant credit analysis is Quadrant B where data is clean: public financials, credit ratings, structured sector data. For privately held Continental European tenants where public data is limited or delayed, the same analysis moves into hard-to-verify territory. The evidence base isn’t there. You’re relying on relationship intelligence, not structured data.
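The two shifts above — the same task changing quadrant by jurisdiction and by data quality — can be made concrete with a toy classifier. This is purely illustrative: the article names Quadrants A, B, and D by example, so the C label and all function, parameter, and value names here are hypothetical, not part of any published CRE Automation Matrix API.

```python
from enum import Enum

class Quadrant(Enum):
    A = "rules-based, verifiable"        # e.g. UK lease surveillance on verified data
    B = "analytical, verifiable"         # e.g. tenant credit with clean public data
    C = "rules-based, hard to verify"    # hypothetical label for this sketch
    D = "judgement-led, hard to verify"  # e.g. valuation trajectory

def classify_task(task: str, jurisdiction: str, data_quality: str) -> Quadrant:
    """Toy classifier: the same task lands in different quadrants
    depending on jurisdiction and data quality."""
    if task == "lease_surveillance":
        # Well-abstracted UK leases with verified data are automatable;
        # German (BGB) and Dutch (Art. 7:230a) mechanics need local review.
        if jurisdiction == "UK" and data_quality == "verified":
            return Quadrant.A
        return Quadrant.C
    if task == "tenant_credit":
        # Clean public financials are verifiable; privately held Continental
        # tenants push the same analysis into hard-to-verify territory.
        return Quadrant.B if data_quality == "verified" else Quadrant.D
    # Default to human-led when the task type is unknown.
    return Quadrant.D

print(classify_task("lease_surveillance", "UK", "verified").name)  # A
print(classify_task("lease_surveillance", "DE", "verified").name)  # C
print(classify_task("tenant_credit", "NL", "limited").name)        # D
```

The design point is the one in the text: the classification function takes jurisdiction and data quality as inputs, so a system built against the UK row and deployed uniformly is, by construction, mis-classifying everywhere else.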

Automating the measurable degrades your capability in the unmeasurable. This is the finding that matters most. Build a tenant credit dashboard. It monitors payment patterns and public financials well. It generates confidence. And because the dashboard is ‘handling’ tenant risk, the pressure on asset managers to maintain their tenant relationships - the conversations, the site visits, the reading of signals - quietly diminishes. People are busy. The dashboard gives them permission to stop.

But the tenant about to exercise a break doesn’t always show up in the financials. They show up in how they’ve stopped investing in their fit-out, the half-empty floors, the facilities manager who mentions in passing that the company is looking at options. That intelligence comes from being present. If you’re not present because the system is ‘handling it,’ you’ve automated yourself into a blind spot.

THREE HORIZONS: FROM FASTER TO FUNDAMENTALLY DIFFERENT
Once the task types are clear, the question becomes not just what gets cheaper, but what becomes possible.

H1: the efficiency gains. Automated calendars, payment dashboards, market data assembly, compliance trackers. Worth doing immediately. Table stakes within two years. Same picture, faster. Every REIT will build these.

H2: the capability upgrade. Evidence-linked threat assessments: every claim traced to source - the lease clause, the comparables with adjustment logic, the payment history, the sector outlook. The asset manager doesn’t assemble the picture; they evaluate it. A fundamentally different use of their time.

Portfolio-level pattern detection: a capability that doesn’t exist today in any form, even manual. The system surfaces correlated threats that no individual can see. The portfolio committee conversation shifts from “tell me about your buildings” to “here are the portfolio-level patterns - which do we act on?”

These are frontier capabilities - cognitive work made verifiable through evidence engineering. Hard to build. Hard to replicate. Where defensible competitive advantage lives.

H3: the transformation. Now apply the H3 Provocation Framework - the questions designed to push past efficiency and capability into genuine structural change.

What becomes free, and whose business breaks? The binding constraint in threat identification is the asset manager’s cognitive bandwidth: deep monitoring of 3–5 assets, not more. If AI removes the surveillance and evidence assembly constraints, a senior AM can maybe oversee 8–10. The relationship layer sets the ceiling - you can’t maintain deep tenant intelligence across fifteen buildings - but it rises meaningfully. Whose business model depends on the current bandwidth constraint? Every outsourced AM provider whose fee assumes the current ratio of human attention per asset.

What are you actually selling? Asset management fees bundle information assembly, analysis, market knowledge, relationships, judgement, and accountability. AI commoditises the first two. The premium concentrates on relationships, judgement, and accountability; perhaps 40% of where time currently goes. If clients see that 60% of the bundle is automated, the fee holds only if the remaining 40% is demonstrably better.

If someone built an asset management platform from scratch today, would they build what you’ve got? They’d build continuous surveillance and evidence assembly as infrastructure, hire a small team of very senior people whose job is relationships, judgement, and accountability, and pitch: “Better asset management at lower cost, because our platform handles what your AMs do manually, and our people focus on what requires human judgement.”

After this shift, whose signature still matters? The asset manager’s threat assessment currently carries authority because they assembled it. In the new model, their signature means: “I validated the AI’s assessment, applied my contextual knowledge, and I endorse this view.” Still valuable. But a different job. The skills shift from production to evaluation - and evaluation is a more senior skill. The junior analyst who helped assemble the picture may not have a role. The senior AM who can challenge an AI-generated assessment is more valuable than ever.

Who has this problem right now, and what are they paying to solve it badly? Every investor surprised by a write-down. Every portfolio committee that asked “did we know about this?” after a tenant default. Every fund manager who lost a mandate because a competitor seemed more on top of their portfolio. They’re all paying for the absence of continuous, evidence-linked threat intelligence. They’re just paying in lost value and lost mandates rather than invoices.

Apply the test: does it make the current process obsolete? If continuous threat intelligence exists, the quarterly “present your buildings” review becomes unnecessary. The one-person-owns-five-buildings-end-to-end structure becomes a legacy. That’s not a faster taxi. It’s a different way of getting around.

REDESIGN: HUMAN + MACHINE, BY DESIGN

RIRA’s Redesign phase turns the strategic picture into operational architecture. The key insight is blunt: the current model asks one person to do everything. They spend 60% of their time on data assembly and routine monitoring, 40% on interpretation, judgement, and relationships. The plumbing subsidises the cognition. That’s backwards.

The redesign separates four layers:

A continuous surveillance layer: lease events, payment patterns, market data, comparables, regulatory deadlines. Jurisdiction-specific rules. Data confidence indicators showing what’s verified and what hasn’t been. Always on.

An evidence assembly layer triggered by events: structured, cited assessments where every claim traces to source.

A portfolio pattern layer - genuinely new - surfacing correlations that no individual can see.

A human judgement layer with two explicitly protected activities. First: evaluate, challenge, and override the automated assessments using contextual knowledge the system doesn’t have. Second: maintain and deepen the relationship intelligence that detects the threats machines cannot see. Building visits, tenant conversations, PM relationships, market presence. This is not leftover work after automation. It is formally specified, time-protected, and treated as core capability - because the Release analysis told us that automation of known-threat surveillance will degrade unknown-threat detection if the relationship layer isn’t deliberately preserved.

This aligns with what recent research on human–AI collaboration keeps finding: the gains come not from replacing humans wholesale, but from designing workflows where pattern meets exception, scale meets judgement, analysis meets intuition, and creativity meets structure. Real synergy - where 2+2 genuinely makes 5 - is a workflow design outcome, not a purchasing decision.

The result: the senior asset manager spends 80% of their time on work only they can do, instead of 40%. Their judgement improves because they’re not cognitively depleted by data assembly. The portfolio committee receives evidence-linked, auditable assessments. Cross-portfolio patterns are visible for the first time. And the human intelligence layer is stronger, not weaker, because it’s valued rather than incidental.

That is what human + machine looks like when it’s designed rather than accidental.

WHAT TO DO NEXT

The same process applies whether your workflow is leasing, fund reporting, valuation support, debt management, development monitoring, or facilities operations. The specifics change. The method doesn’t.

Take your most important workflow and ask:

  1. What are the discrete tasks? Decompose until each is a single kind of work.

  2. What constraints does AI remove? Bandwidth, evidence assembly, pattern recognition, analytical depth…

  3. What new constraints does it introduce? False negatives, complacency, verification gaps, data governance… Design requirements, not afterthoughts.

  4. How does classification change by jurisdiction and data quality? If it doesn’t change, you haven’t looked closely enough.

  5. Where should the machine run continuously?

  6. What human capability must be protected, not displaced? Time-allocated. Measured. Accountable.

  7. Where does the combination produce something neither could achieve alone?

That’s your roadmap. Everything else is decoration.

Antony Slumbers

AI: Widespread Adoption. Shallow Usage.

The 3rd Annual AI in Real Estate Survey reveals an industry that has access to AI - but not seriousness about AI.

In 2024, curiosity. In 2025, experimentation. In 2026? A gap. Between access and depth, between awareness and action, between what the industry says it believes and what it’s actually doing about it.

You can download the full report here.

The Headline

93% of respondents now have access to AI tools at work. That number is essentially settled.

But only 7% describe their organisation as fully integrated.

Two-thirds haven’t moved past basic chat. A third describe usage as entirely ad hoc - individuals doing their own thing, with no organisational strategy behind it.

The industry has the tools. What it doesn’t have is seriousness about the tools.

Everyone Has a Hammer. Almost Nobody Has a Blueprint.

The survey paints a picture of a sector that has passed the access test and is now failing the maturity test. The numbers tell the story clearly:

34% of organisations use AI on a purely ad hoc, individual-driven basis. Another 23% are still trialling. Only 34% have implemented AI in defined business areas, and a mere 7% are fully integrated.

Microsoft Copilot leads daily usage (49%), largely because it’s the only tool many organisations sanction. ChatGPT follows (37%), but with a big caveat: much of that usage is shadow AI - individuals using personal accounts because they find the approved tools less capable. One respondent put it bluntly: their company provides Copilot, but for non-sensitive work they find Gemini, Claude and ChatGPT give better answers.

This is significant. Corporate AI strategy in most CRE firms is being driven by procurement and IT security, not by capability or user experience. People are voting with their feet. And the gap between what organisations sanction and what individuals actually use creates compliance, confidentiality and consistency risks that most firms aren’t equipped to manage.

Two-Thirds of the Industry Is Still Using AI Like a Search Engine

When the survey asked which advanced capabilities people actually use, 69% said standard chat and transcription. That’s it. The foundational level.

Knowledge management? 43%. Custom instructions? 36%. Deep research? 36%. Procedural training and skills? 31%. Live collaboration tools? Just 16%.

The gap between using a chatbot for ad hoc queries and embedding AI into repeatable, governed workflows is the gap between having AI and using AI. Most of the industry is on the wrong side of that line.

The dominant use cases confirm this: transcription, summarisation, drafting. The boring stuff. And 87% of respondents say the boring stuff is exactly what they want AI to handle. Which is fine - except that the boring stuff is increasingly table stakes. Transcription and summarisation are being baked into standard software. They’re hygiene factors, not differentiators.

The firms pulling ahead are doing something different: building knowledge systems, creating structured prompting frameworks, developing custom workflows, deploying AI agents. One respondent described having six operational AI agents in business workflows with twelve more in the build phase. They’re an outlier - but it’s the direction of travel.

The Trust Gradient

The survey reveals a trust hierarchy. Respondents trust AI most for summarisation (75%), meeting transcription (66%), and content drafting (59%). Trust drops as stakes rise: negotiation strategy (30%), property valuation (16%). Eleven per cent don’t trust AI with any of the listed tasks.

This is troubling. Care is certainly needed with these tools, but blanket dismissal suggests only a surface-level understanding of them. Used well, they are enormously powerful right up the intellectual value tree: in OpenAI's GDPval evaluation, frontier models outperformed or tied with human professionals on 83% of the 'knowledge work' tasks tested. The industry is stuck in the mindset that these tools are only for trivial work.

The Number That Should Keep You Up at Night

58% of respondents believe the time-based fee model is the most vulnerable part of CRE’s business model.

Read that again.

These are, in many cases, the same professionals who earn time-based fees. They are telling you - in a survey, anonymously, with no incentive to exaggerate - that the way they charge for their work is the element most at risk from AI.

AI collapses the time required for many tasks. What used to take a junior analyst three days now takes an afternoon. What used to take a partner two hours of document review takes ten minutes. The input - time - is compressing. And when clients notice (they will), they will reset expectations about what an hour of advisory time is worth.
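The arithmetic behind that compression is worth making explicit. A minimal sketch, using the task durations quoted above and an assumed billable rate (all figures illustrative, not from the survey):

```python
RATE = 150  # assumed billable rate in GBP per hour - illustrative only

# (hours before AI, hours after AI) for the two examples in the text
tasks = {
    "Junior analyst report": (3 * 8, 4),      # three days -> an afternoon
    "Partner document review": (2, 10 / 60),  # two hours -> ten minutes
}

for name, (before, after) in tasks.items():
    compression = 1 - after / before
    print(f"{name}: £{before * RATE:,.0f} -> £{after * RATE:,.0f} "
          f"of billable time ({compression:.0%} compression)")
```

At an unchanged hourly rate, the fee for the same deliverable falls by 80-90% - which is why the pressure lands on the pricing model, not on the quality of the work.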

The second most vulnerable element? The Information Edge (18%) - the proprietary market data that research-led firms have historically treated as a competitive moat. As AI gets better at synthesising public data, that moat is being drained.

Only 6% believe relationships will fully protect fee structures. They may be right in the short term. But it’s a position that gets harder to defend every quarter.

The Governance Deficit

30% of organisations have a comprehensive AI governance policy. 21% have something basic. But 16% have no policy and no plans to create one. Another 10% don’t even know whether a policy exists.

That means roughly a quarter of the industry is operating AI tools - handling sensitive commercial and financial data - with no governance framework and no apparent urgency to build one.

You can’t govern what people don’t understand. And you can’t build governance for tools that people are using in the shadows.

The shadow AI problem and the governance deficit are two sides of the same coin. Employees use personal AI accounts because the approved tools feel restrictive. Organisations lack governance because they haven’t invested in understanding what their people are actually doing with AI. Both problems trace back to the same root cause: insufficient training and strategic intent.

Training Is the Anchor on Everything

Cultural resistance and understanding gaps are the most viscerally felt barrier to adoption. As one respondent put it: people know AI is important or powerful, but a lot do not know practically what this means.

You can’t redesign workflows if people don’t understand what the tools can do. You can’t build governance if the people writing the policies have never spent serious time with the technology. You can’t adopt agentic AI if your team thinks ChatGPT is just a search engine with better manners.

Training isn’t a nice-to-have. It’s the prerequisite for everything else - governance, workflow redesign, cultural change, business model adaptation. Without it, firms are stuck in a loop: shallow adoption generating shallow results, reinforcing the perception that AI isn’t that transformative, justifying further underinvestment.

Jobs: Not Elimination, but Redesign (and fewer of them)

Despite low levels of serious adoption, and a rather dismissive attitude towards AI (especially the belief that it is for the dreary work), it is striking that respondents expected every one of the nine CRE disciplines surveyed to see a decline in jobs.

Research (63% expect a decrease) and property management (61%) are seen as most exposed. Facilities management is seen as least exposed - but even there, over 40% expected a decrease in jobs.

It would be interesting to see the results if every respondent knew what the state-of-the-art models are actually capable of.

The qualitative responses, though, add nuance. Several respondents emphasised that it's less about headcount reduction and more about role transformation - fewer administrative tasks, fewer junior information-compiling duties, greater emphasis on judgement, stakeholder engagement and oversight.

One respondent offered a framing reflected across many industries: lower-rank roles like coordinators and analyst support get compressed, but demand for reviewers, approvers and humans-in-the-loop remains strong.

But… the assumption that senior or specialist roles are insulated is, I think, misplaced. As AI becomes more capable of analysis, drafting and knowledge retrieval, all roles shift. The exposure isn’t to the role - it’s to the person who doesn’t adapt their workflow.

Space: Reconfiguration, Not Collapse

On the question of AI’s impact on physical space, the majority view (55%) is reconfiguration: same amount of space, different use. 27% expect contraction. 13% see negligible impact this cycle. Only 5% expect expansion.

This aligns with the thesis I’ve been developing in recent newsletters. The question isn’t simply “will AI reduce headcount and therefore reduce space demand?” The answer depends on a cascade of strategic choices about how firms deploy productivity gains. The same efficiency assumptions can lead to radically different space outcomes depending on whether firms shrink, reinvest, or restructure.

The reconfiguration thesis - a shift from desks to collaboration and experience environments - suggests AI is seen as an accelerant of existing post-pandemic patterns rather than a wholly new force. That’s probably right for the next two to three years, but beyond that we’ll see far more direct impact on demand.

The Divergence

Here’s what I take away from three years of this survey.

The industry has moved from curiosity to experimentation to widespread but shallow usage. Adoption is high; maturity is not. And the gap between the firms that are serious about AI and the firms that merely have access to it is widening.

One respondent captured the stakes perfectly: "We are overestimating the impact of AI for the next three years - but it will have a devastating impact on those organisations that haven't worked out how to get, store and use their IP in five years. It will be like bankruptcy - you go bankrupt slowly and then fast."

The technology is accelerating exponentially. The industry is moving linearly. That mismatch has consequences.

A few firms are building AI agents, custom knowledge systems, and structured prompting frameworks. They are pulling away from the pack.

The rest? Widespread adoption, shallow usage. Lots of tools, not enough thinking.

Time to get serious.

Read More
Antony Slumbers

AI & Real Estate: Beyond Generative

The direction is obvious. The speed is what breaks you.

The latest module for #GenerativeAIforRealEstatePeople runs to 9,000 words. This is the core argument, compressed. Course participants get the full version with the scenario frameworks, asset-class analysis, operational horizons, and the uncomfortable questions at the end.

The thesis in one sentence

The direction of AI in real estate is largely knowable and mostly uncontroversial. The variable that determines whether your assets appreciate or depreciate is speed of arrival - and the industry has no framework for assessing it.

Why speed, not direction
In 2023, esteemed AI researcher Andrej Karpathy published a diagram showing an LLM as the kernel of a new operating system. Most people read it as a five-year forecast. It arrived in roughly eighteen months.

If the market has 10–15 years to adjust to AI-driven change, outcomes are manageable: portfolios rotate, buildings retrofit, organisations retrain, capital markets reprice gradually.

Compress the same changes into 3–7 years and you get stranded assets, workforce dislocation, and the kind of repricing that destroys value for anyone who moved too slowly.

The course module introduces a predictive lens for this: feedback loops. Domains with tight feedback loops - where you can quickly and objectively measure whether the AI’s output is correct - transform fast. HVAC optimisation, lease abstraction, code generation: these are already being reshaped. Domains with diffuse feedback loops - investment judgement, tenant relationship management, urban planning - change more slowly and less predictably.

For CRE professionals, this creates a split reality. The operational and transactional layers sit in the tight-feedback category. The strategic and relational layers sit in the diffuse category. But the operational transformation inevitably changes the context in which strategic decisions are made. You cannot hold the two apart for long.

What has changed since the first version of this module?
Three of the four original barriers to AI adoption have shifted:

The cost barrier has flipped. Individual AI inference is now negligible. But deploying AI at enterprise scale - the data architecture, integration middleware, cybersecurity, governance - is a serious capital commitment. The firms that understood this early allocated 20% of their AI budgets to technology and 80% to change management. Most did the opposite. Some are still debating whether to start.

Data quality matters differently. Frontier models handle messy, unstructured data remarkably well. The constraint has shifted from "our data isn't clean enough for AI" to "our data isn't structured enough for assurable AI." AI can work with your data. The question is whether the outputs are auditable enough for board-level decisions and transaction sign-offs.

Agentic AI changes the integration calculus. The industry’s systemic fragmentation remains real. But AI agents that autonomously navigate between software platforms, APIs, and data sources mean the integration problem is becoming solvable without requiring the entire ecosystem to adopt common standards. This changes the cost-benefit calculation for early movers - which is exactly the dynamic that creates competitive advantage.

What has not changed: deep institutional inertia, fragmented ownership, long asset cycles, misaligned incentives. The destination is locked in; the uncertainty is pace.

Two forces, one asset

The module’s central analytical framework distinguishes two ways AI affects commercial real estate - and then immediately insists you cannot treat them separately.

AI acting ON real estate: new demand patterns, new asset classes, structural shifts in what gets built and where. A capital allocation question. Speed is outside your control.

AI used BY real estate: how existing assets are operated, transacted, valued, and experienced. An operational alpha question. Speed is a competitive variable.

The critical insight is that these are not separate categories to sort assets into. They are competing pressures acting on the same building simultaneously. A prime office is shaped by AI acting on it (changing occupier headcount, raising specification expectations) and by AI used within it (smart building systems, personalised tenant experience, automated FM). The investment outcome depends on whether the owner is deploying AI fast enough to offset the structural pressures AI is creating.

Need declines. Want is what remains.

The deepest argument in the module - the one that sits beneath every section - is a shift in the nature of real estate demand itself.

If AI absorbs the routine knowledge work that currently fills offices, the purpose of the office shifts toward collaboration, mentoring, culture-building, creative work. The same logic applies to retail (experiential over transactional), healthcare (patient-facing over administrative), and any asset class where the human element is the value proposition.

The fundamental question shifts from need to want. And that changes everything about underwriting.

Need-driven demand is predictable, stable, and relatively insensitive to quality. Choice-driven demand is volatile, quality-sensitive, and ruthlessly comparative. An office is no longer competing only against other offices. It is competing against the increasingly viable alternative of not having a central office at all. When the benchmark shifts from “the next best office” to “not having an office,” the bar for what constitutes a compelling offer rises dramatically.

This is uncomfortable for owners of average stock. But it contains a genuine opportunity: you can still win big in a shrinking market if you are building something people actively desire. The polarisation between best-in-class and everything else widens. Quality becomes not just a return enhancer but downside protection.

The asset-class implications (in brief)

The full module works through offices, data centres, logistics, healthcare, retail, and four residential sub-sectors. A few highlights:

Offices get a four-scenario framework crossing output growth with AI productivity. The same AI capability can mean 22% less space or 43% more, depending on whether the occupier is growing. No macro model tells you which scenario you’re in. Only your occupier’s specific context does.
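The mechanics of that framework can be sketched in a few lines. Assume space demand tracks headcount, and headcount is output divided by productivity per head; the growth and productivity inputs below are hypothetical figures chosen to land near the quoted endpoints, not the module's own assumptions:

```python
def space_demand_change(output_growth: float, ai_productivity_gain: float) -> float:
    """Fractional change in space demand, assuming space tracks headcount
    and headcount = output / productivity-per-head."""
    return (1 + output_growth) / (1 + ai_productivity_gain) - 1

# Hypothetical scenario inputs: (output growth, AI productivity gain)
scenarios = {
    "Flat output, strong AI gains":   (0.00, 0.28),
    "Modest growth, strong AI gains": (0.15, 0.28),
    "Strong growth, modest AI gains": (0.30, 0.10),
    "Strong growth, strong AI gains": (0.83, 0.28),
}

for name, (growth, productivity) in scenarios.items():
    print(f"{name}: {space_demand_change(growth, productivity):+.0%} space")
```

The same 28% productivity gain produces roughly 22% less space or 43% more depending entirely on whether output is growing - which is the module's point: no macro model resolves this, only the occupier's specific context does.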

Retail confronts agentic commerce - not just e-commerce, but AI agents that research, compare, and purchase on your behalf. This is more destructive to average retail than e-commerce was, because it removes the last functional justification for visiting a mediocre destination. But it simultaneously purifies the demand signal: everyone who walks through the door is there because they want to be.

Logistics gets an agentic commerce multiplier: even if consumer spending stays flat, the volume of goods flowing through fulfilment channels likely increases because agents eliminate the friction that currently constrains transaction volume. The module is honest about the returns question - net effect on return volumes is genuinely uncertain - but the specification requirements for logistics facilities are rising regardless.

Later living has arguably the most compelling operational AI case of any residential segment, because the applications are care-critical, not merely commercial. Health monitoring, predictive intervention, fall detection, loneliness mitigation. AI does not just improve efficiency; it changes the care proposition.

The geography argument

AI may be doing something more subtle than the pandemic's failed decentralisation prediction. The pandemic proved remote work was technically possible. AI is making it technically excellent. The justification for being in a specific place shifts from "I need to be here to do my job" to "I choose to be here because the environment is worth the commute."

The e-commerce parallel is instructive. Online retail didn’t uniformly undermine physical retail. It reshaped its geography - devastating locations that competed on convenience while boosting locations that competed on experience. AI may do the same to office geography: devastating locations that competed on proximity to work while boosting locations that compete on quality of place.

The operational horizon

The module includes a compressed reference map of twelve capabilities across three time horizons (now–2yr, 2–5yr, 5–10yr), classified by feedback-loop type. The tight-feedback, near-term capabilities - HVAC optimisation with sub-one-year payback, AI lease extraction, occupancy intelligence, agentic FM triage - are proven, measurable, and deployable now.

And this produces the module's sharpest irony: the industry spends most of its analytical energy debating the things it cannot control (will AI reduce office demand by 10% or 20%?) and has not yet deployed the things it can. The storm is uncertain. The ship-building is not.

What replaces confident forward projection?

The module argues that the pretence of confident 3–5 year NOI projection is becoming actively misleading when the variance of plausible outcomes has widened this dramatically. What replaces it:

Scenario discipline over point forecasts. Not new, but the variables that matter have changed. AI productivity and occupier growth posture are not factors that appeared in traditional CRE stress-testing.

Optionality over commitment. When the range of outcomes is wide, assets and structures that preserve flexibility are worth a premium. The traditional preference for long leases and stable income is a bet on low variance. If AI delivers high variance, that bet loses.

Occupier intelligence over market-level data. The edge moves to granular understanding of individual tenants - their growth trajectory, their AI posture, their industry dynamics.

Quality as insurance. Buildings people choose to occupy are resilient across more scenarios than buildings people are forced to occupy.

Supply constraints and market inertia are not a strategy. They are a sedative.


Read More
Antony Slumbers

Where’s the New Business?

A framework for finding where AI creates genuinely new ventures - not just better processes

Over the past three months we’ve built a collection of frameworks for addressing how to leverage AI in CRE: RIRA for strategy, the CRE Automation Matrix for analysis, and the Prompting Framework for execution. Together they answer: how are we creating value, what kind of work is this, and how do we get it done?

But there’s a question they don’t force you to confront.

When participants work through RIRA’s ‘Imagine’ phase - with its three horizons of Efficiency (H1), Capability (H2), and Transformation (H3) - they almost universally gravitate towards H1 and H2. Faster lease abstracts. Better evidence-linked analysis. Smarter compliance checks. All valuable. All necessary.

But none of them are new businesses.

H1 makes your current work faster. H2 makes it better. But H3 builds something that doesn’t exist yet.

And almost nobody gets to H3, not because they lack ambition, but because H1 and H2 are natural extensions of work you already understand. You can picture your workflows done more efficiently. You can picture them done with better evidence. But picturing a product or service that serves a customer who doesn’t yet know they need it, that earns revenue from a budget line that doesn’t currently exist - that requires a fundamentally different kind of thinking. It’s uncomfortable. It implicates your own business model.

Which is exactly why you need a provocation tool to get there.

The H3 Provocation Framework
The H3 Provocation Framework is a set of five questions designed to surface where AI creates genuinely new businesses, products, or market structures in commercial real estate. It sits within RIRA's Imagine phase as a dedicated instrument for Horizon 3 thinking. It doesn't replace the "Faster taxis / Better taxis / Uber?" diagnostic; rather, it provides the provocations that generate H3 hypotheses in the first place.

WHAT H3s LOOK LIKE IN CRE

Before I describe the five provocations, let me show you what they can produce. I’ve run the framework against four of the most common products and services in commercial real estate - diligence, advisory, occupancy, and portfolio management - to illustrate the kinds of hypotheses that emerge. These aren’t predictions. They’re outputs of a generative process, and the same process can be pointed at any activity across the CRE landscape.

Continuous Diligence.
Due diligence exists as an industry because maintaining a verified, current information state across hundreds of documents requires sustained human attention over weeks. That’s a cognitive and temporal constraint. If AI removes it - keeping an asset’s entire information state continuously current and verified - the question “shall we commission a diligence report?” stops making sense. You don’t commission what already exists. The value migrates from “who can do diligence well” (a cottage industry of lawyers, surveyors, and consultants charging £50–100k per transaction) to “who maintains the verified state.” That’s a data infrastructure play, not a professional services play. Transaction speed becomes a product. The seller offering a continuously-diligenced asset with verified evidence chains commands a premium because they’ve collapsed the buyer’s time and risk.

Evidence-Based Advisory.
AI is about to flood the market with plausible-sounding investment analysis. The supply of professional-looking memos will explode. What won’t explode is the supply of trustworthy analysis - recommendations backed by evidence chains, auditable reasoning, and explicit assumption registers. The value migrates from “trust me, I’m experienced” to “trust the evidence chain, and I’ll interpret what it means.” The human adviser’s role shifts from analyst-packager to interpreter of verified outputs and owner of the final judgement call - pure Quadrant D work (see The CRE Automation Matrix Framework for details). The advisory firm that builds verifiable decision support first doesn’t just have a better product - they’ve made every competitor’s narrative-based approach look unaccountable by comparison.

Occupier-as-a-Service.
What if occupiers didn’t buy space but bought outcomes - guaranteed workspace performance delivered against SLAs rather than lease terms? Air quality, temperature responsiveness, service levels - all continuously monitored, verified, and priced on delivery. The purist version breaks the institutional capital stack (lenders can’t underwrite an SLA the way they underwrite a 15-year FRI lease). But the realistic transition is a hybrid: a conventional lease providing the contracted income floor, with a verified performance premium on top. The hotel sector already proved this model - variable income becomes investable once you have enough verified performance data to make it predictable. PBSA and BTR are moving the same way. Over time, as the data layer matures and investors learn to underwrite operational capability rather than just tenant covenant, the proportions shift. Value migrates from “location plus specification” to “verified performance delivery.”

Portfolio Intelligence as Product.
Large portfolio owners sit on enormous operational data - tenant behaviour, maintenance patterns, energy consumption, lease events - used solely for internal management. If the intelligence derived from that data became a product - benchmarking services, predictive models, optimisation algorithms - you’ve created a revenue stream that doesn’t currently exist in CRE’s business model. The moat is proprietary data combined with verification infrastructure that’s hard to replicate. The REIT doesn’t just earn rental income - it earns knowledge income. And unlike rental income, knowledge products compound: the more data you accumulate, the better the models become, the more customers they attract.

The Common Pattern
Notice what these four share. None of them improve an existing workflow. They each create something that didn’t previously exist as a product or service. They each involve a value migration - a shift in where the premium concentrates. And they each emerged from asking a specific, slightly uncomfortable question about the current state of things.

I chose diligence, advisory, occupancy, and portfolio management because they’re familiar to almost everyone in the industry. But the framework isn’t limited to these. Point it at development appraisal, debt origination, tenant representation, facilities management, fund reporting - any activity where AI is about to change the underlying economics - and it will surface H3 candidates specific to that domain. The five provocations are a lens, not a list.

One more thing worth saying plainly: the timelines on these are uncertain. Continuous diligence may be five years away from mainstream adoption, or three, or seven. The point of the framework isn’t to predict when. It’s to discern the direction of travel - to see where value is migrating so you can start positioning now rather than reacting later. The firms that recognised flexible workspace was a structural shift, not a fad, had years of advantage over those that waited for proof. The same dynamic applies here, across a far wider set of CRE activities.

Those questions are the framework.

THE FIVE PROVOCATIONS

The five provocations follow a narrative arc:
What becomes free →
What falls apart →
What gets built →
Who wins →
Who pays.


Each builds on the last, and the sequence matters.

1. Constraint Collapse - “What becomes free - and whose business breaks?”
Every workflow has a binding constraint - not always cost. Sometimes it’s time, cognitive bandwidth, scale, or access. AI is about to remove some of these entirely. The question isn’t “what gets cheaper?” It’s “whose revenue depends on this constraint existing?” If the answer includes your firm, this is where you need to be paying attention. The continuous diligence hypothesis emerged directly from this question: the constraint wasn’t that diligence was expensive, it was that maintaining a continuously verified information state was operationally impossible regardless of how much you spent. Remove that constraint and the entire episodic diligence model becomes a solution to a problem that no longer exists.

2. The Unbundling - “What are you actually selling - and which part is about to become worthless?”
Every service you charge for is a bundle of components, and you’ve probably never itemised them because the bundle is just “what we do.” AI will replicate some of those components to a verifiable standard. The ones it can’t replicate are where your future premium concentrates. The evidence-based advisory hypothesis came from this: an investment advisory mandate bundles market knowledge, analytical packaging, relationship access, and strategic judgement into a single percentage fee. AI commoditises the first two. The question is whether clients keep paying the same fee for the last two - or start buying judgement separately, possibly from someone who was never an “advisor” before but who now owns the best evidence infrastructure.

3. The New Entrant - “If someone started from scratch today, would they build what you’ve got?”
A well-funded team with no legacy systems, no existing relationships, but full access to frontier AI enters your market. They don’t need to respect how things currently work. What do they build? Who do they sell to? And the question that should keep you up at night: why can’t you build it first? Usually the honest answer is “because our current business gets in the way” - and that’s exactly the answer that should worry you most. The occupier-as-a-service hypothesis is a new entrant question: if you were starting a property management business today, would you build what property management currently looks like? Or would you build an AI-orchestrated operating platform with continuous monitoring, predictive maintenance, and verified performance delivery - that happens to manage buildings?

4. The Control Point - “After this shift, whose signature still matters?”
Value in CRE concentrates around accountability - whoever signs the recommendation, the valuation, the approval. Their signature carries weight because they’re standing behind judgements that can’t easily be verified any other way. When AI changes what can be verified and evidenced, that signature may carry less weight - or shift to someone else entirely. Whoever holds the accountability after the migration holds the value. Consider: a Red Book valuation requires a RICS-qualified surveyor’s signature because comparable selection, adjustment logic, and market judgement need a professional to stand behind them. If AI produces the analysis with full evidence chains, auditable reasoning, and verified comparables, the surveyor’s role shifts from “produce the valuation” to “validate the machine’s output.” That’s a different job, with different economics - and it may not need the same provider.

5. The Customer - “Who has this problem right now - and what are they currently paying to solve it badly?”
The most transformative new businesses don’t create demand from nothing. They serve demand that’s already there but currently met by expensive, slow, or inadequate solutions. The test isn’t “would someone hypothetically pay for this?” It’s “who’s already spending money or losing money because this doesn’t exist yet?” Every institutional buyer who has ever lost a competitive bid because their diligence took two weeks longer than the other side’s is paying the cost of the continuous diligence gap - they’re just paying it in lost deals rather than invoices. Every tenant who signed a lease based on a glossy brochure and discovered the building doesn’t perform is paying the cost of the missing performance verification layer. The customer already exists. They just don’t know yet that what you’re building is the solution.

HOW TO USE THIS

Work through the five provocations in sequence after a standard RIRA pass has produced its H1 and H2 outputs for a specific workflow. Those outputs become the foundation - you’re pushing further, not replacing them. Give each question time to breathe. The uncomfortable answers - the ones that implicate your own firm’s business model - are the ones worth pursuing.

Then classify what emerges. Using AI to produce lease abstracts faster is an H1. Building a lease abstraction system with evidence chains as a firm standard is an H2. Creating a continuously verified portfolio information state that becomes a product you sell to buyers, lenders, and insurers - that’s an H3. The distinction matters because H2s improve your current business while H3s build a new one. Both are valuable. Confusing them leads to underinvestment in the H3 and over-claiming on the H2.

Most ideas will turn out to be H2s. That’s fine. That’s expected. The framework has teeth precisely because it filters honestly. But the one or two ideas that survive all five provocations and still look like genuine H3s - those are worth serious investment of time and thought.

THE COMPLETE STACK

This completes RIRA. The framework has always had three horizons in its Imagine phase, but until now H3 has been a diagnostic label - something you aspire to - rather than a generative method. The five provocations give it teeth. They turn “are we building Uber or a faster taxi?” from a question you ask at the end into a structured process you work through to find out.

The stack remains three frameworks, each answering a different question. RIRA asks how we’re creating value - and now has the tools to push that question all the way to transformation, not just efficiency and capability. The Automation Matrix asks what kind of work this is and how it should be automated. The Prompting Framework asks how we actually get work done. Strategy, analysis, execution.

If your AI strategy starts and ends with doing current work faster, you’re playing an H1 game that everybody else will also be playing within two years. The firms that engineered verifiability into their cognitive work will be pulling ahead. And the firms that ran these provocations honestly, saw the value migration early, and started building - they’ll be playing a different game entirely.

The provocations don’t tell you which game to play. They help you see the games that are available.

Run the five provocations on your most profitable workflow - and note which answer makes you uncomfortable.

Read More