THE BLOG
The PropTech Apocalypse
And why you need RIRA
Summary: Over $300 billion in software market value was erased in a single day. This piece explains why - and what it means for commercial real estate. Using the RIRA framework applied to an investment deal workflow, I show how AI is swapping old constraints for new ones, where value is migrating, and why the firms that own the data layer and the execution layer will capture the next cycle’s profits. It’s a long read. It’s meant to be.
Over the last three weeks I’ve introduced three frameworks - the CRE Automation Matrix, Prompting for CRE Professionals and the RIRA (Release, Imagine, Redesign, Actualise) Framework itself. This week I want to show you RIRA in action, as an example of how it can be incorporated throughout your business.
But first, let’s talk about what happened on Wall Street at the start of February.
$300 Billion Erased
February opened with a software massacre.
On Tuesday the 3rd, according to Forbes, over $300 billion in market value was erased from the software sector in one trading day:
“Several of the most entrenched enterprise software companies fell sharply in a single day. Salesforce, ServiceNow, Adobe, and Workday each dropped around 7 percent. Intuit fell nearly 11 percent. At the same time, valuation multiples across the sector compressed violently. The average forward earnings multiple for software companies collapsed from roughly 39x to about 21x in just a few months.”
Wall Street called it the “SaaS-pocalypse.”
This was less a random correction and more a fundamental repricing. The market is waking up to something the venture capital world has been whispering about for months: the software model that dominated enterprise technology for two decades is structurally breaking.
The Thesis: Why Software is Being Hollowed Out
The argument is as follows, and it maps onto commercial real estate with uncomfortable precision.
For twenty years, businesses bought software to help humans organise their work. Dashboards. To-do lists. Complex menus. You paid for a “seat” - a licence for your employee to log into a system. The software didn’t actually do anything. It was a digital filing cabinet. Your employee still had to log in, click buttons, enter data, send emails.
You paid twice. Once for the software. Once for the salary of the person operating it.
The thesis is straightforward: if AI can now execute tasks, paying for software that merely organises human tasks is becoming a bad investment. The industry is converging on this from multiple directions - Phil Fersht at HFS Research calls it “Services-as-Software,” a16z frames it as “software eating labour,” I’ve described it as “Outcomes-as-a-Service.” The labels differ but the signal is the same: not tools that assist the user, but systems that replace the user’s labour. Not dashboards that show you what needs doing, but agents that do it.
This creates what I’ve been calling the “barbell effect.” If the middle layer of software - the screens humans look at to manage tasks - is being hollowed out, value migrates to two poles. At one end: the data layer. AI agents cannot function without accurate, structured, proprietary data. Your lease data, payment history, building performance records - these become more valuable, not less. At the other end: the agent layer. Systems that actually execute work - negotiate a lease, dispatch a vendor, rebalance an HVAC system, screen an acquisition.
The middle - standalone CRMs, basic dashboards, static reporting tools, “thin wrapper” applications that put a digital skin on a manual process - gets crushed.
Clayton Christensen’s ‘conservation of attractive profits’ is the key: when a layer commoditises, profits don’t die - they move to adjacent layers where differentiation still matters.
What looked like “$300 billion erased” is the market repricing where future margins will sit: away from software that organises human work, and towards the layers that own the data and execute outcomes.
The question for every CRE professional is: which side of the barbell are you building on?
The question for every PropTech company is: are we facing our own apocalypse, and what can we do about it?
This is where RIRA comes in. RIRA is a strategy-to-execution framework for rebundling value creation in commercial real estate under conditions where AI materially changes the unit economics of analysis and content production. It helps you develop new products and services that leverage AI rather than being crushed by it.
RIRA in Action: The Investment Deal
To demonstrate the process of using RIRA I am going to use the example of ‘The Investment Deal’. How might the way we deal with deals develop in a heavily AI-mediated world?
RIRA, which stands for ‘Release-Imagine-Redesign-Actualise’, is a way of looking at a workflow through four lenses, where each lens depends on the previous one, and the power comes from following the thread all the way through.
I’ve chosen an investment and capital markets workflow because the constraint-swaps at each stage are the most vivid, the evidence is the most concrete, and the implications are the most strategically significant. At the end, I’ll sketch how the same logic applies everywhere else.
Release: The Analyst Bottleneck Dissolves - and a New One Appears
The Release lens is about what constraints AI removes, and which new constraints it introduces.
Every acquisitions professional knows this workflow intimately. An Offering Memorandum arrives. A junior analyst extracts data from the PDF. Types it into Excel. Massages the assumptions. Builds an ARGUS model. Runs comps. Drafts an Investment Committee memo. Repeat.
A typical CRE acquisition consumes 30–50 analyst hours on data extraction alone. Deal volume is capped by how many OMs a human team can physically process. Firms miss opportunities not because the deals aren’t there, but because they lack the manpower to review them.
AI tools are already releasing this constraint. Vendors report tools that generate full five-year cash flow models and IRR analyses in under five minutes with (allegedly) 99%+ extraction accuracy, and others that claim 95% faster approvals with 400% better risk detection than human reviewers. Firms using these tools report handling 3–4× more deals with the same staff. The precise numbers warrant scrutiny - vendor benchmarks rarely survive contact with messy, real-world portfolios - but the direction of travel is clear.
But, and this is the core insight of the Release stage, AI never just removes a constraint. It swaps it for a new one.
The old bottleneck was throughput: we couldn’t screen enough deals. The new bottleneck is explainability. An Investment Committee cannot approve a £100 million acquisition based on “trust me” from a model. Dealpath, in a recent survey, reports that 93% of institutional firms cite significant AI adoption barriers, with 43% lacking internal expertise. The binding constraint shifts from speed to trust. If the AI cannot show its working - if the outputs aren’t auditable, traceable, and defensible - the speed is worthless.
Data quality becomes an equally hard constraint. If the underlying data is fragmented or dirty, the AI will confidently produce bad returns. “Clean data” becomes the new prerequisite for speed.
Diagnostic question: If we could screen ten times as many opportunities tomorrow, what breaks first - governance, explainability, data quality, decision rights, or IC bandwidth? That’s the new binding constraint, and it’s where execution will stall unless you redesign around it.
Imagine: The 24/7 Deal Hunter
The Imagine lens is about what becomes possible, given the new technologies at your disposal.
If the marginal cost of intelligence drops to near zero, what becomes possible?
Work across three horizons: H1 asks what becomes more efficient, H2 asks what new capabilities we can develop, and H3 asks what transformation is now possible.
Think taxis - they could be faster, or maybe they could accept credit cards or… well they could become Uber.
Imagine an investment firm where no human performs data extraction. A sourcing agent continuously monitors thousands of inputs - public listings, court filings for distress, planning/zoning changes, demographic shifts, permit applications. It doesn’t just search. It reasons. It holds the firm’s specific investment thesis - “value-add multifamily in growth markets with strong transport links” - and when it identifies a potential asset, it performs a preliminary underwrite. It ingests the rent roll regardless of format, maps it to the firm’s chart of accounts, pulls real-time tax and insurance comps, and runs a ten-year DCF. The human principal receives a deal memo only for assets that clear the IRR hurdle.
If we could combine market intelligence, debt capacity analysis, and portfolio fit scoring in a single system, and run it autonomously, 24/7/365, we’d have enabled something pretty special. Are we there yet? No, but the likes of JLL (building on the Skyline AI technology they acquired a few years ago) and Cherre, building on their years of data wrangling, are getting there.
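To make the preliminary-underwrite step concrete, here is a minimal Python sketch of an IRR-hurdle screen of the kind described above. It assumes a simplified profile - flat NOI over the hold, a single exit value - and solves for IRR by bisection. The function names and inputs are illustrative, not any vendor’s actual method.

```python
def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-7):
    """Solve for the discount rate where NPV = 0, by bisection.

    Assumes a conventional profile (one outflow, then inflows),
    so NPV is decreasing in the rate and the root is unique.
    """
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid  # root lies above mid
        else:
            hi = mid  # root lies at or below mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2


def screen_deal(purchase_price, annual_noi, exit_value,
                hurdle=0.12, hold_years=10):
    """Preliminary underwrite: surface the deal only if IRR clears the hurdle."""
    # Year 0 outflow, flat NOI, exit proceeds added in the final year.
    flows = ([-purchase_price]
             + [annual_noi] * (hold_years - 1)
             + [annual_noi + exit_value])
    deal_irr = irr(flows)
    return deal_irr, deal_irr >= hurdle
```

A sourcing agent would run something like this over every candidate asset it identifies, and only deals where the second return value is true would ever reach the human principal.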
The critical distinction: this isn’t “a better dashboard.” It’s the elimination of the entire analyst-receives-OM → manual-data-entry → build-ARGUS-model → manual-comp-research → IC-memo workflow. The human enters at the judgement stage, not the extraction stage. The role of the analyst doesn’t disappear - it transforms from “data processor” to “decision validator.”
Design question: If the marginal cost (and time) of analysis and execution tends toward zero, what should we now do that we currently don’t - because it was previously uneconomic or too slow? Where should the system run 24/7, and where should humans enter purely for judgement, not extraction?
Redesign: Dismantling the Excel Industrial Complex
The Redesign lens is about building an evidence-based architecture for new workflows.
The Imagine stage showed us the destination. The Redesign stage is the engineering - how do we physically rebuild our data and workflows to get there?
The core problem has a name that every CRE professional will recognise: the Excel Industrial Complex.
Today, an investment model is a spreadsheet. The moment that file is saved, it is dead. It is a snapshot in time that does not update when interest rates change, a tenant vacates, or the market moves. When you want to test a new scenario, someone manually changes the inputs and saves a new version. Knowledge lives in filenames: “Model_v7_FINAL_FINAL_JB_edits.xlsx.”
The Redesign phase replaces this with what I’d call a Live Financial Twin - an investment model that is a living software object connected to the market.
Underneath that sits the real asset: the canonical deal graph - a structured schema that captures asset terms, counterparties, risks, covenants, and underwriting assumptions as first-class objects, not cells in a spreadsheet. Crucially, it preserves model lineage: every assumption and output is traceable back to source evidence (the specific clause, rent roll line, comp, rate series, or email confirmation) with an auditable chain of custody.
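As a sketch of what “assumptions as first-class objects with lineage” could look like in practice - the class names and fields here are hypothetical, purely to illustrate the pattern of every value carrying its source evidence and revision history:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SourceEvidence:
    """Where a number came from: the document, the location, the method."""
    document: str      # e.g. "rent_roll_2025Q3.pdf"
    locator: str       # page, clause, or row reference
    extracted_by: str  # "ocr", "analyst", "api"

@dataclass
class Assumption:
    """An underwriting assumption as a first-class object, not a cell."""
    name: str
    value: float
    unit: str
    evidence: SourceEvidence
    history: list = field(default_factory=list)

    def revise(self, new_value, new_evidence):
        # Preserve lineage: every change records the prior value and source,
        # giving an auditable chain of custody for each assumption.
        self.history.append((self.value, self.evidence))
        self.value = new_value
        self.evidence = new_evidence

@dataclass
class DealNode:
    """A node in the deal graph: an asset plus its traceable assumptions."""
    asset_id: str
    assumptions: dict  # assumption name -> Assumption
```

The point is structural: when an IC member asks “where did that rent figure come from?”, the answer is a lookup, not an archaeology project through spreadsheet versions.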
This is where you’d map the workflow against the CRE Automation Matrix — identifying which components are candidates for full automation, which need human-in-the-loop gates, and which remain irreducibly human judgement.
The architecture would have three layers. First, an ingestion layer: OCR and NLP tools ingest documents - rent rolls, tax bills, service charge packs - and convert them into structured data objects, not spreadsheets. Second, live connections: the model is connected via API to real-time market feeds. If the ten-year gilt yield moves, the model automatically updates the cost of debt and recalculates the IRR. You don’t wait for an analyst to notice. Third, write-back capability: when a decision is made - “approve budget,” “proceed to exclusivity” - it doesn’t just sit in meeting minutes. It writes back to the accounting system and updates the forecast automatically.
But notice the new constraint this introduces. If the model is live and automated, you need verification architecture that didn’t previously exist. You need what some are calling “Auditor Agents” - secondary models whose sole job is cross-checking the maths and logic of the primary underwriting agent. You need human-in-the-loop gates at specific decision points. You need an audit trail that can satisfy both your IC and your regulators.
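One simple form the auditor-agent idea can take - this is an illustrative sketch, not a real product’s API - is an independent recomputation check: if the primary agent’s reported IRR is correct, the NPV of the cash flows discounted at that rate should be approximately zero, and anything else escalates to a human.

```python
def audit_underwrite(cash_flows, reported_irr, tolerance=1e-4):
    """Auditor-agent sketch: independently verify the primary model's IRR.

    Recomputes NPV at the reported IRR from the raw cash flows; a correct
    IRR yields an NPV of ~0 relative to the size of the deal.
    """
    npv = sum(cf / (1 + reported_irr) ** t
              for t, cf in enumerate(cash_flows))
    scale = sum(abs(cf) for cf in cash_flows)  # deal size, for a relative test
    ok = abs(npv) < tolerance * scale
    return {
        "npv_at_reported_irr": npv,
        "passes": ok,
        "action": "approve" if ok else "escalate to human review",
    }
```

The design choice matters: the auditor does not trust the primary agent’s workings, only its raw inputs, which is what makes the check independent rather than circular.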
This is where the barbell becomes concrete. The value accrues to whoever owns the “canonical deal graph” - the structured, permissioned, auditable data layer - and its agent interfaces. Not to whoever has the prettiest dashboard.
Engineering question: Where are we relying on humans to bridge systems - and what would it take to replace those bridges with a canonical deal graph and auditable model lineage tied to source evidence?
Actualise: From Effort to Edge
The Actualise lens is about embedding and capturing value - turning capability into competitive advantage and revenue.
The first three stages created capability. The Actualise stage answers the question that matters most: how do we capture the value, and how do we stop it leaking to competitors?
Start with what’s changed about what you’re selling. The traditional investment advisory model charges for effort - analyst hours, transaction fees, AUM-based management fees that are proxies for headcount. But if the workflow you’ve redesigned can screen ten times the deals with the same team, the effort-based model actively penalises you for being good at this. You did the work in three minutes; you can’t credibly bill for thirty hours.
The business model has to shift from charging for effort to charging for outcomes: time-to-decision, risk-adjusted conviction, deal throughput, yield improvement. This isn’t a cosmetic change. It requires rethinking fee structures, incentive alignment, and how you demonstrate value to LPs and Investment Committees. But it’s also where the margin expansion lives. The firms that cling to effort-based billing in an AI-accelerated world are subsidising their competitors’ transition.
Next, recognise where the moats actually form. If you’ve built the architecture described in the Redesign stage - the canonical deal graph, live financial models, auditable lineage - you now own something that compounds. Every deal screened enriches the dataset. Every assumption validated sharpens the model. Every IC decision recorded becomes training signal. A competitor starting today doesn’t have your technology and they lack your years of structured deal intelligence. This is the data flywheel, and it is the deepest source of defensibility in the new landscape. It is conservation of attractive profits made concrete: the proprietary data layer becomes the locus of value.
Finally, look for the profit pools you didn’t plan for. Well-architected data infrastructure generates optionality. A firm that built audit-grade, real-time performance data for underwriting purposes discovers it can also price transition risk more accurately than peers, or license anonymised benchmarking data, or offer LPs portfolio-level transparency that compresses their cost of capital. These adjacent opportunities weren’t the original design objective but they are emergent properties of having built the data layer properly. The firms that capture them are the ones that recognised their data infrastructure as a platform, not just a workflow tool.
Business model question: Where are we still charging for effort (hours, seats, process friction) when the value we deliver is actually an outcome (risk reduced, yield improved, time-to-decision shortened)? And which datasets are we generating today - currently treated as operational exhaust - that could become compounding advantage if captured, structured, and reused?
RIRA AS FLYWHEEL
I’ve presented these stages sequentially because that’s how writing works - you read one section after another. But RIRA isn’t a linear process. In practice, you’ll find that the Redesign stage surfaces constraints you didn’t identify in Release. The Actualise stage reveals profit pools that send you back to Imagine with new questions. The framework is iterative by design — each pass through it sharpens your understanding of where value is forming and where it’s migrating. The power isn’t in completing four stages; it’s in the quality of the loops between them.
For instance, when you reach the Redesign stage and discover that your data architecture requires auditor agents and human-in-the-loop gates, that’s a new constraint — which sends you back to Release with a sharper question. And when Actualise reveals that your underwriting data could be licensed as a benchmarking product, that’s a new possibility — which sends you back to Imagine with a business model you hadn’t considered. The framework tightens with each pass.
And because the framework adapts to whatever constraints a given domain surfaces, the same iterative logic applies well beyond investment deals.
The RIRA Pattern Holds Everywhere
I chose the investment deal because the constraint-swaps are vivid, the evidence base is deep, and the stakes are high enough to hold your attention. But the logic isn’t specific to capital markets. It’s structural.
Every CRE workflow has a binding constraint that AI is about to release. Every released constraint produces a new one that nobody was planning for. Every new possibility demands redesigned architecture. And every redesigned architecture creates value that migrates somewhere - the only question is whether it migrates to you or away from you.
Take leasing. The released constraint is responsiveness - business hours, language barriers, the sheer friction of human-to-human scheduling. EliseAI already automates 90% of prospect interactions across 70% of the top 50 US rental operators. That’s a real number, today. But follow the thread. The new constraint isn’t technological, it’s regulatory. When algorithms optimise pricing and availability across an entire portfolio, you’ve crossed from operational efficiency into territory that regulators recognise as market coordination. The US Department of Justice’s lawsuit against RealPage is the canary. The firms that saw leasing automation purely as a cost play, without redesigning their compliance and governance architecture, are now exposed to a constraint they never anticipated. Release → new constraint → architectural consequence. The pattern holds.
You can run this logic through facilities management, asset management, development, valuation, property management - any workflow where humans currently bridge between systems, interpret data, or make routine decisions under time pressure. The binding constraints differ. The new constraints that emerge will differ. The architectural responses will differ. But the sequence is invariant, and that’s what makes RIRA a framework rather than just a case study.
If you’re wondering how RIRA applies to your specific workflow - leasing operations, portfolio strategy, facilities management, tenant experience - that’s precisely what we work through, step by step, in my forthcoming course. The investment deal example gives you the logic. Applying it to your own context is where the value compounds. It’s also where the non-obvious constraints - the ones that only surface when Release, Imagine, Redesign and Actualise interact - tend to catch firms out. Contact me for details.
The Takeaway
The $300 billion that was erased from software stocks is a signal, not an anomaly. Value is moving. It’s moving from tools that organise human work to systems that execute it. From dashboards to data layers. From seats to outcomes.
Christensen was right: when one layer commoditises, attractive profits don’t die - they change address.
And in this cycle, they’re migrating towards data ownership and execution.
The RIRA framework is a way of thinking clearly about where it’s moving in your specific context - and making sure you’re building on the right side of the barbell when it arrives.
And a way for PropTechs to avoid their own apocalypse. Or at least see it coming.
You need to prompt with skill
No matter who you are, quality prompting has a high ROI
A Prompting Framework for CRE Professionals
In the first part of this series we looked at RIRA - a strategy-to-execution framework for rebundling value creation in commercial real estate under conditions where AI materially changes the unit economics of analysis and content production. And last week we looked at ‘The CRE Automation Matrix Framework’ - a tool for understanding where value is moving to, what to automate, and how. This week we introduce ‘A Prompting Framework for CRE Professionals’, showing how effective prompting is a repertoire matched to domain expertise, AI fluency and task mode.
Together these frameworks form a set of foundational tools for maximising the impact, and leverage, of AI on any business.
THE EVIDENCE
Anthropic, makers of the frontier AI model Claude, recently published their fourth ‘Economic Index Report’, in which they show that people who can articulate complex, nuanced requirements in a prompt get complex, nuanced outputs. AI amplifies existing expertise rather than substituting for it. The ROI gap between a team that knows how to prompt well and one that doesn’t is enormous.
So if you have domain expertise, and prompting expertise, you will gain the most from current AI.
What This Framework Is
The ‘Prompting Framework for CRE Professionals’ is a practical framework for choosing how to prompt based on two realities of professional work:
(1) your domain expertise is relative to the problem you’re facing (you can be an “expert” in offices and a “novice” in data centres)
(2) your task is either Discovery (sense-making, shaping the problem, exploring options) or Execution (producing a deliverable under constraints).
The framework maps these two variables into four prompting modes - A, B, C, and D - and adds an expert loop (“Mode Switching”) that deliberately alternates between divergent analysis and convergent drafting to produce better work with fewer avoidable errors.
Why It Matters: The Strategic Problem and What Goes Wrong Without It
Most AI misuse in commercial real estate is not “people using the wrong model”. It’s people using the wrong prompting mode.
Without a mode choice:
You treat Discovery like Execution: You ask for an “IC pack” before you’ve clarified the investment thesis, risk boundaries, and decision criteria.
The Result: a polished document that is directionally wrong - high credibility veneer, low decision usefulness.
You treat Execution like Discovery: You stay in brainstorming indefinitely, generating option overload and never landing a decision-ready artefact.
The Result: time sink, “AI theatre”, and stakeholder impatience.
Novices over-delegate: A junior analyst asks for market comps, regulatory interpretation, or lease risk conclusions without guardrails.
The Result: plausible-sounding inaccuracies that slip into deliverables and create reputational or compliance risk.
Experts under-specify: A senior person assumes the model “knows what I mean” and provides thin context.
The Result: generic output, hidden assumptions, and brittle analysis.
Teams don’t share a prompting language: Everyone prompts differently, so outputs are inconsistent, review is slower, and governance becomes hand-wavy (“Just be careful with AI”).
THE FRAMEWORK EXPLAINED
Core Insight (Domain Expertise × AI Fluency × Task Mode)
The intent of the framework is to remind you that “good prompting” is not a universal template. It is a repertoire you select based on who you are in that moment (expert/novice), what you are doing (discover/execute), and how comfortable you are with AI as a collaborator.
How to Use It
Before you type anything substantial, force a 10-second classification:
Am I effectively a novice or expert for this problem?
Am I in Discovery or Execution mode?
What is my AI fluency - do I know how to specify constraints, request assumption registers, and force checkability?
Typical mistakes:
Confusing seniority with domain expertise for the task.
Skipping “task mode” and asking for outputs too early.
Treating AI fluency as irrelevant (“I’m a real estate person, not a prompt engineer”).
CRE example: You are an experienced office investor exploring PBSA for the first time. You are an expert in underwriting discipline but a novice in this asset class’s demand drivers and operational risks. You should start in A (Guided Exploration) and quickly move into B (Scaffolded Production) for structured outputs, rather than pretending you are in D.
Quadrant A: Guided Exploration (Discovery mode × Domain novice)
For: Domain novices learning the territory.
Technique 1: Outcome-first querying
Intent: Anchor exploration in the decision you eventually need to make.
How to use it: Ask for the shape of an answer before the content. Start with “What would a good answer look like?” or “What decision does this inform, and what inputs would it require?”
Typical mistakes: Asking for facts and figures before you know which facts matter; accepting an answer that isn’t decision-linked.
CRE example: “I’m evaluating a retail park acquisition. Outline the top 12 questions an IC needs answered and why each matters (cashflow, leasing, capex, ESG/MEES, risk). Then tell me what data I need to answer them.”
Technique 2: Terminology unpacking
Intent: Reduce ambiguity and prevent false confidence.
How to use it: Ask for definitions, distinctions, and “if you hear X, what does it imply?”
Typical mistakes: Using terms loosely (“prime”, “stabilised”, “reversionary”) and letting AI mirror your vagueness.
CRE example: “Explain the difference between ERV, passing rent, and headline rent in a UK multi-let office; list common traps when comparing deals.”
Technique 3: Map-before-territory
Intent: Build a conceptual map before you go deep.
How to use it: Ask for frameworks, taxonomies, checklists, and causal diagrams that organise a domain.
Typical mistakes: Diving into detailed drafting without a map; treating a single framework as “the truth”.
CRE example: “Create a taxonomy of data-centre underwriting risks: power, planning, customer concentration, capex, supply chain, grid constraints, and exit liquidity - then explain how each category shows up in diligence.”
Quadrant B: Scaffolded Production (Execution mode × Domain novice)
For: Domain novices who need to produce work product.
Technique 1: Template generation
Intent: Reduce cognitive load by producing a strong structure that you fill with verified inputs.
How to use it: Request a deliverable skeleton with headings, subheadings, and placeholder prompts for required data.
Typical mistakes: Letting AI fill placeholders with invented content; confusing a template with a conclusion.
CRE example: “Generate an IC pack structure for a UK office acquisition. For each section, include: required inputs, typical analyses, and ‘red flag’ questions. Use placeholders rather than invented numbers.”
Technique 2: Checklist-driven execution
Intent: Shift quality from “hope” to “process control”.
How to use it: Ask the model to create a completion checklist and a review checklist (what to verify; what to cite; what cannot be assumed).
Typical mistakes: Only checking the final narrative, not the underlying assumptions and calculations.
CRE example: “Create a QA checklist for a lease abstract summary used in underwriting: items to verify, clauses that must be escalated, and what constitutes ‘insufficient evidence’.”
Technique 3: Role-based guardrails
Intent: Force the model to behave like a constrained assistant, not a confident commentator.
How to use it: Assign a role plus rules: ask clarifying questions, provide an assumptions register, avoid fabricating, flag uncertainties.
Typical mistakes: Overly theatrical roles (“be a genius investor”) instead of operational constraints; failing to require uncertainty labelling.
CRE example prompt pattern (short):
Act as a CRE analyst. If key inputs are missing, ask for them. Use placeholders rather than guessing.
Provide:
(1) Assumptions register,
(2) Data required list,
(3) Draft output.
Do not invent rents, yields, or regulatory claims.
Quadrant C: Accelerated Analysis (Discovery mode × Domain expert)
For: Domain experts exploring new angles or complex problems.
Technique 1: Adversarial stress-testing
Intent: Use the model as a structured critic to surface fragility.
How to use it: Ask for a red-team review: “If this is wrong, how would it fail?” Force concrete failure pathways.
Typical mistakes: Asking for “pros and cons” (too generic); not tying critique to evidence requirements.
CRE example: “Red-team this investment thesis: ‘Grade A offices will rebound due to flight to quality’. Identify 8 ways it fails, what early signals would show failure, and what mitigations exist.”
Technique 2: Comparative framework generation
Intent: Avoid single-lens thinking by generating multiple analytical frames.
How to use it: Request 3–5 alternative frameworks (e.g., demand-side, regulatory, financing, operational) and compare what each highlights.
Typical mistakes: Treating frameworks as decoration rather than decision tools.
CRE example: “Analyse a workplace strategy decision using three frames: productivity/performance, talent/retention, and cost/risk. Where do they conflict?”
Technique 3: Assumption surfacing
Intent: Make hidden assumptions explicit so they can be tested.
How to use it: Ask the model to list assumptions, classify them (critical vs minor), and propose validation methods.
Typical mistakes: Allowing assumptions to remain embedded in prose.
CRE example: “List all assumptions embedded in a ‘stabilised income’ valuation narrative and propose which can be evidenced vs which require judgement.”
Technique 4: Perspective multiplication
Intent: Create decision robustness by simulating stakeholder lenses.
How to use it: Ask for outputs from specific perspectives: IC chair, lender credit, occupier, planner, ESG committee, asset manager.
Typical mistakes: Using caricatures; failing to produce actionable questions.
CRE example: “From a lender’s credit committee perspective, what are the five deal-breakers in this business plan? What evidence would satisfy each?”
Quadrant D: Precision Delegation (Execution mode × Domain expert)
For: Domain experts in production mode.
Technique 1: Front-loaded context & constraints
Intent: Replace ambiguity with bounded execution.
How to use it: Provide the scenario, the decision context, constraints, definitions, what not to do, and your preferred format.
Typical mistakes: Under-specifying constraints; assuming the model knows your house style; forgetting confidentiality boundaries.
CRE example: “Draft the ‘Risks & Mitigations’ section in our IC style: short bullets, no marketing language, each risk must have an owner and a mitigation plan.”
Technique 2: Artefact-specific commands
Intent: Tell the model exactly what artefact you are producing (memo, IC section, lease clause summary, board slide) and the acceptance criteria.
How to use it: “Produce X, for audience Y, with length Z, and include A/B/C elements.”
Typical mistakes: Asking for “a report” instead of a defined artefact.
CRE example: “Write a 250-word executive summary for an IC: thesis, key numbers placeholders, top 3 risks, recommendation.”
Technique 3: Negative specification
Intent: Prevent known failure behaviours.
How to use it: Explicitly prohibit: invented numbers, uncited claims, legal conclusions, unbounded scope, buzzwords.
Typical mistakes: Only saying what you want, not what you don’t want.
CRE example: “Do not reference specific regulations unless I provide text. If uncertain, flag as ‘needs legal check’.”
Technique 4: Output format locking
Intent: Make outputs reusable and reviewable.
How to use it: Fix headings, bullet structure, and required sections (assumptions; data gaps; confidence).
Typical mistakes: Letting format drift across iterations; mixing analysis and narrative.
CRE example: “Use exactly these headings: Background; Investment case; Key sensitivities; Risks; Mitigations; Open questions.”
Technique 5: Iterative refinement
Intent: Improve quality through controlled revision rather than re-prompting from scratch.
How to use it: Run short cycles: “Revise only section 3; keep the rest unchanged; incorporate these three changes.”
Typical mistakes: Rewriting everything (introduces new errors); failing to track what changed.
CRE example: “Revise the leasing section to reflect a 12-month void assumption and add two mitigations; do not change other sections.”
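The Quadrant D techniques above - front-loaded context, format locking, and negative specification - can be sketched as a simple prompt builder. This is an illustrative sketch only: the function name, fields, and example content are all hypothetical, not part of the framework or any real API.

```python
# Illustrative sketch: assembling a Quadrant D ("Precision Delegation") prompt
# from the techniques above. All names and example content are hypothetical.

def build_precision_prompt(task, context, constraints, headings, prohibitions):
    """Combine front-loaded context, format locking, and negative
    specification into a single execution-mode prompt string."""
    lines = [
        f"Task: {task}",
        f"Context: {context}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "Use exactly these headings:",
        *[f"## {h}" for h in headings],
        "Do NOT:",
        *[f"- {p}" for p in prohibitions],
        "If uncertain, flag the item as 'needs review' rather than guessing.",
    ]
    return "\n".join(lines)

prompt = build_precision_prompt(
    task="Draft the 'Risks & Mitigations' section of an IC memo",
    context="Value-add office acquisition; 12-month void assumption",
    constraints=["Short bullets", "No marketing language",
                 "Each risk needs an owner and a mitigation"],
    headings=["Background", "Risks", "Mitigations", "Open questions"],
    prohibitions=["Invented numbers", "Uncited claims", "Legal conclusions"],
)
print(prompt)
```

The value of wrapping this in a function is consistency: the format lock and the negative specification travel with every prompt, rather than being retyped (and drifting) each time.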
The Expert’s Meta-Skill: Mode Switching (C → D → C → D)
Intent: Experts get better outcomes by alternating between:
C (Diverge): generate options, risks, questions, frames
D (Converge/Draft): lock format, draft deliverable
C (Stress-test): red team, assumption checks, scenario attacks
D (Refine): tighten, align, finalise, QA
Typical mistake: Staying in one quadrant because it feels productive (either endless ideation or premature drafting).
CRE example: In underwriting, use C to expand risk hypotheses, D to draft the IC narrative, C to stress-test with lender/IC lenses, D to produce the final pack with an assumption register and clear decision asks.
WHY BOTHER
Perhaps this reads as prescriptive. In a way it is - until it becomes muscle memory. Then you can iterate and ‘free solo’ your way to exceptional outcomes. The discipline is in knowing why you are doing X in Y order.
This is workflow control. You already manage risk with checklists, templates, and IC disciplines. This framework simply extends that discipline to AI-assisted work.
Use it as a structured assistant: to generate your own work frameworks, surface assumptions, draft formats, and run adversarial critique - tasks where reliability comes from constraints and review.
Do make the effort with mode switching, because it replaces rework. It feels slow, but it is the fast way to get where you need to be. Without pushing in and then zooming out you miss too much, and that usually means endless versions before a final outcome. Stress-testing as you go forestalls all of that.
In Quadrant D, which requires artefact-specific commands and format locking, you can upload your own house style and templates and jump straight to outputs that are ready to go.
And if you are worried that your messy data means none of this will work very well, this is exactly why you need assumption surfacing and evidence plans. The framework makes uncertainty explicit. Once known you can fill in the gaps.
Here’s a 10-minute quick start you can try out today.
Pick one live task you’re doing this week (IC pack, lease summary, board note).
Decide Discovery vs Execution. Decide whether you’re a novice or expert for that task.
Run one prompt from the matching quadrant:
A: ask for a map and definitions
B: ask for a template with placeholders + checklist
C: ask for assumption register + red team
D: lock format + constraints + negative spec
Add one mandatory control: “No invented numbers; use placeholders; list missing inputs.”
Save the best prompt as your team’s starting template.
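The quick start above can be captured as a small starter-prompt table your team extends over time. The prompt wording below is illustrative only - adapt it to your own templates; the mandatory control is the one quoted in the checklist.

```python
# Starter prompts for each quadrant of the quick start above.
# The prompt wording is illustrative; adapt it to your own house templates.

MANDATORY_CONTROL = "No invented numbers; use placeholders; list missing inputs."

STARTER_PROMPTS = {
    "A": "Map this topic: key terms, definitions, and how the pieces relate.",
    "B": "Produce a template with placeholders for each required input, plus a completion checklist.",
    "C": "List every assumption behind this plan, then red-team the three weakest ones.",
    "D": "Use exactly these headings: ...; short bullets; lock the format; prohibit invented content.",
}

def starter_prompt(quadrant):
    """Return the quadrant's starter prompt with the mandatory control appended."""
    return f"{STARTER_PROMPTS[quadrant]} {MANDATORY_CONTROL}"

print(starter_prompt("C"))
```

Saving these as a shared dictionary (or simply a shared document) is what turns step 6 - "save the best prompt as your team’s starting template" - into a habit rather than an aspiration.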
Working with this framework will lead to compounding returns. Referring back to the Anthropic study, how we prompt the AI determines how effective it can be.
A closing caution: this framework makes AI collaboration more effective, but “more effective” applied to the wrong task simply accelerates waste. Prompting discipline without task discipline is efficiency theatre with better production values.
This is the third in a stack of frameworks, and sequence matters. RIRA asks the strategic question: how are we creating value—faster taxis, better taxis, or Uber? The CRE Automation Matrix asks the analytical question: what kind of work is this, and what’s the right human-AI interaction model? This Prompting Framework answers the operational question: how do we actually get that work done?
Work the stack top-down. If you haven’t passed the “faster taxis or Uber?” test, no prompting technique will rescue you from building the wrong thing more efficiently. Start with strategy, filter through analysis, then reach for this framework when you’re ready to execute properly.
You're Probably Automating the Wrong Things
A framework for knowing where value actually lands
Last week we looked at RIRA - a strategy-to-execution framework for rebundling value creation in commercial real estate under conditions where AI materially changes the unit economics of analysis and content production.
This week we’re going to look at ‘The CRE Automation Matrix Framework’ - a tool for understanding where value is moving to, what to automate, and how.
WHAT THIS FRAMEWORK IS
The CRE Automation Matrix is a decision framework for choosing how to apply automation and AI across commercial real estate workflows by classifying work along two dimensions:
What kind of work it is (operational “plumbing” versus strategic “cognition”)
How verifiable it is (hard-to-verify versus rule-based/testable).
Its practical purpose is to prevent two common errors:
(i) automating the wrong things (low ROI or high risk)
(ii) automating the right things with the wrong method (straight-through automation where you actually need evidence links, tests, and human exception handling).
WHY IT MATTERS: THE STRATEGIC PROBLEM
Most CRE organisations are currently “tool shopping” (Copilot, chatbots, RPA, point solutions) rather than systematically deciding where automation belongs, what must remain human-led, and what requires verifiability engineering (tests, evidence, audit trails).
This framework solves the prioritisation, design and risk control problems:
Prioritisation: Where is the highest ROI with the lowest governance burden?
Design: What automation pattern fits the reality of the work (straight-through, evidence-based agents, or human-in-the-loop)?
Risk control: Where do hallucinations, silent failures, and compliance gaps most commonly occur?
WHAT GOES WRONG WITHOUT IT
You automate cognition without verifiability
You get fluent outputs that are not defensible: investment narratives without evidence links, lease abstracts without citation, “compliance checks” that miss edge cases.
Result: reputational and financial risk.
You under-automate plumbing because it feels unglamorous
Teams chase “AI strategy” while still rekeying rent rolls, reconciling invoices manually, and firefighting data quality.
Result: the operational drag remains; AI “ROI” never shows up.
You treat messy workflows as if they were standard workflows
Cross-org tasks and tribal processes (variations, exceptions, unwritten rules) break brittle automations.
Result: compounding errors, escalation fatigue, and loss of trust in automation.
You blur accountability:
Who signed off the value?
Who verified the covenant?
Who checked the lease clause?
Without the matrix, AI adoption often smuggles in a dangerous assumption: “the model did it”.
The bottom line?
If you cannot explain how a task is verified, you are not “automating”; you are outsourcing judgement to a stochastic text generator - and hoping your governance function never notices.
WHY IT WORKS SO WELL
The power of this framework comes from the way it melds the type of work with its verifiability.
The Vertical Axis: Complexity of Work (Plumbing → Cognition)
The intent here is to understand what is the nature of the work being undertaken.
So first you decide whether a task is ‘Plumbing’ or ‘Cognition’. Is it about moving, transforming, routing, reconciling, standardising, or validating? Then it is ‘Plumbing’. Or is it about interpreting, prioritising, negotiating, scenario framing, decision-making, or storytelling with accountability? Then it is ‘Cognition’.
Many tasks contain both, in which case you classify based on where the risk and value concentrate.
People make typical mistakes here: calling everything “strategy” just because it touches money; treating “cognition” as producing nicer text rather than making (and owning) a decision; labelling anything involving writing as “strategic”, then optimising for eloquence instead of decision clarity - options, trade-offs, thresholds, owner, next action.
CRE examples might be: SLA-based triaging of incoming maintenance requests is ‘Plumbing’ whereas deciding whether to hold/sell an underperforming asset in a thinly traded submarket is ‘Cognition’.
The Horizontal Axis: Verifiability
Here the intent is to distinguish tasks where outputs can be objectively checked (rule-based/testable) from tasks where correctness is contested, contextual, or data is too sparse.
So for any task you need to ask three questions:
Is there a reference standard? (rules, schema, thresholds, contractual language, accounting logic, policy)
Can I test the output? (logic tests, reconciliation, constraints, sampling, peer review)
Can I attach evidence? (citations to leases, invoices, emails, comps, photos, source systems)
If the answer is “mostly yes”, you are on the Verifiable side. If “mostly no”, you are on Hard-to-Verify.
Typical mistakes are confusing “sounds plausible” with “verifiable”. Or assuming availability of documents equals availability of evidence (a PDF is not evidence unless you can reliably extract and cite the relevant clause). Or treating a policy statement as a test (a test requires pass/fail conditions).
CRE examples might be:
Verifiable - “Does the rent roll total match the general ledger control account within tolerance?” (reconcilable)
Hard-to-verify - “Is this tenant likely to renew if we refurbish?” (contextual, behavioural, market-driven)
With these two axes in place, we arrive at:
FOUR QUADRANTS
Quadrant D
Human-Led Strategic (Hard-to-Verify Cognition)
Where the intent is to protect genuinely strategic, low-verifiability decisions from premature automation while still using AI to improve the quality of thinking.
Such as here: Pricing in Thin Markets; Negotiation and Relationships; Strategy (Hold/Sell); Development Vision.
Strategically this is about: ‘AI as Challenger, Human Decider’.
Quadrant B
The Karpathy Frontier (Verifiable Cognition)
Where the intent is to capture the emerging sweet spot: tasks that feel cognitive (reading, reasoning, writing) but can be engineered to be verifiable through evidence links and tests. This is where AI can be powerful and governable.
Such as here: Lease Abstraction (Schema+Evidence); Compliance Checks (Rulebook); Valuation Model QA (Logic Tests); Investment Memo Coherence (Evidence Link).
Strategically this is about: ‘Reinforcement Learning with Verifiable Rewards’ and Evidence-Based Agents.
NOTE: Andrej Karpathy is one of the most revered AI researchers in the world and he has recently been arguing that the most promising frontier for AI capability expansion lies in domains where outputs can be verified programmatically - enabling models to learn from feedback loops rather than human labelling. This quadrant represents that frontier applied to CRE.
Quadrant C
The Messy Middle (Hard-to-Verify Plumbing)
Where the intent is to acknowledge where automation fails most often: not because the task is “hard”, but because the environment is non-standard (variable inputs, inconsistent behaviours, cross-boundary handoffs).
Such as here: Unpredictable Counterparties; Messy PDFs/Images; “Tribal” Processes; Cross-Org Workflows.
Strategically this is about: Automating the Spine, Human Exception Handling. The goal isn’t eliminating humans from these workflows; it’s ensuring they’re only invoked where they add value, not where they’re compensating for brittle automation.
Quadrant A
High ROI Automation (Verifiable Plumbing)
Where the intent is to focus attention on the “boring” work that yields disproportionate ROI because it is frequent, standardisable, and testable.
Such as here: Rent Roll Ingestion; AP/AR Reconciliation; SLA-Based Routing; Data Quality Checks.
Strategically this is about: Full Automation and Straight-Through Processing.
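The matrix logic above reduces to a toy classifier: answer the three verifiability questions, decide plumbing versus cognition, and a quadrant (with its recommended automation pattern) falls out. This is a sketch under one illustrative assumption - that "mostly yes" on the three questions means at least two of three - which is my shorthand, not part of the framework.

```python
# Toy sketch of the CRE Automation Matrix logic. The two-out-of-three
# threshold for "mostly yes" is an illustrative assumption only.

def classify(is_cognition, has_reference_standard, is_testable, has_evidence):
    """Map a task onto a quadrant from its two axes:
    complexity (plumbing vs cognition) and verifiability."""
    yes_count = sum([has_reference_standard, is_testable, has_evidence])
    verifiable = yes_count >= 2  # "mostly yes" on the three questions
    if is_cognition and not verifiable:
        return "D: Human-Led Strategic (AI as challenger, human decider)"
    if is_cognition and verifiable:
        return "B: Karpathy Frontier (evidence-based agents, verifiable rewards)"
    if not is_cognition and not verifiable:
        return "C: Messy Middle (automate the spine, human exception handling)"
    return "A: High ROI Automation (straight-through processing)"

# Rent roll vs GL reconciliation: plumbing; rule-based, testable, evidenced.
print(classify(False, True, True, True))    # lands in Quadrant A
# Hold/sell call in a thin submarket: cognition with little to check against.
print(classify(True, False, False, False))  # lands in Quadrant D
```

The point of writing it down this way is not to automate the classification, but to make the two questions explicit and repeatable when a team triages a workflow task by task.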
FROM TASKS TO WORKFLOWS
Most workflows span multiple quadrants - a quarterly asset review involves Quadrant A reconciliation, Quadrant B compliance checks, and Quadrant D hold/sell judgement. The framework’s purpose is to decompose workflows into constituent tasks and optimise each appropriately.
This connects directly to last week’s RIRA framework. The ‘Redesign’ phase isn’t just about improving individual tasks - it’s about re-architecting workflows so that quadrant transitions become deliberate design decisions. Where does verification occur? Where do evidence chains form? Where does human judgement enter and exit?
The Karpathy Frontier matters here because verifiable cognition is what enables linkage. A cognitive task that produces verified, evidence-linked outputs can feed downstream automation. One that doesn’t creates a break - requiring human review not because the task demands judgement, but because its output can’t be trusted by the next process in the chain.
WHERE’S THE VALUE?
Dive deep into these quadrants and it becomes very clear where automation should occur, and how.
More importantly though, they tell you where future value lies and the future shape of the real estate industry.
Verifiable plumbing must be automated as soon as possible - it will be relatively easy to do, and everybody will do it. So you have to do it, but you won’t find much value there. Early movers will gain a short-term advantage (say up to two years), but eventually all gains will be competed away.
Hard-to-verify plumbing is why you’ll still be in business, and why labour reduction per unit of output will be limited. It’s where the ‘messy middle’ tasks lie: tasks that aren’t particularly high-value but are very hard to automate. This will be the refuge of the complacent and the Luddite, who will rely on this ‘irreducibly human’ work to see them through. The problem is that the refuge will be relatively short-lived - AI will ‘come for’ more and more of this work over time. As capabilities expand, more messiness becomes automatable.
Verifiable cognition is where good money lies, because making work verifiable is hard, complex, detailed, and reliant on excellent systems and smart domain knowledge. Only the best companies will be able to automate this work, and it is unlikely to be commoditised. Sure, individual lease abstraction will be - but perhaps not at scale, at maximum accuracy, and seamlessly linked into wider workflows. This is where tacit knowledge becomes explicit, but protected by IP rights.
The best money, though, will be in hard-to-verify cognition: areas of genuine, and likely persistent, human irreducibility. This is where tacit knowledge cannot be codified and genuine ‘wisdom’ resides - along with, probably, some formats of proprietary data and knowledge. This is where all the due diligence has been done and looks good, but the advice comes back: ‘don’t do the deal’. This game, though, is played in rarefied air. Getting to play won’t be easy.
This CRE Automation Matrix is the analytical follow-up to the strategic vision you’ll develop with last week’s RIRA framework, and it is to be used alongside it, iteratively. You’re working on your H1, H2, and H3 visions - but where do the component parts sit within this matrix? Are they defensible? Is that where value will be found?
CONCLUSION
In five years, the firms that automated plumbing will be table-stakes competitive. The firms that engineered verifiability into their cognitive work will be pulling ahead. And the firms that built genuine strategic judgement—the hard-to-verify cognition that can’t be copied—will be playing a different game entirely.
The matrix doesn’t tell you which game to play. It shows you which game you’re currently in.
It’s Time to Get Serious
RIRA and Value Creation Rebundling for CRE
In late 2025 AI crossed a tipping point. What had been, since the launch of ChatGPT in November 2022, an exciting but somewhat maddening new technology morphed into something much more robust. Yes, the ‘Jagged Edge’ of capabilities still exists, but an increasingly large number of business tasks and workflows are now firmly within AI’s purview. Put simply, we have reached the moment when we need to take all of this seriously.
The Steam Engine Analogy
In the 1880s, steam engines in factories slowly started to be replaced by new-fangled electric motors. Great productivity boosts were promised, but they never arrived. It turned out that swapping the steam engine driving a central shaft, with its belts and pulleys, for an electric motor made next to no difference to how well factory workers could actually work. It wasn’t until nearly four decades later - when factories themselves were redesigned, work was disaggregated, and the single central power drive was replaced by a multitude of motors serving a multitude of different processes - that any meaningful productivity boost appeared. And then it went ballistic: productivity soared and the Roaring Twenties were underway.
Leveraging the new technology was impossible when bolted on to current workflows. Only when workflows were redesigned to specifically match the capabilities of the new technology did change occur. One needed to bake a new cake, not just put cream on top of the old one.
And now, history is repeating.
The ‘J’ Curve
Stanford Professor Erik Brynjolfsson has written about what he calls the Productivity J-Curve: when new technologies are introduced, productivity actually declines for some time before turning upwards dramatically. This happens because we have to invest in the intangible capital of new processes, organisational design, workforce training, and complementary innovations in advance of reaping any rewards. It is not until all of this melds into place, flywheels emerge, and benefits compound that we stop being in the red.
And this is where we are now.
Changing Gears
The last two months of 2025 saw the release of ChatGPT 5.2, Gemini 3 and Claude Opus 4.5 - three enormously powerful frontier models. On top of these have emerged UX developments that make interacting with the underlying power of the models increasingly easy. Google released Antigravity, OpenAI released Codex 5.2, and Anthropic linked Claude Code with Opus 4.5. All are tools for developing ‘agentic’ systems (where multiple task-based ‘agents’ are strung together), primarily aimed at developers. The step change now emerging is that these tools are being given interfaces that are, for the first time, accessible to non-coders - most notably the new ‘chatbox’ version of Claude Code available via Anthropic’s desktop app.
We are very rapidly moving into the era of the ‘business coder’, where someone with no, or very limited, coding skills can summon up working software simply through natural language. The ability to explain one’s requirements clearly is now the defining skill. How well you understand ‘WHAT’ should be built, and how it should work, is the new superpower.
The barrier between thought and actuality is fading away.
We are all coders now.
I exaggerate to make the point, but give it another year and this absolutely will be reality.
But… how to know ‘WHAT’ to build?
Introducing The RIRA Framework
Let’s assume your ambitions stretch beyond having AI draft your emails, or summarise your documents. Let’s assume you’re ready to be serious.
Then you need to take The RIRA Framework (plus its two complements - the CRE Automation Matrix and the CRE Prompting Frameworks) seriously, because it is specifically designed for those thinking hard about how the real estate industry is set to change and how they, and their teams and companies, need to reposition themselves to maximise AI leverage.
What is it?
‘RIRA (Release–Imagine–Redesign–Actualise) is a strategy-to-execution framework for rebundling value creation in commercial real estate under conditions where AI materially changes the unit economics of analysis and content production.
It helps teams move beyond “using AI” towards engineering repeatable, evidence-based workflows that:
(i) explicitly surface what constraints are removed, and introduced, by AI
(ii) identify the new possibilities that become economically viable
(iii) redesign work into verifiable modules with controls
(iv) capture value through implementation, measurement, monetisation, and defensibility’
The critical terms are ‘constraints’, ‘new possibilities’, and ‘verifiable’.
AI both removes and introduces constraints: time and cognition are no longer scarce, but trust, confidentiality, and governance become critical.
AI enables us to be more efficient, or add new features, but far more importantly, it enables us to do things that were simply not possible before.
And AI makes whether something is verifiable or not, the cornerstone of how we design systems. You cannot, or at least should not, automate anything you cannot verify.
Why It Matters: The Strategic Problem
AI makes it cheap to generate outputs that look professional - memos, drafts, comps tables, market narratives, even pseudo-model commentary.
In CRE, where much value is mediated through decision confidence, risk allocation, and narrative control, this creates a structural trap:
The supply of plausible analysis rises sharply, but…
The supply of trustworthy analysis does not
RIRA is built to solve that mismatch: it forces a shift from “drafting at scale” to verifiable decision support, and from “internal efficiency” to capturable economic value.
What Goes Wrong Without It?
Without a RIRA-like discipline, organisations typically land in one of four bad equilibria:
- Productivity theatre: teams generate more documents faster, but decisions are no better; quality becomes subjective; risk quietly increases.
- Shadow AI: adoption happens informally; sensitive data leaks into tools; auditability is lost; governance reacts too late.
- Brittle automation: organisations automate workflows that should be redesigned, then get exception-handling chaos and reputational damage.
- Uncaptured value: real productivity gains occur, but are not measured, productised, or defended - so the benefit is competed away or simply absorbed as “busyness”.
RIRA matters because it treats AI adoption as an operating model change with explicit trade-offs: you release constraints, but you also introduce new ones; you imagine new value, but you must redesign for evidence; you implement, but you must capture and defend value.
Before And After
Before AI, narrative drafting and expert packaging were scarce - and this scarcity was what commanded a premium.
But AI upends this: drafting and packaging become easy, a commodity. Everyone can do that.
What everyone cannot do, and probably won’t because it is hard and involves real effort and commitment, is:
Proving it’s right (evidence)
Ensuring it doesn’t break (controls)
Embedding it in real systems (integration)
Getting it to the right people (distribution)
Making stakeholders trust it (governance credibility)
This is what I mean by value creation rebundling: the components that once commanded a premium unbundle, commoditise, and the premium rebundles elsewhere.
Release, Imagine, Redesign, Actualise
The RIRA Framework is not meant as a sequential process. It’s an iterative loop. As you work your way through looking at constraints, and failure modes (which tell you how to verify or where to retain a human in the loop) you’ll develop a feel for how transformative you can be with any given workflow. Some will remain much as is, with maybe some efficiency tweaks. Some will lend themselves to being made more capable. Some though, you will discover, fit perfectly into that ‘value creation’ bullseye.
Within this iterative loop you will use the ‘CRE Automation Matrix’ Framework to understand exactly which tasks to automate, and where value lies, and this will be the subject of next week’s newsletter.
For now, the imperative to grasp is that AI has reached a threshold where dabbling is no longer enough. In CRE, as in all businesses, we have reached the point where considerable change is a certainty, and iterating on the past won’t suffice.
There’s a phrase that does the rounds - “AI won’t take your job, someone using AI will” - but it misses the deeper point.
As Sangeet Choudary put it recently: “New tech collapses old constraints. Once a constraint disappears, the logic of competitive advantage and business model design must be reimagined from first principles.”
In CRE, the constraints that defined value creation for decades - the scarcity of time, cognitive bandwidth, and expert packaging - are dissolving. What emerges in their place is a different scarcity: evidence, verification, integration, governance credibility. The premium rebundles around whoever solves for those.
Bolting AI onto existing workflows won’t get you there. Redesigning around the new constraint set will.
That’s what RIRA is for. Next week: the CRE Automation Matrix, which shows you exactly where to look.
Where the Firm Learns Now:
GCCs and the New Operating Model of Knowledge Work
Last week I suggested that 2026 feels odd because our familiar ways of thinking are not aligning with the world as it is today. We keep trying to explain the world with single-cause stories - hybrid work, AI, rates, geopolitics, talent shortages - and each one is true enough on its own. But experienced together, they create a kind of conceptual vertigo.
This week I want to make one of the quieter structural shifts much clearer: the rise of the Global Capability Centre (GCC) as a default design pattern for large organisations.
Not as a footnote to globalisation, but as a clue to where firms are choosing to build themselves.
The GCC as operating-model infrastructure
A GCC in India still gets described, casually, as “offshoring”. That language is a hangover. It carries the intuition of factories and back offices, things moved for cost reasons, tasks reallocated, efficiency extracted.
What is happening now is more interesting and, I think, more consequential.
A modern GCC is increasingly where large organisations build and maintain the machinery of contemporary knowledge work. Not simply running processes, but owning platforms and capabilities that everything else depends on: engineering, data, analytics, cybersecurity, product modules, AI operations, internal tooling. In many firms, these centres are where competence compounds, where people stay longer, systems get understood deeply, and teams develop muscle memory.
The most important effect isn’t always visible as layoffs “at home”. The more common mechanism is that the next wave of roles, especially the ones that form career ladders and leadership pipelines, land somewhere else. It is growth displaced rather than jobs replaced.
That is a very different kind of shift to the one most public debate is set up to recognise.
What actually gets done inside a GCC (in 2026 terms)
It helps to break GCC work into layers, because they are often conflated:
1) Industrialised execution
These are repeatable, verifiable outputs:
Data engineering and pipeline maintenance
QA and testing
Production analytics and dashboards
Reporting, reconciliations, controls
Operational risk processes, monitoring
This is the “cognitive factory” layer: high throughput, high standardisation, measurable outputs.
2) Platforms and orchestration
The systems and workflows that make the firm run:
Internal tools and automation
Enterprise workflow design
MLOps / AI ops (model deployment, monitoring, governance mechanics)
Cloud and platform engineering
Integration work across teams and systems
This layer is where capability becomes an asset, because once you own the platform, you shape what is possible.
3) Resilience, controls, and risk infrastructure
The nervous system of the organisation:
Cybersecurity architecture and operations
Incident response
Identity, access, and controls engineering
Compliance tooling and audit readiness
Operational resilience programmes
These functions rarely get the glamour, but they have become existential. That pushes them toward scale and continuity.
4) Product and engineering ownership (increasingly common)
This is the significant shift:
End-to-end ownership of services or product modules
Feature development tied to global roadmaps
Data products, model products, and internal “platform as a product” teams
At this point the GCC stops being a support function and becomes part of the organisation’s core building capacity.
The important point is not that every GCC does all of this, but that the direction of travel is clear: these centres are moving up the stack.
Why firms do this: the overlooked economics of coordination
Most people assume the dominant driver is cost. Cost matters, of course. But if cost was the whole story, we would see these centres treated as interchangeable labour pools, constantly shopped around.
Instead we see firms committing long-term capital, large footprints, and leadership attention. That tells you something else is doing the heavy lifting.
The under-discussed driver is coordination economics.
Modern knowledge work, especially in large firms, is not primarily constrained by the ability to hire brilliant individuals. It is constrained by the ability to field coherent, stable, persistent teams that can build complex systems over time.
What kills productivity is not the salary line. It is:
High churn and short tenure
Constant re-forming of teams
Loss of institutional memory
Weak shared context
Senior time spent endlessly recruiting and re-aligning rather than building
A large, well-run GCC can reduce that coordination drag because it often offers:
Greater team stability
Longer tenures
Deepening shared context
Stronger internal labour markets and career ladders
A culture built around platform stewardship rather than constant reinvention
This is a new organisational physics.
Last week I discussed JPMorgan’s commitment to a Brookfield-developed campus in India, institutional-grade capital, not a temporary lease. Microsoft’s India Development Center is one of its largest R&D centres outside Redmond and contributes materially to core Microsoft products and platforms. Over 130 UK firms operate Indian GCCs employing 200,000 professionals. The shift is real.
And once you see it, the GCC becomes less a “location choice” and more a structural choice about how the firm wants to learn and compound capability.
Note: This is not across the board, and the market is changing rapidly. Tier-1 Indian cities face attrition rates of 20–25% in IT roles; Tier-2 cities like Indore, Coimbatore, and Kochi offer 8–12%. That stability matters increasingly for agentic workflows, where deep institutional knowledge is required to train and govern AI systems effectively.
Separately, infrastructure is creating new centres of gravity. Google’s $15 billion investment in Vizag - gigawatt-scale data centre operations and a new subsea cable gateway - signals that AI-era location decisions are driven as much by power and connectivity as by talent.
A historical rhyme: the invisible move
This has echoes of manufacturing, but the similarity is easily overstated.
Manufacturing relocation was legible:
Factories shut
Jobs disappeared from specific places
Whole local ecosystems unravelled
Knowledge work relocation is quieter. It arrives as “global delivery models”, “platform teams”, “centres of excellence”. It often looks, in the moment, like perfectly reasonable corporate housekeeping.
Which is precisely why it has such power.
The story of the last few decades is full of shifts that were operationally rational and politically unreadable until they were entrenched. Supply chains reorganised; politics arrived later, and often with language that didn’t match the structure of the change.
Something similar is happening now.
Public discourse is still most fluent when the issue is visible and physical: factories, borders, trade in tangible goods. Knowledge work is harder to narrate because it doesn’t vanish in a single closure. It thins. It relocates as future growth. It becomes a missing ladder rather than a headline.
In a way, politics helps to hide the shift: not always deliberately, but structurally, because it tends to focus on what can be pointed at, photographed, and blamed. The more important change is often administrative and slow.
Why this connects to London (and other global cities)
A question that naturally follows is: if GCCs are where so much capability is being built, why do firms still invest so heavily in major hubs?
I think we are watching the “office” split into two functional roles, and the split is becoming sharper:
Capability hubs (often GCCs): places where execution at scale, platform building, and operational intelligence are concentrated
Commitment hubs (major global cities): places where client relationships, regulatory accountability, senior arbitration, and legitimacy are concentrated
These are different forms of work. They require different densities, different rhythms, different building types, and different economic logic.
It also helps explain why you can see large commitments to prime space in London alongside large commitments to capability centres elsewhere. The firm is not choosing one geography. It is designing a system with multiple centres of gravity.
The most important mechanism for the West: “missing growth”
If you want one phrase to hold onto, it’s this: missing growth.
The typical effect is not that Western offices empty overnight. It is that:
headcount growth slows
junior and mid-level ladders thin
whole cohorts of roles that used to be created domestically are created elsewhere
platform ownership and learning accumulate away from traditional hubs
This matters deeply for cities because cities do not just depend on jobs; they depend on:
ladders
clustering
progression
the density of early-career opportunity
the social infrastructure of an upper-middle class that compounds skills and civic capacity over time
When you displace growth, you eventually displace the social fabric built on that growth.
That is why this is a city story, not just a corporate efficiency one.
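The arithmetic behind "missing growth" is worth making concrete. As a purely illustrative sketch, with hypothetical headcounts and growth rates (none of these figures come from the piece), compare a hub growing at its historical rate with one whose new roles are created elsewhere:

```python
# Illustrative sketch: "missing growth" versus job destruction.
# All figures are hypothetical assumptions, not data from this piece.

def project(headcount: float, growth_rate: float, years: int) -> float:
    """Compound a headcount forward at a constant annual growth rate."""
    return headcount * (1 + growth_rate) ** years

domestic_today = 10_000      # hypothetical Western-hub headcount
historical_growth = 0.05     # annual growth under the old model
displaced_growth = 0.01      # growth once new roles are created elsewhere
years = 10

expected = project(domestic_today, historical_growth, years)
actual = project(domestic_today, displaced_growth, years)
missing = expected - actual

print(f"Expected headcount: {expected:,.0f}")
print(f"Actual headcount:   {actual:,.0f}")
print(f"Missing growth:     {missing:,.0f} roles")
```

Note what the numbers show: the domestic office never shrinks, yet after a decade roughly a third of the roles the old trend implied simply never appear. Nothing was cut; the ladder just stopped being built.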
The AI twist
One more nuance is important, even in a GCC operating-model piece.
GCCs are often built to standardise and systematise work. That is what makes them scalable. It also makes parts of them legible to automation. AI does not arrive later as an external disruption; it arrives inside these centres as a force that compresses the very work they were designed to industrialise.
That does not mean GCCs are a short-term fad. It suggests a more complex trajectory:
growth and consolidation now
compression of industrialised execution later
a shift towards orchestration, governance, exception handling, and platform stewardship
This is worth holding in mind because it affects both labour-market expectations and real estate planning in GCC locations.
But the immediate point remains: today, GCCs are where firms are choosing to build capability at scale.
What this means for commercial real estate
Put all of this together, and you get an explanation for a pattern many of us feel in the market:
Office demand isn’t behaving the way the old models said it should.
Hybrid work is part of the story. Rates are part of the story. But underneath them is a deeper decoupling: organisations are learning to scale capability without scaling domestic space in the way we once assumed.
For CRE, that implies:
A lower long-run demand ceiling in many Western markets, even if prime assets remain resilient
More polarisation between buildings that support high-stakes coordination and those designed for routine execution
Weaker development optionality (less need for large “growth leases” and expansion rights)
A sharper distinction between cities that host commitment and cities that host capability
And perhaps most importantly:
the office becomes less a universal container for work and more a specialised instrument for particular human functions.
That is an argument for clarity, not simply optimism or pessimism. We have to deal with the world as it is.
A closing question
If GCCs are becoming operating-model infrastructure, places where competence, continuity, and platform ownership increasingly compound, then we have to ask a question that commercial real estate has not been forced to ask in decades:
Where does the organisation actually learn now?
Because where it learns is where it eventually invests, not just in buildings, but in people, influence, and long-term relevance.
Next week I’ll add the accelerant: AI. Not as a general technology story, but as the force that re-sorts work into execution and judgement, and, in doing so, reshapes both offices and participation in value creation.
For now, one question to leave you with:
If growth is being displaced rather than destroyed, what would it mean to value office markets by “missing absorption” as much as by vacancy?
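To give that question some shape: a "missing absorption" measure might compare the take-up a market's historical trend implied with what it actually absorbed. The sketch below uses entirely hypothetical figures; the point is the shape of the metric, not the data.

```python
# Illustrative sketch of a "missing absorption" measure.
# Figures are hypothetical; only the structure of the metric is the point.

def missing_absorption(historical_rate_sqft: float,
                       actual_absorption: list[float]) -> float:
    """Cumulative gap between absorption implied by the historical
    trend and what the market actually absorbed."""
    expected = historical_rate_sqft * len(actual_absorption)
    return expected - sum(actual_absorption)

# Hypothetical market: 1.2m sq ft of net absorption per year historically,
# versus five recent years of much weaker take-up (sq ft).
recent_years = [600_000, 450_000, 700_000, 500_000, 550_000]
gap = missing_absorption(1_200_000, recent_years)

print(f"Missing absorption over 5 years: {gap:,.0f} sq ft")
```

Vacancy tells you how much space sits empty today; missing absorption tells you how much demand never showed up at all. They are different diagnostics, and a market can look healthy on the first while quietly failing on the second.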