THE BLOG
Cognitive Sovereignty - Use It Or Lose It
We’ve already arrived in the era of ‘Workslop’, and need to decide if this is just an amusing meme, or something much more perilous.
The adoption of Generative AI within businesses represents the most fundamental change to cognitive work since the Internet. And it presents us with a real paradox: it simultaneously has the potential to deaden our minds or to energise them. Your cognitive capabilities (which are what you are paid for) are either going to atrophy or be augmented. And which it will be is entirely down to you.
Executive Summary
Generative AI presents knowledge workers with a stark binary: cognitive atrophy or augmentation. Research shows that 40% of professionals now receive “workslop” (AI-generated content that appears substantive but lacks genuine insight) and those who produce it suffer significant reputational damage (50% are viewed as less capable, 42% as less trustworthy).
The risk isn't hypothetical. Unlike calculators or GPS - occasional tools for discrete tasks - AI is being embedded into the core daily work of nearly every knowledge professional. Persistent cognitive offloading leads to documented degradation: erosion of critical judgement, memory atrophy, automation bias, and loss of problem-solving capability.
But the same technology can augment rather than replace thinking. This requires intentional engagement: using AI as a cognitive scaffold for routine tasks while maintaining "strategic friction" to preserve deep thinking capabilities. This newsletter presents a five-pillar Intentional Intelligence Framework grounded in cognitive science:
Generative Primacy: attempt problems independently before consulting AI.
Strategic Friction: time-box AI access - schedule deep work without it.
Metacognitive Monitoring: maintain awareness of your thinking processes.
Contemplative Presence: use micro-pauses to interrupt autopilot behaviour.
Weekly Practice: commit to analog problem-solving and digital sabbaths.
The choice between atrophy and augmentation isn't made by your employer or the technology. It's made in dozens of small decisions daily about how you engage with AI. The consequences, for your cognitive capability and professional value, are now well documented.
“Workslop”
This hideously ugly word was recently coined in a Harvard Business Review article - ‘AI-Generated “Workslop” Is Destroying Productivity’.
The authors define it as ‘AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.’ And they say that 40% of the 1,150 US-based participants in the study received ‘workslop’ in the last month, and that 15.4% of the content they receive at work qualifies.
It seems to be having a corrosive effect on the workplace.
“Approximately half of the people we surveyed viewed colleagues who sent workslop as less creative, capable, and reliable than they did before receiving the output. Forty-two percent saw them as less trustworthy, and 37% saw that colleague as less intelligent.”
To be clear, AI didn't invent shallow work. Many corporate incentive structures have long rewarded visible activity over genuine insight. What has changed is that AI has now made the production of this plausible-sounding 'slop' nearly instantaneous and infinitely scalable, turning a chronic issue into an acute crisis for productivity and trust.
None of which is good. In fact, this is a flashing red sign of something likely to get very much worse unless steps are taken to mitigate it. After all, these are high numbers considering the still very low day-to-day usage of Generative AI in business. So what is going on?
Setting the Scene: The Competitive Edge
What is happening is a consequence of the fundamental purpose of AI systems. They are designed to remove friction and generate efficiencies. The aim is to eliminate effort and difficulty. That is their job.
Unfortunately, we humans are naturally lazy, and when given the opportunity to take the path of least resistance we take it. In a work environment the option to delegate thinking and accept the easy, immediate answer is too tempting for many to resist. And many don’t resist, which leads to what researchers call ‘metacognitive laziness’ and business people call loss of ‘competitive edge’.
Actions Have Consequences
This has consequences that are well known, and have been seen before many times. With the rise of calculators we lost the ability to perform mental arithmetic, when GPS became competent we lost the ability to read maps, and when Google came out we reconfigured our memory to remember where we can find facts rather than remember the facts themselves (known as the “Google” effect). All of this is supported by extensive academic research, as well as being something we can all relate to.
What is happening with AI though is more pervasive and more dangerous. We all have easy access to calculators, seldom need to read a map, and know how to ‘Google’. So having diminished cognitive function in these areas has little downside.
The threat from AI is of a different magnitude entirely. Calculators automate a single, discrete task. GPS is used only when navigating unfamiliar territory. These are edge cases. Generative AI, however, is being integrated into the core daily workflow of nearly every knowledge worker. It's not an occasional tool for a minor task; it's a constant partner for our most important work: writing, analysing, strategising, and creating. The cognitive offloading isn't occasional and peripheral; it's becoming constant and central.
The trouble is that AI, as a general-purpose technology, is rapidly acquiring higher-level knowledge skills; if we lose our ability to think, our intrinsic value trends downwards very rapidly. Pushing out ‘workslop’ is like having a sign above your head saying ‘I’m not needed’.
The Atrophy Thesis
The actual consequences of ‘cognitive offloading’ are broad, and frankly scary. They are known in the academic literature as the ‘atrophy thesis’ and, in headline terms, are:
Erosion of Critical Judgement
Long-term Memory Atrophy
Loss of Problem Solving Capability
Automation Bias (accepting AI results unconditionally)
Reduced Trust in Own Judgement
Reduced Capacity for Sustained, Deep Work.
Reduced Ability at Self-Monitoring and Critical Self-evaluation
Loss of Divergent Thinking and Creative Confidence
Which is a lot of downside in return for a spot of laziness, but it is overwhelmingly supported by converging evidence. Consistently offload your thinking to an AI and this will be you.
How to Do It Right: Augmentation as Strategy
There is, though, another way. The same technology that will deaden your brain if you let it can also be leveraged as a catalyst for growth. This requires pedagogical intent and a bias towards augmentation rather than automation: using the technology as an interactive tool to support and extend thinking, rather than simply replace it.
Augmentation in Practice
1. Scaffolding and Cognitive Load Optimisation: AI should function as a "cognitive scaffold", managing the extraneous mental effort of routine, lower-level tasks. By delegating routine components, such as data formatting, compiling standard environmental disclosures, or summarising long, non-critical market reports, you release and reallocate cognitive resources. We all have limited working memory, so the more ‘admin’-type work we can delegate the more capacity we’ll have available for hard thinking.
2. Enhanced Quality and Speed: Mostly, humans working with AI outperform humans working without it. A 2025 study, ‘The Cybernetic Teammate’, showed this conclusively: teams working with AI greatly outperformed teams working without, and individuals with AI saw the same boost, though they still lost out to teams with AI.
3. The Metacognitive Mirror: This academic term refers to using an AI as a thinking partner. By engaging with it intentionally, you can have it reason back at you, illuminating assumptions and exposing blind spots. By asking it to adopt different personas, you can stress-test your arguments against a range of interlocutors.
Discipline and the Power of the Pause:
The single greatest barrier to using AI as an augmentation tool is our own ingrained habit of seeking the fastest, easiest answer. The technology is designed for frictionless, immediate output, which triggers our brain's reward system.
To counteract this, we need a practical method to interrupt this automatic impulse. This is where the work of Buddhist Monk Gelong Thubten becomes surprisingly relevant to corporate strategy. He teaches a method for inserting "micro-moments of meditation” - brief moments of mindful awareness - into our daily workflow. By very consciously pausing at apposite moments we can move from reactive to reflective engagement.
The crucial factor in transforming AI use from a threat into an advantage is Intentionality. This intentionality must be cultivated through discipline, precisely because the natural impulse is towards automaticity.
Based on this thinking, a useful process might be as follows:
The Pre-Prompt Pause: Before typing a query into the AI (e.g., "Summarise this 50-page lease abstract"), take one conscious in-breath and out-breath. In that brief space, ask: "What is my intention? What do I truly seek to understand or achieve with this interaction?". This interrupts the automated, immediate impulse and introduces purpose.
The Post-Response Pause: After the AI generates the summary or draft, take another conscious breath before copying or acting on the information. Ask: "What is my critical evaluation of this output? Does it align with my professional judgment? What is the next wise action?”.
These micro-moments are not a "wellness add-on"; they are a direct form of cognitive training, building the metacognitive muscle required to resist passive delegation.
Building Cognitive Muscle Memory
Intentional engagement needs to become like muscle memory for athletes: something baked into how we operate.
At first glance, some of these principles might seem contradictory. How can we use AI as a 'cognitive scaffold' while also practicing 'strategic friction'? This is not a contradiction; it is a necessary duality for effective augmentation. Scaffolding helps us manage cognitive load for the task at hand, while Strategic Friction ensures the long-term health of our cognitive abilities, preventing the scaffold from becoming a permanent crutch.
Here is an Intentional Intelligence Framework:
Generative Primacy
What it is: This is the core principle of maintaining the cognitive effort required for learning by always generating your own answer or work before consulting AI. This counteracts the loss of the ‘generation effect’ and ensures the cognitive work that drives durable memory and skill building is performed.
What should you do: You should attempt problems independently for a set time (e.g., a 10–30 minute "try-first" period) or draft initial responses manually before using AI for refinement or comparison.
Strategic Friction
What it is: This principle involves deliberately re-introducing productive difficulty and effort into workflows to counteract the "frictionless" design of modern AI, which otherwise leads to skill degradation. This preserves the desirable difficulties necessary for long-term retention and transfer.
What should you do: You should implement time-boxed AI access (using it only during specific windows, not continuously) and schedule deep work blocks (e.g., 90–120 minutes daily) entirely without AI to ensure core faculties are exercised.
Metacognitive Monitoring
What it is: This is the practice of maintaining conscious awareness and regulation of one's own thinking, serving as the user's primary defence against automation bias and the "illusion of competence". It involves thinking about how you are thinking, assessing comprehension, and recognising habitual offloading patterns.
What should you do: You should practice pre-task intention setting by asking, "What is my intention?" before using AI, and perform post-task evaluation by asking, "What did I genuinely learn?". You should also engage in weekly pattern recognition to identify areas of over-reliance.
Contemplative Presence
What it is: This pillar integrates mindfulness practice to prevent the user from resorting to autopilot reactivity and habitual searching. It involves cultivating awareness of the present moment and one's internal impulses, which trains the metacognitive muscle needed for intentional AI engagement.
What should you do: You should practice the "Pre-Prompt/Post-Response Pause"—taking a conscious in-breath and out-breath before initiating or acting on AI interaction—to transform reactive impulses into conscious choices.
Weekly Practice
What it is: This component refers to structured, routine activities designed to sustain mental fitness and prevent AI from becoming a constant cognitive crutch. These practices serve as focused workouts for the most at-risk cognitive skills.
What should you do: You should commit to at least one complex, non-trivial problem-solving session weekly using only analog tools (pen and paper), and/or schedule a Digital Sabbath (a 12–24 hour technology-free period).
Conclusion
None of the above is hard. But the downsides of not doing this are definitely harsh. As we’ve seen above this really is an important choice that each of us needs to make. If we take the easy route and offload our thinking to AIs, then our brains will atrophy, and we’ll genuinely not be of much use to any employer. I suspect an awful lot of people will go this way, unaware of just how cognitively damaging their behaviour is. And the consequences for them will be bad.
The choice is now explicit. The consequences are documented. And having read this far, you don't have the excuse of ignorance. I'd guess you either aren't offloading too much anyway, or this will prompt you to adjust your behaviour.
I knew this was a big deal, but until doing the research for this newsletter I was not aware just how much research and evidence already exists around the topic.
I hope the above is enough to highlight the issue, but if you want to deep dive into this there is a mountain of material to consult.
PS. For reference I prompted Claude, Gemini and ChatGPT to each produce Deep Research reports which I then added to NotebookLM, and from there had long discussions, produced multiple reports, and three different audio overviews. In other words, not as substitutes for thinking, but as research assistants. The difference matters.
OVER TO YOU
What’s your behaviour with AI like? How are your habits? What are you going to change? How do you remain ‘the Boss’? I would love to hear.
CRE Crosses the Rubicon
Industry professionals need to be focusing on the future, not iterating the past.
Change in AI is happening so fast you need to ignore today’s capabilities, and start thinking of what might be possible soon. In just 15 months the real estate landscape is likely to look very different.*
Executive Summary
By December 2026, the commercial real estate industry will have crossed a technological Rubicon. The prevailing paradigm of Artificial Intelligence will have shifted decisively from its current state, a collection of discrete, human-operated tools for task automation, to the deployment of orchestrated systems of autonomous AI agents that manage entire, end-to-end business workflows.
This transformation will render many of today's operational models obsolete and fundamentally redefine the sources of competitive advantage. While today's market leaders leverage AI to enhance the productivity of their human workforce, the leaders of late 2026 will deploy a "digital workforce" of AI agents that function as proactive, collaborative teammates, executing complex processes with minimal human intervention.
Ignore ‘Exponential’ at Your Peril
Julian Schrittwieser is an AI researcher at Anthropic, makers of Claude, and one of the original authors of DeepMind’s AlphaGo and AlphaZero. So he’s something of a superstar. Last week he published an essay, ‘Failing to Understand the Exponential, Again’, in which he explained the mistake people make: when they encounter errors in current AIs, they jump to the conclusion that AI will never be capable of XYZ. Whereas if one follows the data, AI progress is moving at an extraordinary pace, and we should be expecting that:
Models will be able to autonomously work for full days (8 working hours) by mid-2026.
At least one model will match the performance of human experts across many industries before the end of 2026.
By the end of 2027, models will frequently outperform experts on many tasks.
So whilst we tend to focus on the here and now, we need to appreciate that Commercial Real Estate, alongside other industries, is fast approaching a pivotal moment where what is technologically possible is set to change dramatically.
When the 'Expert' is a Machine - What Then?
And this is going to upend the ‘work we do’, our business models, and where value and competitive advantage are to be found. When an AI can work autonomously for 8 hours with minimal human supervision, whole swathes of industry workflows can be executed in an entirely different way to today.
Even more so when the ‘intelligence’ of that autonomous agent matches or surpasses that of a human expert. OpenAI recently released research (GDPval) showing that today, across many tasks, 47.6% of deliverables by Claude Opus 4.1 were graded as better than or as good as the human deliverable. Fast forward 15 months and you get to ‘match the performance of human experts across many industries’. And one more year and ‘frequently outperform experts’ becomes commonplace.
Given the above you need to plan ahead. Whoever redesigns their operations for this world will have an inordinate advantage. And this advantage will compound, as these systems have a flywheel effect where each completed workflow acts as learning material for the next. In contrast those who don’t will find their knowledge degrades in value increasingly fast. Being a great saddle maker when cars arrived was not a great place to be.
Unlocking the Future - Data
To unlock the future, two foundational shifts are necessary:
First, the leap to expert-level, autonomous AI is impossible with the fragmented data infrastructure that plagues the CRE industry today. Professionals spend up to 80% of their time just gathering and cleaning data, a massive bottleneck to high-value work.
The vast majority of crucial information (leases, legal contracts, property photos) is unstructured and remains largely untapped. By treating data as a critical asset, future-ready companies will create a data spine that pulls all of this together and makes it possible to manage it effectively.
One often hears real estate people proclaim that proprietary data is their ‘gold mine’, but this will not last. More and more information is becoming open in one way or another, and AI makes the scraping, aggregating and synthesising of disparate data sources increasingly easy. Being able to orchestrate all of these data sources is where competitive advantage will lie. One doesn’t need to own data to extract value from it.
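To make the ‘data spine’ idea concrete, here is a minimal sketch of the extraction step it rests on: turning an unstructured lease into a structured record. The schema fields and the call_llm helper are hypothetical placeholders, not any particular product’s API.

```python
import json

# Hypothetical target schema for the data spine's lease table.
LEASE_SCHEMA = {
    "tenant": "string",
    "annual_rent_gbp": "number",
    "lease_start": "ISO date",
    "lease_end": "ISO date",
    "break_dates": "list of ISO dates",
    "repairing_obligation": "string",
}

def call_llm(prompt: str) -> str:
    """Placeholder: swap in whichever model API your firm uses."""
    raise NotImplementedError

def extract_lease_terms(lease_text: str) -> dict:
    """Ask the model to map an unstructured lease onto the shared schema."""
    prompt = (
        "Extract these fields from the lease below. Return JSON only, "
        "using null where a field is not stated.\n"
        f"Schema: {json.dumps(LEASE_SCHEMA)}\n\nLease:\n{lease_text}"
    )
    return json.loads(call_llm(prompt))
```

Run something like this over every lease, contract and report in a portfolio and you get the consistent, structured records that downstream agents can actually work from.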
Unlocking the Future - Agents
Secondly, the enormous value of AI that can work autonomously for hours is that one can start to orchestrate whole swarms of customised, bespoke ‘Agents’. By unbundling and rebundling workflows (which we’ve covered multiple times in this newsletter) it becomes possible to chain any number of tasks together to achieve a goal.
Autonomous Agents in Action
Here are four examples of autonomous workflows that will be common by December 2026.
Investment & Acquisitions: Predictive Underwriting
Today: AI-assisted valuation models use historical data. Due diligence is a manual, weeks-long process involving expensive experts.
December 2026: AI agents will perform autonomous due diligence. Fed an offering memorandum, an orchestrated team of agents will extract financials, abstract lease terms, and check planning/zoning laws, producing a comprehensive risk report in hours, not weeks. Valuation will become dynamic, with AI continuously analysing news, social sentiment, and satellite imagery to identify mis-priced assets before the market does.
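As a rough illustration of what an ‘orchestrated team of agents’ means in practice, here is a minimal sketch of such a chain, with stub functions standing in for the individual agents and a deliberate human sign-off at the end. The function names are invented for illustration.

```python
# Each stub below stands in for an agent that would wrap a model call or a tool.
def extract_financials(memo: str) -> dict:
    return {"noi": None, "rent_roll": []}          # stub

def abstract_leases(memo: str) -> dict:
    return {"key_terms": [], "break_clauses": []}  # stub

def check_planning(memo: str) -> dict:
    return {"zoning_issues": []}                   # stub

def draft_risk_report(*findings: dict) -> str:
    return "DRAFT RISK REPORT"                     # stub

def run_due_diligence(offering_memorandum: str) -> str:
    findings = [agent(offering_memorandum)
                for agent in (extract_financials, abstract_leases, check_planning)]
    report = draft_risk_report(*findings)
    # Human-in-the-loop gate: nothing is released without sign-off.
    print("Review before release:", report)
    return report
```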
Development & Construction: Proactive Project Orchestration
Today: AI assists with design optimisation. Project management is manual and reactive.
December 2026: Generative design will produce near-complete schematic designs and BIM models based on a set of constraints (budget, codes, energy targets). On-site, AI agents will act as a central intelligence hub, integrating live data from IoT sensors and supply chain APIs to proactively orchestrate schedules, predicting delays and recommending solutions before they disrupt the project.
Leasing & Tenant Management: Autonomous Leasing
Today: Simple AI chatbots handle basic tenant inquiries. Negotiations are fully human-led.
December 2026: End-to-end autonomous leasing agents will manage the entire workflow 24/7 - from engaging prospective tenants with hyper-personalised conversations to conducting virtual tours, running automated screening, and generating customised lease documents. For complex negotiations, a generative AI "copilot" will assist humans in real-time by redlining contracts, flagging risks, and suggesting legally compliant alternative clauses based on the firm's playbook.
Asset & Portfolio Management: Real-Time Risk Orchestration
Today: Predictive maintenance alerts are common. Portfolio analysis is a periodic, backward-looking activity.
December 2026: AI will enable continuous, real-time portfolio risk orchestration. Agents will work 24/7, monitoring tenant credit risk, tracking loan covenant compliance, and identifying ESG compliance gaps. Crucially, they will move to prescriptive intelligence, not just flagging a risk but autonomously modelling "what-if" scenarios and recommending quantified, data-driven solutions for human asset managers to approve.
Humans Must Maintain Agency
In each of these cases designing the autonomous workflow is a super-skill. It is up to you what to automate, and where to insert a ‘human in the loop’. You’ll probably design different processes for different circumstances. But the point is to balance autonomy with agency.
Much of the process for which you used to charge will become commoditised, so you need to recreate value elsewhere.
The focus will move from being a "doer" of tasks to a "strategic overseer" who can:
Provide Strategic Judgment: AI lacks common sense and an understanding of local nuance; human experts will provide the critical context and make the final judgment call.
Master Negotiation and Relationships: AI cannot replicate the empathy, rapport, and emotional intelligence required for high-stakes deal-making.
Build Trust: In a world of automation, trust becomes the ultimate currency. Authenticity and integrity remain fundamentally human differentiators.
Recommendations for 2026 Readiness
The transition to an autonomous, AI-driven operational model is not optional; it will trigger a period of "digital Darwinism" where technologically advanced firms gain insurmountable advantages.
1. Make Data Your Core Strategy: Immediately begin the work of breaking down data silos and building a unified "data spine." This is a C-suite level business initiative, not an IT project.
2. Domain-Specific Reasoning: Autonomous Agents need exceptional ‘context, clarity and constraints’, so assembling detailed instructions as to what inputs are required, what processes need to be executed, and what outputs are desired is essential. You can feed all the domain-specific information you have access to into the system, but you have to be very clear as to what the Agent can access, where it is, and how to get to it (a short illustrative brief follows this list).
3. Invest in Your People: Proactively manage the cultural shift by framing AI as a tool for augmentation, not replacement. Invest heavily in upskilling programs to equip your team for higher-value strategic roles.
4. Establish Robust AI Governance: The regulatory landscape is evolving rapidly. Build frameworks now to ensure data privacy, prevent algorithmic bias, and maintain transparency. This will become a source of competitive advantage.
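On recommendation 2, here is a hedged illustration of what a ‘context, clarity and constraints’ brief might contain. Every data source, path and rule named here is invented for the example.

```text
Agent brief: quarterly arrears summary (illustrative only)
Context: UK multi-let industrial portfolio, 14 assets, reporting in GBP.
Inputs: tenancy schedule export (read-only finance folder), prior quarter's
        summary, house report template.
Process: reconcile receipts against demands; flag any tenant more than 30 days
         overdue; compare the arrears rate with the prior quarter.
Constraints: do not contact tenants; do not modify source files; escalate
             anything ambiguous to the credit controller.
Outputs: one-page summary in the house template plus an exceptions table.
```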
Conclusion
The above may seem like science fiction and a world away from what you think AI is capable of. But it is not. This is a short-term prediction based on what we know for sure today. By the end of next year we will all have access to models that can perform autonomously (under our supervision) for a full working day, and they will be ‘expert’ in many of the tasks you have to do.
To make use of these capabilities you will have to have developed a unified data spine, and thought hard about what value you can generate now that many of your workflows will be commoditised by automation. I’ve no doubt this exists in spades: we all spend so much time processing data rather than thinking hard about it. In that thinking there is surely much value?
And, of course, much of the industry won’t have done any of this, or even thought much about it, so, short term at least, there’ll be huge value to be had from being an early adopter.
OVER TO YOU
How ready are you? What’s the state of your data? Do you have a decent grasp of the principles of data science? Have you built any Agents yet? Are you working for a future ready firm? If not, is it time to move on?
PS *
Some will argue this timeline is too aggressive, that implementation will lag capability, that I'm underestimating organisational inertia. They might be right. But here's the crucial asymmetry: if you prepare for rapid transformation and it takes 36 months instead of 15, you've merely invested in data infrastructure and AI literacy earlier than necessary. If you wait for certainty and transformation happens on the aggressive schedule, you've lost your business. In an environment of exponential change, the rational strategy is to prepare for the aggressive scenario.
When Bricks Meet Compute
How AI’s Economic Shift Will Reshape Real Estate
Economic growth is moving from human labour to computational resources. For the real estate industry, this means new tenants, new asset classes, and new rules for value creation.
Executive Summary
Artificial intelligence is not just another technology cycle. It represents a fundamental change in how economies grow, how work is organised, and how value is distributed. For the first time in modern history, economic progress is being driven less by the productivity of human workers and more by the expansion of computational resources, “compute”, and the energy that powers them.
For commercial real estate, for you, this shift is profound. The centre of gravity is moving away from routine office work and toward infrastructure that supports AI at scale: data centres, energy hubs, specialised R&D environments, and logistics platforms. Demand across asset classes will diverge sharply.
Three recent reports illuminate this future:
Epoch AI’s “AI_2030” projects the scaling of compute, data, and energy through 2030, with tangible capability milestones.
Agrawal, Gans & Goldfarb’s “Genius on Demand” models how knowledge work will be reallocated as AI “geniuses” enter the workforce, pushing humans to the creative frontier.
Pascual Restrepo’s “We Won’t be Missed” explores the long-run economy after Artificial General Intelligence (AGI), where compute drives growth, labour’s share declines, but absolute human prosperity can still rise.
Together, these reports suggest three horizons:
To 2030: Scaling continues unless constrained; AI transforms digital R&D.
2030–2040: Knowledge work reorganises; humans specialise in frontier creativity, AI dominates routine.
Post-2040: Growth decouples from labour, but humans still work and may prosper in absolute terms; outcomes hinge on how compute is owned and taxed.
The message for CRE professionals: fast change is underway, but multiple paths are possible. Up to 2030, we can map a baseline. Beyond that, the direction of travel is clear, though the speed and distribution are uncertain.
What the Three Papers Tell Us
"By 2030, a single AI training run could rival the annual consumption of a mid-sized city."
Epoch AI’s “AI_2030” is the near-term anchor. Compute used to train the largest AI models has grown 4–5x annually since 2010. Extrapolating this trend suggests training runs in 2030 could be 1,000 times larger than today, with costs in the hundreds of billions of dollars. Frontier training already consumes tens of gigawatt-hours of electricity per run; by 2030, a single training run could rival the annual consumption of a mid-sized city.
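A quick back-of-envelope check of that extrapolation, assuming the historical 4-5x annual growth simply continues from 2025 to 2030:

```python
# Five years of compounding at the historical 4-5x annual growth in training compute.
for annual_growth in (4, 5):
    factor = annual_growth ** 5
    print(f"{annual_growth}x per year for 5 years -> {factor:,}x larger")
# Output: 4x gives 1,024x and 5x gives 3,125x, which brackets the ~1,000x projection.
```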
Capabilities are expected to follow:
Software engineering benchmarks (SWE-bench) solved by 2026.
Mathematics reasoning (FrontierMath) potentially cracked by 2027.
Molecular biology protein-ligand modelling (critical for designing new drugs) benchmarks solved within this decade.
Weather prediction is already outperforming numerical methods on hours-to-weeks horizons.
Epoch stresses that scaling is contingent on energy, chip supply, and investment and not guaranteed. But it provides the most concrete baseline for 2030.
Agrawal, Gans & Goldfarb’s “Genius on Demand” takes a microeconomic lens. It models routine workers (who apply known knowledge) and geniuses (who create new knowledge at rising cost the further they move from what is known). Before AI, scarce human geniuses were allocated at the boundary of routine work. With AI geniuses entering, humans are pushed outward to more novel questions. Routine roles erode; the economy bifurcates into AI geniuses handling the mainstream and human geniuses at the frontier. The paper assumes managers allocate questions optimally, though in reality orchestration will be messy.
Pascual Restrepo’s “We Won’t be Missed” looks at the long-run equilibrium. Distinguishing bottleneck work (essential for growth, e.g. energy, logistics, science) from accessory work (non-essential, e.g. arts, hospitality), he argues:
Bottlenecks are automated as compute becomes abundant.
Economic growth becomes constrained by, and proportional to, the expansion of compute.
Human wages converge to the compute-equivalent cost of replicating their work.
Crucially, this cap is above today’s wages. Humans may prosper in absolute terms, but their share of growth declines as compute compounds faster.
Work does not disappear - humans still perform accessory tasks and some bottleneck complements.
Restrepo also stresses political economy: societies may tax compute, redistributing its returns. The long-run outcome is not jobless dystopia, but a shift in power between labour and compute.
Three Horizons — With Scenarios
Horizon 1 (to 2030): Scaling Baseline
If current scaling holds, AI delivers predictable capability gains in digital science. Desk-based research - software, maths, biology - flourishes. Compute and energy become the new bottlenecks.
Baseline Scenario: Scaling persists. Data centre demand grows rapidly, AI R&D tenants proliferate.
Alternative Scenario: Scaling slows. Algorithmic efficiency replaces brute force; AI progress continues but with narrower use cases and less energy demand.
Horizon 2 (2030–2040): Labour Market Reallocation
AI geniuses reshape knowledge work. Routine roles erode; humans concentrate on creativity and judgement. Realistically, this is not binary: tasks within jobs get unbundled, automated in parts, recombined into hybrid roles.
Baseline Scenario: Office demand will bifurcate: routine-heavy employers will shrink their footprints, while frontier-intensive occupiers will invest in specialised, collaboration-rich environments.
Alternative Scenario: Cultural and regulatory drag slows adoption; hybrid human-AI roles persist longer.
Horizon 3 (Post-2040): Compute-Driven Growth
Growth is pinned to compute. Labour’s share declines, but wages rise above today’s levels before flattening. Humans still work, particularly where compute is uneconomical or socially valued.
Baseline Scenario: Wealth concentrates among compute owners; housing affordability pressure grows.
Alternative Scenario: Societies tax compute, redistribute gains, and sustain broad-based prosperity.
What Could Break the Forecast?
Energy constraints: Grid capacity, renewable intermittency, and 3–7 year approval cycles for new projects.
Semiconductor limits: Approaching physical boundaries at atomic scale.
Regulation: EU AI Act, China’s state-led model, US antitrust and export controls.
Capital cycles: AI clusters costing $10–50bn may hit financing headwinds.
Public trust: Safety failures or backlash could slow deployment.
Regional Divergence
United States: Advantage in hyperscalers, shale energy, and capital depth. Strong growth in data infrastructure and frontier AI hubs (Bay Area, Austin).
Europe: Regulation-first (EU AI Act, sustainability mandates). Growth in data infrastructure capped by energy and planning constraints.
China: Pursues domestic chip scaling, centralised AI strategy, state-backed data infrastructure build-out. Implications for different demand patterns in industrial and logistics.
Middle East: Energy-rich states (Saudi, UAE, Qatar) investing in sovereign AI clusters; likely to become global destinations for hyperscale campuses.
Singapore: Illustrates capacity limits, restricting new data centres despite demand.
For CRE, this means opportunities and risks are uneven: location, regulation, and energy matter as much as demand.
Mapping to Real Estate Asset Classes
1. Data Centres & Energy Infrastructure
Compute is the new growth driver; data centres are its factories. If scaling persists, demand is exponential. If scaling slows, demand is still strong, but more efficiency-driven. Either way, land near power, cooling, and fibre is strategic. Expect competition from hyperscalers, sovereigns, and utilities.
2. Industrial & Logistics
AI-enabled supply chains, robotics, and predictive systems reshape demand. Expect bifurcation: generic warehouses vs high-tech hubs with energy and compute integration. Adaptive reuse into AI-ready facilities is a major opportunity.
3. Offices & R&D / Life Sciences
Office demand does not split neatly into routine vs frontier. More likely: gradual unbundling of tasks, hybrid AI-human roles, and new formats for orchestration. Frontier R&D and life sciences demand grows; routine-heavy tenants shrink. Offices become compute-rich collaboration hubs, not desk farms.
4. Retail
AI reshapes supply chains and consumer engagement. But inequality is the deeper driver: luxury and subsidised segments expand, mid-market weakens. CRE must prepare for divergence.
5. Residential
Housing demand persists. Absolute wages rise, but stagnation relative to compute-driven growth stresses affordability. Luxury remains buoyant, subsidised/social expands, mid-market shrinks. Policy (e.g. compute taxation) will heavily influence outcomes.
How to Handle the Change
For Individuals
Build AI fluency (workflows, orchestration, oversight).
Develop frontier skills (framing, synthesis, judgement).
Prepare for hybrid roles where tasks are constantly reallocated.
For CEOs / Firms
Treat compute as strategic: not a back-office cost, but a core input.
Focus on workflow orchestration: integrating AI into valuation, leasing, asset management.
Pursue ecosystem partnerships: with energy, data, and tech players.
For the CRE Industry
Adapt valuation and leasing standards to AI-driven occupier models.
Adjust sector weighting: overweight data infrastructure, grid-adjacent industrial, life sciences. Underweight routine office.
Engage regulators: compute taxation, energy allocation, and AI policy will shape demand as much as economics.
Conclusion: The Decade to Position
Fast change is underway - but multiple futures are possible. To 2030, the baseline suggests explosive compute demand and tangible AI capabilities. Beyond that, labour reallocates and growth decouples from wages, but humans still work and likely prosper in absolute terms.
For CRE, the imperatives are clear:
Data infrastructure, AI-linked industrial, and frontier offices are growth categories.
Routine-heavy offices and mid-market retail face structural headwinds.
Housing remains resilient, but affordability pressures grow.
Doing nothing is not an option. Those who lean in, building AI fluency, repositioning assets, rethinking strategies, will not only survive but thrive in an age where compute, not labour, drives growth.
OVER TO YOU
Does this resonate? How are you underwriting the risk of 'routine office' tenants shrinking their footprint?
That '1000X more compute' figure isn't abstract. It means a land rush for property near power substations. Is your team mapping these locations?
I specialise in helping firms build a strategic response to these horizons. If you're ready to move from thinking to acting, let's talk.
Agents, Agents, Agents
The low hanging fruit of generative AI. Waiting to be picked.
AI 'Agents' come in many forms, and OpenAI’s ChatGPT contains three that you can learn and leverage in no time at all - Custom GPTs, Projects and ‘Agent' mode.
So What Are AI ‘Agents’?
Let’s start with what AI ‘Agents’ are. They are simply intelligent systems that can be designed at different levels of complexity — from lightweight assistants to fully autonomous problem-solvers. They come in many forms, each suited to different purposes, whether guiding a single workflow, managing a project, or operating as a flexible digital teammate.
To be clear this is a sliding scale, all the way up to what purists mean when they talk about ‘Agentic AI’:
‘A true AI agent is an autonomous system that persistently pursues goals through iterative environmental sensing, decision-making, and action-taking, whilst adapting its strategies based on feedback and changing conditions.’
These exist today, in limited numbers, but most of the AI Agents we work with are not fully autonomous and are very much designed with a ‘human in the loop’.
Think of these three as rungs on a ladder: Custom GPTs for repeatable text, Projects for structured workflows, and Agents for proactive automation.
ChatGPT Custom GPTs
What are they?
They’re bespoke versions of ChatGPT tailored for a specific role, task, or style of work. They are configured with your own instructions, tone of voice, and reference material so outputs are consistent and repeatable. And you can embed templates, checklists, or frameworks relevant to your domain (e.g., investment memos, ESG plans, board packs) into them.
When to use them?
You use them for repeatable, text-driven tasks such as memos, reports, checklists or templates. They work best when you want a GPT that consistently “thinks like your team” without re-explaining context each time, and when you want to share a standardised tool with colleagues, so everyone produces outputs in the same style and structure.
In a nutshell, you use a Custom GPT when you want repeatable outputs, in a consistent style, that you can share with others.
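To make this concrete, here is a hypothetical example of the kind of instructions you might paste into the GPT builder; the role, house structure and knowledge files are invented for illustration.

```text
Name: IC Memo Drafter
Instructions:
- You draft investment committee memos for a UK commercial real estate fund.
- Always follow the house structure: Deal Summary, Market Context, Financials,
  Risks & Mitigants, ESG Considerations, Recommendation.
- Write in plain British English, avoid superlatives, and tie every figure
  back to the documents the user uploads.
- If a required input is missing, ask for it before drafting.
Knowledge: house memo template, style guide, two exemplar memos.
```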
Use Cases
Here are four use cases:
Investment Committee Support: draft polished IC memos in your firm’s preferred format.
Fund Reporting: produce NAV updates and ESG reports in a consistent structure.
Recruitment: create job specs, interview packs, and scoring templates with the right tone.
Heads of Terms Negotiation: draft clauses, flag risks, and ensure standardised outputs.
In the #GenerativeAIforRealEstatePeople course we have 20+ ‘TDH GPTs’ that do everything from providing career advice and acting as sustainability consultants or IC Committee advisors, to helping you negotiate leases.
Outside real estate, progressive companies use Custom GPTs throughout their businesses. Vaccine developer Moderna has over 3,000 among a workforce of 5,600. Nearly every workflow could benefit from a Custom GPT.
One of the TDH Custom GPTs will even help you work out where best to use them in your own business.
ChatGPT Projects
What are they?
They are workspaces inside ChatGPT designed for multi-step, data-driven, or ongoing workflows. They let you store files, instructions, and conversation history so you can return and build on work over time. And they support advanced data analysis (spreadsheets, models, scenario runs) alongside natural language prompting.
When to use them?
They are best used for complex workflows that need structured inputs and iterative runs (e.g., portfolio stress testing, budgeting, capex prioritisation), and when you need to upload and reuse data or documents (e.g., lease schedules, ESG data, financial models). They also suit work where you want to track progress across sessions, rather than just get one-off answers.
A major difference though is that Projects are not shareable - they are personal workspaces, unlike Custom GPTs which can be distributed across a team.
Use Cases
Here are four use cases:
Portfolio Stress Testing: run vacancy and interest rate scenarios with uploaded data, saving results for comparison over time.
Capex Prioritisation: rank ESG retrofits and fit-outs using criteria and data files, updating iteratively as assumptions change.
Budgeting & Forecasting: manage Opex/Capex scenarios, store models, and track forecasts across sessions.
Planned Preventive Maintenance (PPM): schedule, prioritise, and update maintenance tasks across sessions.
Within the course we have a Project that lets you see a building through the eyes of its occupiers. It surfaces the pain points that drive churn, disputes, or reputational risk, and generates practical AI/tech interventions to fix those problems.
You can push Projects pretty hard; for workflows that you repeat, are quite complex but follow certain patterns, and require updated datasets they can be incredibly useful.
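For a flavour of the ‘advanced data analysis’ a Project can run, here is a minimal sketch of a vacancy and interest-rate scenario pass over an uploaded rent roll. All figures, and the 80% NOI margin, are invented for illustration.

```python
scenarios = [
    {"name": "Base",     "vacancy": 0.05, "rate": 0.045},
    {"name": "Downside", "vacancy": 0.12, "rate": 0.060},
]
gross_rent = 2_400_000   # illustrative annual rent roll (GBP)
debt = 18_000_000        # illustrative outstanding loan (GBP)

for s in scenarios:
    noi = gross_rent * (1 - s["vacancy"]) * 0.80      # assumed 80% NOI margin
    interest = debt * s["rate"]
    print(f'{s["name"]}: NOI £{noi:,.0f}, interest £{interest:,.0f}, '
          f'cover {noi / interest:.2f}x')
```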
ChatGPT Agent
What are they?
They are AI assistants that can work continuously in the background, not just when prompted. They can monitor systems, fetch data, update trackers, and send alerts across different tools, and are designed for proactive workflows that go beyond “ask and answer.”
When to use them?
They are best used when a workflow needs ongoing monitoring (e.g., lease events, arrears, compliance deadlines), or when tasks require cross-tool coordination (e.g., pulling from a CRM, data room, spreadsheets, and messaging platforms). Overall, they suit situations where you want the AI to act without being asked each time, especially time-sensitive or repetitive processes where a missed step carries risk.
Note: Agents are still evolving - they often need careful setup and integration with your existing systems. They’re powerful, but not always “plug and play.”
Use Cases
Here are four use cases:
Lease Event Management: track renewals, break clauses, and re-gears, fetching market data and prompting timely action.
Arrears Management: monitor payments, flag arrears, and draft notices automatically.
Compliance (Safety, etc.): track fire, asbestos, and statutory deadlines, sending reminders and updating logs across systems.
Deal Pipeline Tracking: continuously monitor NDAs, bids, and due diligence statuses, reducing manual oversight.
Most of the above would take some setting up, and is perhaps more aspirational than practical today. Nevertheless, one should keep track of the capabilities of ‘Agent’ mode because it is developing fast. Within a year, expect agents that handle entire leasing workflows (scheduling viewings, updating deal trackers, and flagging risks automatically) with minimal human input.
That said, in Agent mode today you can send your ‘Agent’ off to visit websites, read news, do calculations and report back. You just have to try ideas out and see how they get on. It’ll be good practice for when you have dozens of these virtual helpers working for you 24/7.
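To show the shape of ‘ongoing monitoring’ (rather than how ChatGPT Agent is actually configured), here is a conceptual sketch of a lease-event watcher. The events, assets and field names are invented.

```python
from datetime import date, timedelta

lease_events = [  # in practice pulled from the lease management system
    {"asset": "1 Example Street", "event": "break clause", "due": date(2026, 3, 25)},
    {"asset": "Unit 7, Sample Park", "event": "renewal", "due": date(2026, 1, 10)},
]

def upcoming(events, horizon_days=90):
    """Return events falling due within the monitoring horizon."""
    cutoff = date.today() + timedelta(days=horizon_days)
    return [e for e in events if e["due"] <= cutoff]

for e in upcoming(lease_events):
    # An agent would draft the notice and nudge the asset manager at this point.
    print(f'Action needed: {e["event"]} at {e["asset"]} by {e["due"]}')
```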
Conclusion
These three ‘Agents’ are not widely used, but they should be. And no doubt will be over time. But for now, as I repeatedly stress, if you lean into these advanced uses of Generative AI you’ll be doing yourself an enormous favour. These are low-hanging superpowers - I’d be amazed if you weren’t amazed at what they are capable of if you give them a serious go.
OVER TO YOU
What workflows resonate with you? Where do you think you could use a Custom GPT, a Project or an Agent? There are probably dozens of use cases - just pick a few, and dive in!
This week, try building one Custom GPT that mirrors your team’s tone. Use it three times. If it doesn’t save you time, email me and tell me why.
Changing Assumptions
"When the facts change, I change my mind” - John Maynard Keynes
Real estate is heading towards an operating model where job functions run on prompts—and agents do the rest. The shift is coming faster than expected.
The Hypothesis
It is my belief that a (very) large percentage of workflows in real estate can be broken down into a series of tasks, and that these tasks can be completely, or nearly, automated by the application of ‘Prompt Packs’.
Prompts are Enough
Each ‘Prompt’ in the pack - they’d work sequentially through 3-7 steps - would contain the essence of that task. By which we mean they’d contain:
Inputs – the required data (rent rolls, EPCs, abstracts, comps).
Processing steps – filtering, benchmarking, compliance checks.
Outputs – tables, reports, approvals.
Examples – to make the flow transparent.
We looked at workflows across nine categories, such as Leasing & Occupier Management, Valuation & Investment, and Asset & Portfolio Strategy, and realised that almost all of them could be fitted into the I/P/O/E framework above.
It became clear why Morgan Stanley (after analysing tasks performed by 162 real estate investment trust and commercial real estate firms, with a combined $92 billion of labor costs and 525,000 employees) recently wrote that ‘AI can automate 37% of tasks in real estate, representing $34 billion in operating efficiencies.’ **
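For concreteness, here is a hypothetical example of a single step from such a pack, to show how the I/P/O/E elements read in practice (all details invented):

```text
Step 2 of 5: benchmark passing rents

Inputs: the rent roll extracted in Step 1; the attached comparables file.
Processing: match each unit to its three closest comparables by size and use
  class; flag any passing rent more than 10% adrift from the comparable median.
Outputs: a table (unit, passing rent, comp median, variance, flag) and a
  two-line summary of the outliers.
Example: "Unit 4: £32/sq ft vs comp median £27/sq ft (+18%) - FLAG."
```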
Two Breakthroughs This Year
This still holds true from when we devised this at the beginning of the year, but since then, two breakthroughs have emerged to push the whole idea forward.
Automation of automation (Master Prompts)
First, it may be possible to ‘automate the automation’. Since GPT-5 came out in early August we’ve seen how much better it is at multi-step problem solving, instruction-following for complex and evolving tasks, and invoking additional ‘tools’ as and when required. And with these new capabilities we’ve found it is possible to develop ‘Master Prompts’ that allow you to enter a workflow and the models work out the entire input/process/output/example framework and build the series of prompts required.
In effect a prompt can create ‘Prompt Packs’.
Now this is at an early stage but the technology has developed to such an extent that a lot of shortcuts to the future are now available.
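A paraphrased, illustrative example of what such a Master Prompt might say (a sketch, not a tested template):

```text
You are a workflow designer for a commercial real estate team. I will describe
a workflow in one paragraph. Break it into 3-7 sequential steps. For each step,
write a prompt specifying its Inputs (data required), Processing (checks and
transformations), Outputs (format of the deliverable) and one worked Example.
Ask clarifying questions before producing the pack.
```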
Practical Agents (Agentic AI)
The second breakthrough is that the long-forecast world of ‘Agentic AI’ is arriving fast. Whilst still somewhat brittle in practice, discrete software services that can be given instructions and then left to work out autonomously how to complete them are coming to pass. Anyone who has used ChatGPT Agent will have experienced the rather odd sensation of watching a virtual entity think and act its way through a problem whilst boxed in a computer within a computer.
This is opening up a whole new world of opportunity and whether more robust forms of Agents take 3, 6, 9 or 18 months to arrive, they will definitely be arriving. And so one can plan for them.
From Prompt Packs to Agents
Which means that in the near future we are very likely to see large swathes of the real estate industry become industrialised. Prompt Packs will naturally morph into collections of Agents.
Just as lean manufacturing, via ‘The Toyota Production System’, codified shop-floor know-how into standard work instructions, Agentic AI will be doing the same for CRE.
Now, ’The Toyota Production System’ is one of the most consequential management innovations of the 20th century — arguably as influential to industrial organisation as double-entry bookkeeping was to finance.
The fundamental point is that the concept of codifying tacit knowledge is hardly new in management thinking. So real estate should not be surprised that it appears to be finally reaching our industry.
There are already domain-specific AI companies offering a range of services dealing with high-value, document-heavy and repeatable real estate workflows. Where this ‘Prompt Pack/Agentic’ framework differs is that it applies to all the other day-to-day workflows that do not merit a VC-backed startup addressing them. The implementation of ‘Prompt Packs/Agents’ will be led by all of us, each creating our own swarm of virtual helpers to suit our particular needs.
As we’ve discussed before, the days of us being ‘Agent Bosses’ are near. We’ll create, monitor and curate these tools. We’ll industrialise ourselves!
So, between the high-end outsourced agent creators and our ‘build your own’ efforts, a huge amount of what we’ve been paid for as an industry in the past is about to be automated.
Critical Questions
Which leads to some critical questions:
1. Do we buy or build?
It has long been unfashionable within real estate to entertain industry players building their own technology. The mantra has been that it is always better to buy in technology from specialists, on the basis that ‘you don’t know what you’re doing, don’t have the talent, and can’t afford to do it properly’.
Today the tables are turned. Given the ease with which a lot of new tools can be developed, or utilised, every real estate company needs to build a level of technical competency, or at least literacy, in-house. You need to know what you’re doing, you need technical talent, and now YOU CAN afford it.
2. If we buy, what does that mean?
As we’ve seen ‘Prompt Packs’ and Agents are going to become easy enough to create and curate in-house, but for the heavy lifts you are going to need help from the AI services companies working on things such as Lease Extraction, Asset Performance Reporting and ESG Analysis.
These are high value, repeatable tasks that are complicated and nuanced - but they can be largely automated. Just not, most likely, by you.
Which means that, unless you are a large player who SHOULD be building this capability in-house, over time the value in this work will accrue to the AI service providers. If you are using someone else’s tools you are vulnerable to being commoditised. And most likely will be.
AI unbundles knowledge from jobs, and reduces the cost of intelligence. Value will move to those that enable this.
The strategic trade-off is clear:
Build = defensibility, talent, control point.
Buy = speed, commoditisation risk, margin erosion.
3. If the ‘machines’ are doing all this work, what are we humans supposed to do?
This is not nearly as hard, or as worrying, as often stated. You just have to be clear about relative competencies. Think of it like this:
AI provides:
Rapid processing of complex, multi-dimensional datasets
Identification of patterns humans might miss
Consistent analytical frameworks across large portfolios
Probabilistic insights to inform human decision-making
Humans provide:
Market context and nuanced interpretation (e.g. local knowledge, regulatory nuance)
Strategic judgment and risk assessment (e.g. portfolio capital allocation, risk appetite)
Stakeholder relationship management (e.g. tenant trust, investor alignment)
Creative problem-solving for novel situations (e.g. complex mixed-use repositioning, adaptive reuse)
Changing Assumptions
So you need to be paying huge attention to what we discussed in ‘AI Fluency is Not Enough’ - as AI removes existing constraints (e.g. intelligence) it creates others (e.g. data, coordination and trust). You need to be over-indexing on how the above will reshape the industry.
Where can we add value?
Where can we act as a ‘control point’?
How can we absorb customer risk and guarantee an outcome?
How can we change the story - what was expensive is now cheap, but you now need X, Y or Z.
And above all else, changing your assumptions about the future:
What constraints within our industry will go away, but what will endure?
What do we need to know that we did not know before, and how long will it take for us to acquire this new knowledge?
What coordination problems will be commoditised (for example which reports)?
What will be defensible - of those four ‘Human’ skills listed above which ones am I/We strongest at?
Should I be updating my five-year plans to two. Or one?
OVER TO YOU
Ask yourself:
Which of my workflows are most at risk of commoditisation?
Where can I act as a control point?
Which human skill is my differentiator?
What’s my planning horizon — five years, or one?
** Obstacles remain, of course, such as fragmented data, tacit knowledge, and organisational inertia. But the trajectory is clear: agents will chip away at each, and this rebundling will force CRE leaders to redefine where they add value.