AI: Widespread Adoption. Shallow Usage.
The 3rd Annual AI in Real Estate Survey reveals an industry that has access to AI - but not seriousness about AI.
In 2024, curiosity. In 2025, experimentation. In 2026? A gap. Between access and depth, between awareness and action, between what the industry says it believes and what it’s actually doing about it.
You can download the full report here.
The Headline
93% of respondents now have access to AI tools at work. That number is essentially settled.
But only 7% describe their organisation as fully integrated.
Two-thirds haven’t moved past basic chat. A third describe usage as entirely ad hoc - individuals doing their own thing, with no organisational strategy behind it.
The industry has the tools. What it doesn’t have is seriousness about the tools.
Everyone Has a Hammer. Almost Nobody Has a Blueprint.
The survey paints a picture of a sector that has passed the access test and is now failing the maturity test. The numbers tell the story clearly:
34% of organisations use AI on a purely ad hoc, individual-driven basis. Another 23% are still trialling. Only 34% have implemented AI in defined business areas, and a mere 7% are fully integrated.
Microsoft Copilot leads daily usage (49%), largely because it’s the only tool many organisations sanction. ChatGPT follows (37%), but with a big caveat: much of that usage is shadow AI - individuals using personal accounts because they find the approved tools less capable. One respondent put it bluntly: their company provides Copilot, but for non-sensitive work they find Gemini, Claude and ChatGPT give better answers.
This is significant. Corporate AI strategy in most CRE firms is being driven by procurement and IT security, not by capability or user experience. People are voting with their feet. And the gap between what organisations sanction and what individuals actually use creates compliance, confidentiality and consistency risks that most firms aren’t equipped to manage.
Two-Thirds of the Industry Is Still Using AI Like a Search Engine
When the survey asked which advanced capabilities people actually use, 69% said standard chat and transcription. That’s it. The foundational level.
Knowledge management? 43%. Custom instructions? 36%. Deep research? 36%. Procedural training and skills? 31%. Live collaboration tools? Just 16%.
The gap between using a chatbot for ad hoc queries and embedding AI into repeatable, governed workflows is the gap between having AI and using AI. Most of the industry is on the wrong side of that line.
The dominant use cases confirm this: transcription, summarisation, drafting. The boring stuff. And 87% of respondents say the boring stuff is exactly what they want AI to handle. Which is fine - except that the boring stuff is increasingly table stakes. Transcription and summarisation are being baked into standard software. They’re hygiene factors, not differentiators.
The firms pulling ahead are doing something different: building knowledge systems, creating structured prompting frameworks, developing custom workflows, deploying AI agents. One respondent described having six operational AI agents in business workflows with twelve more in the build phase. They’re an outlier - but it’s the direction of travel.
The Trust Gradient
The survey reveals a trust hierarchy. Respondents trust AI most for summarisation (75%), meeting transcription (66%), and content drafting (59%). Trust drops as stakes rise: negotiation strategy (30%), property valuation (16%). And 11% don't trust AI with any of the listed tasks.
This is troubling. Care is needed with these tools, but blanket dismissal suggests a surface-level understanding. Used well, they are enormously powerful right up the intellectual value chain: in the GDPval evaluation, ChatGPT 5.4 outperformed or matched human professionals on 83% of ‘knowledge work’ tasks. Yet the industry remains stuck in the mindset that AI is only for trivial work.
The Number That Should Keep You Up at Night
58% of respondents believe the time-based fee model is the most vulnerable part of CRE’s business model.
Read that again.
These are, in many cases, the same professionals who earn time-based fees. They are telling you - in a survey, anonymously, with no incentive to exaggerate - that the way they charge for their work is the element most at risk from AI.
AI collapses the time required for many tasks. What used to take a junior analyst three days now takes an afternoon. What used to take a partner two hours of document review takes ten minutes. The input - time - is compressing. And when clients notice (they will), they will reset expectations about what an hour of advisory time is worth.
The second most vulnerable element? The Information Edge (18%) - the proprietary market data that research-led firms have historically treated as a competitive moat. As AI gets better at synthesising public data, that moat is being drained.
Only 6% believe relationships will fully protect fee structures. They may be right in the short term. But it’s a position that gets harder to defend every quarter.
The Governance Deficit
30% of organisations have a comprehensive AI governance policy. 21% have something basic. But 16% have no policy and no plans to create one. Another 10% don’t even know whether a policy exists.
That means roughly a quarter of the industry is operating AI tools - handling sensitive commercial and financial data - with no governance framework and no apparent urgency to build one.
You can’t govern what people don’t understand. And you can’t build governance for tools that people are using in the shadows.
The shadow AI problem and the governance deficit are two sides of the same coin. Employees use personal AI accounts because the approved tools feel restrictive. Organisations lack governance because they haven’t invested in understanding what their people are actually doing with AI. Both problems trace back to the same root cause: insufficient training and strategic intent.
Training Is the Anchor on Everything
Cultural resistance and understanding gaps are the most viscerally felt barrier to adoption. As one respondent put it: “people know AI is important or powerful, but a lot do not know practically what this means.”
You can’t redesign workflows if people don’t understand what the tools can do. You can’t build governance if the people writing the policies have never spent serious time with the technology. You can’t adopt agentic AI if your team thinks ChatGPT is just a search engine with better manners.
Training isn’t a nice-to-have. It’s the prerequisite for everything else - governance, workflow redesign, cultural change, business model adaptation. Without it, firms are stuck in a loop: shallow adoption generating shallow results, reinforcing the perception that AI isn’t that transformative, justifying further underinvestment.
Jobs: Not Elimination but Redesign (and Fewer of Them)
Despite low serious adoption and a rather dismissive attitude towards AI (especially the belief that it is only for dreary work), it is striking that respondents expected every one of the nine CRE disciplines surveyed to see a decline in jobs.
Research (63% expect a decrease) and property management (61%) are seen as most exposed. Facilities management is seen as least exposed - but even there, over 40% expect a decrease in jobs.
It would be interesting to see the results if every respondent knew what the state-of-the-art models are actually capable of.
The qualitative responses were revealing. Several respondents emphasised that it’s less about headcount reduction and more about role transformation - fewer administrative tasks, fewer junior information-compiling duties, greater emphasis on judgement, stakeholder engagement and oversight.
One respondent offered a framing reflected across many industries: lower-rank roles like coordinators and analyst support get compressed, but demand for reviewers, approvers and humans-in-the-loop remains strong.
But… the assumption that senior or specialist roles are insulated is, I think, misplaced. As AI becomes more capable of analysis, drafting and knowledge retrieval, all roles shift. The exposure isn’t to the role - it’s to the person who doesn’t adapt their workflow.
Space: Reconfiguration, Not Collapse
On the question of AI’s impact on physical space, the majority view (55%) is reconfiguration: same amount of space, different use. 27% expect contraction. 13% see negligible impact this cycle. Only 5% expect expansion.
This aligns with the thesis I’ve been developing in recent newsletters. The question isn’t simply “will AI reduce headcount and therefore reduce space demand?” The answer depends on a cascade of strategic choices about how firms deploy productivity gains. The same efficiency assumptions can lead to radically different space outcomes depending on whether firms shrink, reinvest, or restructure.
The reconfiguration thesis - a shift from desks to collaboration and experience environments - suggests AI is seen as an accelerant of existing post-pandemic patterns rather than a wholly new force. That’s probably right for the next two to three years, but beyond that we’ll see far more direct impact on demand.
The Divergence
Here’s what I take away from three years of this survey.
The industry has moved from curiosity to experimentation to widespread but shallow usage. Adoption is high; maturity is not. And the gap between the firms that are serious about AI and the firms that merely have access to it is widening.
One respondent captured the stakes perfectly: “We are overestimating the impact of AI for the next three years - but it will have a devastating impact on those organisations that haven’t worked out how to get, store and use their IP in five years. It will be like bankruptcy - you go bankrupt slowly and then fast.”
The technology is accelerating exponentially. The industry is moving linearly. That mismatch has consequences.
A few firms are building AI agents, custom knowledge systems, and structured prompting frameworks. They are pulling away from the pack.
The rest? Widespread adoption, shallow usage. Lots of tools, not enough thinking.
Time to get serious.