Generative AI for Real Estate People, in 10 Steps

Summit - One Vanderbilt, New York City - Antony Slumbers

Part 1 - Steps 1 to 5

The fundamental point to understand about Generative AI is that it provides each of us with an infinite army of virtual interns, at our beck and call, day or night, anywhere. It affords us the luxury of having not just ‘access to all the world’s information’ (a la the original aim of Google) but endless help in searching, parsing, analysing, synthesising and learning from it.

We have been graced with ‘superpowers’.

Our only task is developing our ability to leverage this. When confronted with knowledge about ‘everything’ we need to up our skills in handling the cognitive load. In the future the valuable human (in commercial terms) will be the one who can ‘conduct’ technologies to produce extraordinary outputs. Anyone will be able to produce ordinary; it will be the extraordinary that matters.

To get there we need to build some solid foundations. Starting with understanding where all these capabilities have come from, what they are, what they foretell and the ups and downs of using them. We need to know what’s what.

1. History

AI has of course been around for a very long time. The term “artificial intelligence” itself harks back to August the 31st, 1955, when it was coined in a proposal for a “2 month, 10 man study of artificial intelligence” to be held the following year at Dartmouth College. A summer study group was all that would be needed to crack it.

Of course that did not come to pass, but some half century later things really did start to kick off. A combination of more data, greater compute power and ever more refined algorithms led to some remarkable progress. In 2006 Geoffrey Hinton of the University of Toronto released new research showing how deep neural networks, trained with the ‘backpropagation’ technique he had helped pioneer, could be made vastly more powerful, and useful, in the fields of speech and object recognition.

From here on these areas improved rapidly and the ability of computers to understand text and images at human levels was reached, roughly speaking, by 2015.

In 2017 a team of researchers at Google released a paper entitled ‘Attention Is All You Need’, which introduced the notion of ‘self-attention’. This is the mechanism at the heart of a deep learning model called a ‘Transformer’, which processes data (like text or images) in parallel rather than sequentially, allowing for faster and more contextual understanding and making it particularly effective in tasks like language translation and text summarisation. Notably, it also means a system can ingest vastly more data than was previously possible, which is why the resulting ‘Foundation Models’ are called Large Language Models.
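
As an illustrative (and heavily simplified) sketch of what ‘self-attention’ does, the short Python snippet below computes scaled dot-product attention over a toy four-token sequence using numpy. The numbers and matrices are invented for illustration; a real Transformer learns these values during training and runs many such ‘attention heads’ across billions of parameters.

import numpy as np

# Toy example: a "sentence" of 4 tokens, each represented by a 3-dimensional vector.
# Real models use thousands of dimensions and learn these vectors during training.
tokens = np.array([
    [1.0, 0.0, 1.0],   # token 1
    [0.0, 1.0, 0.0],   # token 2
    [1.0, 1.0, 0.0],   # token 3
    [0.0, 0.0, 1.0],   # token 4
])

# In a Transformer, learned weight matrices turn each token into a
# query, a key and a value. Here random matrices stand in for them.
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.normal(size=(3, 3)) for _ in range(3))
Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v

# Scaled dot-product attention: every token "looks at" every other token
# at once (in parallel), and weights their values by relevance.
scores = Q @ K.T / np.sqrt(K.shape[1])
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax
attended = weights @ V

print(weights.round(2))   # how much each token attends to each other token
print(attended.round(2))  # context-aware representation of each token

The point to notice is that every token attends to every other token in a single parallel step, which is what lets these models ingest and learn from such enormous amounts of data.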

And this is the basis for GPTs, Generative Pre-trained Transformers, and the technology behind OpenAI’s ChatGPT.

GPT-1 was released in 2018, and just a year later came GPT-2, which had increased in scale by a factor of 10.

GPT-3 followed in 2020 with a yet greater jump in scale. Whilst GPT-2 had 1.5 billion ‘parameters’ (the adjustable values a model learns during training), GPT-3 had an incredible 175 billion.

Until November 2022 interacting with the GPT-3 model was something only advanced technologists could do. This was very much a tool for geeks only, and even then only invited geeks.

That all changed on November 30th, when OpenAI launched ChatGPT, which put an easy-to-use front end onto the system that meant it was usable by anyone who could type a question into a text box. And that kicked off an explosion of interest. Within just two months ChatGPT hit 100 million users, far and away the fastest adoption of a consumer technology to that point.

Barely four months after this ChatGPT was upgraded to run on the hugely more powerful GPT-4, and by the end of March 2023 the ability to add ‘Plugins’ was also rolled out. The new version was also multi-modal, which meant it could process images as well as text, though for now this is not available via ChatGPT.

Which gets us to where we are now: where, for $20 a month, anyone can corral their army of interns to help them in all manner of ways.

Key Difference between AI and Generative AI

As discussed above, within the realm of AI we had already got to human levels of understanding of text or images by 2015 or so. Since then these computational skills have just got better. The accuracy and speed at which a machine can ‘read’ and understand text, or identify the contents of an image or video, have increased exponentially.

But traditional AI only ‘understands’. The key to Generative AI is in the name; this AI can generate new text and images (and music). Both AI and Generative AI are trained by looking for patterns in large corpuses of data, but only Generative AI is able to ‘generate’ new material.
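
To make that distinction concrete, here is a deliberately tiny Python sketch (the ‘corpus’ is a single invented sentence): the first function merely recognises a pattern in text, which is the ‘understanding’ side, while the second learns word-to-word patterns from the corpus and samples from them to generate brand-new text. Real systems use billions of learned parameters rather than a lookup table, but the principle is the same.

import random
from collections import defaultdict

corpus = (
    "the office market is changing fast and the office will change with it "
    "the best buildings will win and the rest will struggle to compete"
)
words = corpus.split()

# "Understanding" (non generative): recognise a pattern in text.
def mentions_offices(text: str) -> bool:
    return "office" in text.lower()

print(mentions_offices("The office sector faces headwinds."))  # True

# "Generating": learn which word tends to follow which (a toy language model),
# then sample from those learned patterns to produce brand-new text.
follows = defaultdict(list)
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

random.seed(1)
word = "the"
generated = [word]
for _ in range(12):
    word = random.choice(follows.get(word, words))
    generated.append(word)

print(" ".join(generated))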

And that is why this is such a big deal. For however impressed we may be with the capabilities of these current systems, they are of course the worst version we will ever use. Now there is no guarantee that computing power will keep growing at its historical rate (perhaps progress will slow down), but it looks like exponential growth is likely to persist for quite some time. So one should consider the consequences of a hundred times greater computing power in 10 years’ time, ten thousand times in 20 years and a million times in 30 years.

Most likely we’ll see the content these systems are trained on grow from being solely text and image based, to including video and sound. And that adds vastly to the corpus of knowledge.

Effectively one can assume that when using ChatGPT it has read everything. Certainly everything in English. Now imagine it has also seen everything, and heard everything.

2. Capabilities

We need to be clear about the different capabilities of non-generative and generative AI. They can be used in tandem but generally they will be performing different functions.

So non-generative AI, which is what we have seen most used in business to date, has these key capabilities:

Machine Learning: algorithms that use statistics to find patterns in massive amounts of data, anything from numbers to words to images or other digital information. Once trained, this tech underpins what we could call the ‘Prediction Machines’.

Natural Language Processing: enabling machines to read and understand human language.

Computer Vision: the ability to understand the contents of an image or a video.

Robotic Process Automation: automating repetitive tasks, such as data entry, transaction processing, or even more complex activities like auditing and compliance reporting.

Whilst Generative AI has these key capabilities:

Text Generation: Writing essays, reports, presentations, creating poetry, generating scripts for movies, or even simulating a chat conversation. It uses patterns and structures it learns from the input data to create entirely new sentences and paragraphs.

Text Understanding: Beyond generating text, Generative AI is able to classify, edit, summarise and answer questions about text based content.

Image Generation: Creating entirely new images or altering existing ones. For example it can generate human-like faces, design graphic elements, or create scenes for a video game.

Music and Sound Generation: Composing new music or sounds based on the patterns and structures it learns from its training data.

Predictive Modelling: Generating simulations or predicting outcomes based on large amounts of data. Critically it can be used to generate a full set of plausible data or outcomes, instead of making a single-point prediction like traditional predictive models. In essence, it involves creating new data samples that resemble your training data.
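
To make that last point concrete, here is a minimal, illustrative Python sketch with invented numbers: a traditional predictive model collapses everything into a single figure, while a generative approach produces a whole distribution of plausible outcomes, so you can see the spread and the risks rather than just one number.

import numpy as np

rng = np.random.default_rng(42)

# Toy "model" of next year's rental growth, in percent.
# A traditional predictive model gives one number:
point_prediction = 3.2
print("Single-point prediction:", point_prediction, "%")

# A generative approach instead produces a whole set of plausible outcomes
# that resemble the patterns seen in the training data. Here we fake that
# with a simple distribution; a real model would learn it from data.
plausible_outcomes = rng.normal(loc=3.2, scale=1.1, size=10_000)

print("Median outcome:", round(float(np.median(plausible_outcomes)), 1), "%")
print("10th-90th percentile range:",
      np.percentile(plausible_outcomes, [10, 90]).round(1))
print("Chance growth is negative:",
      round(float((plausible_outcomes < 0).mean() * 100), 1), "%")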

These are different capabilities but often they will be used in combination. For example, in the field of autonomous vehicles, computer vision (non generative AI) is used to interpret the surrounding environment, while Generative AI could be used to simulate potential scenarios for training the self-driving system.

But for our purposes, we are going to concentrate now on use cases for Generative AI.

3. Ecosystems and Customisation

Before that, though, it is important to understand how ecosystems of suppliers are developing rapidly to allow intense customisation to be built on top of Foundation Models like GPT-4 (and others such as Google’s BERT and Meta’s LLaMA), and much of this is developing within the Open Source community.

There are two key reasons for this Cambrian explosion: 

First, businesses need to be able to interact with these large language models using their own data, but they cannot do so via public systems, as that exposes their data to the outside world.

And secondly, whilst the likes of GPT-4 have ingested everything on the public internet they do not, of course, have access to proprietary data. So businesses need a way to bring the full power of these models to bear on their own, private, data.
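
One common mechanism emerging to meet both needs is what is often called ‘retrieval augmented generation’: keep your documents in your own store, retrieve only the few passages relevant to a given question, and pass just those to the model as context. The Python below is a deliberately simplified sketch of the pattern; embed() and call_llm() are hypothetical placeholders for whichever embedding model and language model a business chooses to use.

import numpy as np

# --- Hypothetical placeholders ---------------------------------------------
def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model that turns text into a vector."""
    rng = np.random.default_rng(sum(text.encode()))
    return rng.normal(size=64)

def call_llm(prompt: str) -> str:
    """Stand-in for a call to whichever large language model you use."""
    return f"[model answer based on a prompt of {len(prompt)} characters]"
# ----------------------------------------------------------------------------

# 1. Index your private documents once (leases, reports, surveys, etc.)
documents = [
    "Lease for Unit 4: five year term, break clause at year three.",
    "2022 energy audit: HVAC accounts for 46% of building consumption.",
    "Tenant satisfaction survey: lobby and lift wait times scored lowest.",
]
doc_vectors = np.array([embed(d) for d in documents])

# 2. At question time, retrieve only the most relevant passage.
question = "When can the tenant in Unit 4 break the lease?"
q_vec = embed(question)
scores = doc_vectors @ q_vec / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec))
top = documents[int(np.argmax(scores))]

# 3. Send just that context, plus the question, to the model.
prompt = f"Answer using only this context:\n{top}\n\nQuestion: {question}"
print(call_llm(prompt))

The design point is that the proprietary documents never leave your control wholesale; only the narrow context needed for each question is shared with the model, under whatever privacy terms you have agreed with the provider.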

In response, AI companies are emerging, and open source software is being built, to answer these needs. Every major company is likely to train these systems on their own data, but also, at an industry level, it is highly likely that domain-specific models will emerge that are fine-tuned and built atop industry-specific data.

It is probably inevitable that we will see Legal and Real Estate specific models becoming available in the near future. One trained on every legal case imaginable and the other on every piece of documentation attached to real estate assets.

These will be immensely powerful.

4. Applications - Generic & Legal

So let’s look at some applications of Generative AI now.

There are five areas where Generative AI will be most applicable. These are Marketing and Sales, Customer Service, Operations, Software Development and Research & Development.

So:

Marketing and Sales: creating personalised content for campaigns, generating product descriptions, predicting customer behaviour, SEO optimisation, designing new product concepts based on consumer trends and preferences, lead generation and nurturing.

Customer Service: self service systems, dispute resolution, call centre operator prompting and support, issue summarising. Helping the support team better support customers.

Operations: dramatically improving internal knowledge management. Making available what is needed, whenever and in a frictionless manner. Making tacit knowledge explicit.

Software Development: software developers are already using Generative AI to assist with coding, particularly through GitHub Copilot, which ‘turns natural language prompts into coding suggestions across dozens of languages.’

Research & Development: with access to ‘all the world’s information’ Generative AI is an extraordinarily powerful tool for R&D. It can be used to uncover information, debate tactics and strategy, develop RFPs, research competitors, develop business models, virtually design products, or create simulations. Teach others or teach yourself.

All of these capabilities are generic and cross industry, but it’s not hard to see how they could apply in specific domains.

Thinking of the Legal industry, Generative AI could be used for:

Document Automation: Automating the creation of legal documents such as contracts, deeds, or wills, saving time and reducing errors.

Legal Research Assistance: Generating summaries of long, complex legal texts, aiding lawyers in understanding and digesting these documents.

Dispute Resolution and Predictions: Simulating a variety of scenarios based on case facts and past judgments to predict possible outcomes, assisting lawyers in devising effective strategies.

Risk Assessment: Predicting potential legal risks of different business strategies, providing valuable guidance for companies.

Personalised Client Interaction: Using chatbots to interact with clients, answering routine legal questions and gathering necessary information.

IP Creation and Protection: Assisting in creating and protecting intellectual property by generating patent applications or identifying potential IP infringements.

Case Strategy Development: Helping lawyers build more effective strategies for upcoming cases by learning from past court cases and their outcomes.

Legal Analytics: Generating insights from large amounts of data, such as trends in case law, judges’ decisions, or the success rate of different types of legal arguments, helping lawyers make data-driven decisions.

E-Discovery: Sorting through and organising large volumes of electronic documents quickly and efficiently, identifying the most relevant materials for a case.

Training and Education: Creating realistic case scenarios for students to practice on, helping them gain practical experience.

Legal Coding: Assisting in legal coding or tagging, where AI can help automate the process of assigning legal taxonomy tags to documents.

And on and on. A Generative AI system fine-tuned on the entire corpus of legal knowledge will be awesomely powerful. And surely coming soon.

5. Applications - Commercial Real Estate 

Similarly there are multiple domain specific uses for Generative AI in Commercial Real Estate. Here are a few:

Property Design and Development: Producing multiple building designs based on predefined parameters, helping to streamline the design process.

Performance Prediction: Predicting how a particular building design might perform in terms of environmental efficiency or user experience.

Predictive Modelling and Risk Assessment: Generating a range of potential market scenarios by analysing market trends, economic indicators, and demographic data, assisting in making informed property investment and development decisions.

Location Analysis: Identifying locations that may become 'hotspots' in the future, or warning about areas that might see a decline in property values.

Leasing Strategy Optimisation: Simulating the effects of different commercial leasing strategies, helping to optimise revenue.

Property Management: Improving efficiency in managing properties by predicting maintenance needs or optimising resource utilisation.

Tenant Screening: Predicting tenant reliability based on historical rental data and tenant information, helping property managers make informed decisions.

Space Utilisation: Generating optimal layouts for office or retail spaces, maximising utilisation and enhancing user experience.

Market Demand Forecasting: Simulating various scenarios to predict future demand for different types of commercial spaces in various locations.

Energy Efficiency: Modelling and predicting a building's energy usage, helping to design more sustainable and efficient properties.

Property Valuation: Generating accurate property valuations by analysing historical sales data and market trends, aiding in investment decisions.

Real Estate Marketing: Generating personalised marketing content for different segments of clients, improving engagement and conversion rates.

You might be wondering how Generative AI can achieve some or many of the above. So let’s take one example and break down how it could actually work.

‘Performance Prediction: Predicting how a particular building design might perform in terms of environmental efficiency or user experience.’

What would be the process for this?

Well, Generative AI can be trained on datasets that include a variety of building designs, each paired with its historical performance data in terms of energy use, internal climate control effectiveness, occupant satisfaction, and other metrics that indicate environmental efficiency and user experience. These metrics could be derived from actual utility data, building management systems, or occupant surveys from existing buildings.

Once the model is trained on this data, it essentially understands the relationship between building designs and their performance. For example, it might learn that buildings with certain window placements or orientations perform better in terms of natural lighting and energy efficiency. Or it could learn that open-plan designs are rated better by occupants for collaboration in a workspace.

When a new building design is proposed, the Generative AI model can then analyse it and predict its performance. It would generate a range of possible outcomes based on what it has learned from the training data. It could predict how energy-efficient the building will be, or how well it will meet the needs of its users.
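
As a rough illustration of that process, the sketch below (in Python, with entirely invented numbers) fits a very simple relationship between two design features and measured energy use in past buildings, then generates a range of plausible outcomes for a new design rather than a single figure. A real system would use far richer data, many more features and a far more sophisticated model.

import numpy as np

rng = np.random.default_rng(7)

# Historical buildings: [window-to-wall ratio, floor area in 1,000 m2]
# paired with measured energy use intensity (kWh per m2 per year).
designs = np.array([
    [0.30, 12.0],
    [0.45, 8.0],
    [0.60, 20.0],
    [0.35, 15.0],
    [0.55, 10.0],
])
energy_use = np.array([95.0, 110.0, 140.0, 100.0, 125.0])

# Fit a simple linear relationship between design features and performance.
X = np.column_stack([np.ones(len(designs)), designs])
coeffs, *_ = np.linalg.lstsq(X, energy_use, rcond=None)
residual_std = np.std(energy_use - X @ coeffs)

# A new proposed design: 50% glazing, 14,000 m2.
new_design = np.array([1.0, 0.50, 14.0])
central_estimate = float(new_design @ coeffs)

# Generate a range of plausible outcomes, not just one number.
simulated = central_estimate + rng.normal(0, residual_std, size=5_000)
print("Central estimate:", round(central_estimate, 1), "kWh/m2/yr")
print("Likely range (10th-90th percentile):",
      np.percentile(simulated, [10, 90]).round(1))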

Of course none of this is possible without the detailed, granular data being available, and it’s not giving you THE answer, but rather helping you make better decisions and more informed judgements.

It’s also critical to think about how all of this technology is used within your business. The temptation will be to do a bit here and a bit there, but really, even if you start small, you need to try to think through how one use case might interact with, or impact on, another. You need to be thinking in systems not silos. The ultimate benefit of these technologies will surface when the entire operating model of a business is designed around them, with each part talking to the others and exposing and consuming data from across the enterprise.

Of course none of this will be easy, especially as this is a fast developing area of technology. But whoever does crack it will be mighty competitive.
