Generative AI for Real Estate People, in 10 Steps - Part 2

Summit - One Vanderbilt - NYC - Antony Slumbers

Part 2 - 6-10

6. Preparedness & Skills

Let’s start at the organisational level.

It’s safe to say that an organisation or company that does not have a culture of innovation is going to struggle to get the most out of Generative AI. It is essential that there is a willingness to adopt new technologies, a willingness to challenge the status quo, and a desire to push for new opportunities with an acceptance of possible failure. You don’t need to go all in on ‘move fast and break things’ but you do need to be comfortable with change.

After that, the most important input will come from having your data in order. The richer, more granular and more digital your data, the better. So much of the power of Generative AI comes from its ability to ingest very large quantities of data and make sense of it. Clearly, if you don’t have much, that limits what is possible. That said, Generative AI works very well with unstructured data, such as emails, notes, correspondence and documentation, so it is more forgiving than other technologies. And of course, if you are working with a foundation model like ChatGPT you are already working with ‘all the world’s data’, so there is much you can still do. You just won’t be able to tailor outputs to your own precise requirements.

As has been repeated for decades, real estate needs more and better data. These new technologies just re-emphasise the point.

A bias towards continuous learning should already be an organisational imperative, but if it isn’t you need to start now. The speed at which these technologies are developing necessitates ongoing curiosity, retraining and re-skilling. You will always be behind the curve, but you must not let it move out of sight. So learning from all stakeholders, internal and external, is vital.

The last essential organisational level trait is to have in place systems to regularly evaluate the effectiveness of any initiatives and to rapidly course correct as necessary. Think ‘Build, Measure, Learn’ at all times.

At the individual level there are many steps to take to be prepared for a Generative AI world. Normally one would think these will differ dependent on whether you are an entry-level, mid-level or C-suite person, but in this case I think one should assume the best ideas, tactics and strategy ‘might’ come from anywhere. So, whilst it is clearly the job of the C-suite to lead, they need to be very fluid as to where they take their insights from.

With that in mind, here are some steps each of us should take:

Your organisation or company should be offering you training in these new technologies, but if it is not, do it yourself. There is a great deal available for free online and, offering more structure and help, many paid-for courses. Knowledge of new tools is a serious competitive advantage that will repay just about any investment. Whoever you are.

Use the tools. Play around with the free versions of ChatGPT or MidJourney or any number of other offerings. Just getting a feel of what is possible makes a big difference.

Note down ideas about possible use cases. This is especially useful if done by front line employees who often see the actualité of how a company operates, and what it looks like to customers, in a way that more senior managers don’t.

Normally one would task mid-level managers with identifying areas where AI can create the most value and developing a roadmap for implementation, but frankly they might not be in the best place to do so. Let ideas flourish from top to bottom.

They are, though, most likely to have to handle the almost inevitable change management programmes that these new technologies are going to necessitate. More on this later, but again this might need a more collaborative approach than is often the case.

C-Suite executives will be responsible for signing off on key strategic goals, roadmaps and overall vision, but before they do that their real value will come from enabling the above and making available the investment and resources to do the job well. This is not an area for top down diktats but it sure does need top down air cover, commitment and drive.

That covers general preparedness but what specific skills do companies require in their employees to effectively harness and leverage the capabilities of Generative AI?

First off, the more people who understand the fundamentals of Generative AI the better. They don’t need to be practitioners, but they do need to understand what it is, broadly speaking how it works, its strengths and weaknesses, and how to effectively utilise its capabilities.

You will need access to people with skills in data analysis and manipulation, which are essential for working with Generative AI models. So work out where you will find people who can preprocess and clean data, as well as extract meaningful insights from the datasets used to train Generative AI models.
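To make the data-cleaning point concrete, here is a minimal sketch in plain Python of the kind of preprocessing involved. The field names (address, annual_rent) and the records are invented examples, not taken from any real system.

```python
def clean_records(raw_records):
    """Normalise a list of dicts: strip whitespace, coerce rents to
    floats, and drop rows with no usable rent figure."""
    cleaned = []
    for row in raw_records:
        address = (row.get("address") or "").strip()
        # Rent may arrive as "120,000", 95000, or junk like "n/a".
        rent_raw = str(row.get("annual_rent") or "").replace(",", "").strip()
        try:
            rent = float(rent_raw)
        except ValueError:
            continue  # no usable rent figure: drop the row
        cleaned.append({"address": address, "annual_rent": rent})
    return cleaned

raw = [
    {"address": "  1 Main St ", "annual_rent": "120,000"},
    {"address": "2 High St", "annual_rent": "n/a"},   # dropped
    {"address": "3 Park Ln", "annual_rent": 95000},
]
print(clean_records(raw))
```

Trivial as it looks, this is where much of the real work lies: deciding what counts as unusable, and what a normalised record should look like, before any model ever sees the data.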

Likewise with programming and software development: strong programming skills are necessary to implement and deploy Generative AI models. Someone needs to be proficient in languages commonly used in AI, such as Python, and have experience with relevant libraries and frameworks like TensorFlow or PyTorch.

Employees with a creative mindset and design thinking abilities can leverage Generative AI to produce innovative solutions, designs, or content that adds value to the business. So seek them out!

It goes without saying that domain expertise is super valuable when working with Generative AI, especially if working on fine-tuned systems for specific industries. But what you really want is people with domain expertise who can communicate extremely well with AI technologists. Being able to clearly explain requirements, inputs and desired outputs is a super skill. Look hard for these people.

Everyone needs to have a notion of ethical behaviour. More on this later.

Likewise everyone needs to develop their abilities with regard to abstract and critical thinking. Employees with strong critical thinking and problem-solving skills can effectively identify opportunities where Generative AI can add value. They can analyse complex challenges, define clear problem statements, and apply Generative AI techniques to develop innovative solutions. These skills can be taught and really are going to become critical skills in companies leveraging increasingly powerful technologies.

So we all need to be prepared and up-skill ourselves appropriately. Hopefully the above gives you some ideas as to where to start.


7. Risks, Concerns and Ethical Implications

Let’s start with the most common ‘risk’ mentioned around Generative AI, and that is its tendency to ‘hallucinate’.

Hallucinations refer to instances where AI models generate content that may sound plausible but is factually incorrect or completely fabricated. This can lead to misinformation and further exacerbate the challenges of trust, reliability, and authenticity in the digital landscape.

Hallucinations in Generative AI can occur due to various reasons, including limitations in the training data, biases in the learning process, or the inherent complexity of modelling real-world phenomena accurately. When relying on AI-generated content without proper verification, there is a risk of spreading false information or reinforcing misleading narratives.

Whilst the degree of hallucination has reduced (very much so between GPT-3 and GPT-4), it is a reality that is unlikely to fully go away. In much the same way as humans are not inherently trustworthy, neither are machines.

So we need to be very careful about where we use Generative AI and when we need to maintain a ‘human in the loop’. Roughly speaking, one should not use fully automated, human-less systems where the implications of being wrong are great or severe, such as in medical diagnosis. If giving a definitive answer is vital then these are not the systems to rely on. One could easily open up levels of risk in one part of an organisation by using them in another.

It’s worth thinking about creating a taxonomy of when to have a human in the loop and when not to. This will develop over time, as new techniques and tools are developed to mitigate hallucinations, but the principle remains valid. Risk management of Generative AI is an important area.
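Such a taxonomy can start very simply. The sketch below is purely illustrative: the task names and risk levels are invented examples, and the point is only the shape of the policy, with the safe default of requiring review for anything not yet classified.

```python
# Illustrative human-in-the-loop taxonomy: task names and risk
# levels are invented examples, not a production policy.
RISK_LEVELS = {
    "draft_marketing_copy": "low",
    "summarise_lease": "medium",
    "legal_advice": "high",
    "medical_diagnosis": "high",
}

def requires_human_review(task, risk_levels=RISK_LEVELS):
    """Return True when the consequences of an AI error are severe
    enough that a human must check the output before use."""
    # Unclassified tasks default to "high": review until proven safe.
    return risk_levels.get(task, "high") in ("medium", "high")

print(requires_human_review("draft_marketing_copy"))  # low risk: False
print(requires_human_review("legal_advice"))          # high risk: True
```

The useful part is not the code but the conversation it forces: someone has to decide, task by task, what the cost of being wrong actually is.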

A rather hilarious recent example of the need to take care involved a lawyer in the US who used ChatGPT to help put together a defence for a client in a court case. The AI duly produced a solid argument and provided multiple authoritative-sounding references to existing case law. The only problem was that, whilst they looked perfect, they were entirely fictitious. Safe to say the judge was less than impressed.

The point to understand is that ChatGPT is trying to work out, statistically, what the next word should be, and whilst what it produces might look and sound right, that does not mean it is right. Looking right isn’t being right. A Large Language Model is not a database; it’s a prediction engine. It is trying to predict what a good answer to the question being asked would look like.
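A toy example makes the ‘prediction engine, not database’ point tangible. The sketch below counts which word follows which in a tiny invented corpus and then picks the most frequent successor. Real LLMs use neural networks over billions of tokens, but the principle is the same: choose the statistically likely continuation, not the verified fact.

```python
from collections import Counter, defaultdict

# Tiny invented corpus, purely for illustration.
corpus = "the lease was signed and the lease was renewed".split()

# Count which word follows which.
successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word` in the corpus."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("lease"))  # -> 'was'
```

Notice that the model never ‘knows’ anything about leases; it only knows that ‘was’ tends to follow ‘lease’ in what it has seen. Scale that up and you get fluent, plausible text, which is exactly why plausibility and truth come apart.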

This is where the abstract and critical thinking mentioned above comes into play.

Beyond hallucinations here are eight other key risks and ethical considerations to take into account when using Generative AI:


  1. Misinformation and Deepfakes: Generative AI can be used to create realistic and deceptive content, including deepfakes, which are manipulated media that appear authentic. This raises concerns about the potential for spreading misinformation, damaging reputations, and undermining trust in media and information sources.

  2. Intellectual Property Infringement: Generative AI models can generate content that resembles existing copyrighted material, raising concerns about intellectual property infringement. This includes the unauthorised creation of artwork, music, or written content that closely mimics the style or characteristics of original works.

  3. Bias and Fairness: Generative AI models can unintentionally perpetuate biases present in training data. If the training data is biased, the generated content may reflect those biases, leading to unfair or discriminatory outcomes. Addressing bias and ensuring fairness in the training data and model outputs is crucial for responsible use of Generative AI.

  4. Privacy and Data Protection: Generative AI models often require access to large datasets, which may contain sensitive or private information. Ensuring robust data protection measures and respecting privacy rights is essential to prevent unauthorised access or misuse of personal data during the training and deployment of Generative AI models.

  5. Consent and Permission: Generating content using Generative AI may involve using data or personal information without explicit consent or proper permissions. Respecting legal and ethical requirements for obtaining consent and permissions is crucial to avoid infringing on individuals' rights and privacy.

  6. Security and Malicious Use: Generative AI models can also be misused for malicious purposes, such as generating synthetic identities, creating deceptive content for fraud, or producing convincing phishing materials.

  7. Psychological and Social Impact: The realistic nature of content generated by Generative AI can have psychological and social implications. For example, generated content that promotes hate speech, violence, or harmful ideologies can contribute to negative societal impacts and pose risks to public safety and well-being.

  8. Accountability and Attribution: Generative AI raises challenges in determining accountability and attributing generated content to its creators. This can have legal and ethical implications, especially in cases where generated content is used for illegal activities, defamation, or other harmful purposes.


The overall point here is not to reduce the appeal of working with Generative AI, or to make it seem a legal minefield that might best be avoided, but to emphasise that there is a serious side to all of this that warrants serious consideration. Awareness alone goes a long way towards mitigating these risks. Again, use your abstract and critical thinking skills whenever working with these tools. And keep abreast of legislation and ‘accepted norms’; these are likely to increase and change considerably in the years ahead.


8. Task adoption - timetable to automation

I’m going to keep this section brief and rely on data put out by McKinsey in a report (The Economic Potential of Generative AI: The Next Productivity Frontier) published in June 2023.

To quote the report:

‘Based on developments in generative AI, technology performance is now expected to match median human performance and reach top quartile human performance earlier than previously estimated across a wide range of capabilities (Exhibit 6 - see below). For example, MGI previously identified 2027 as the earliest year when median human performance for natural-language understanding might be achieved in technology, but in this new analysis, the corresponding point is 2023.’


As a result of these reassessments of technology capabilities due to generative AI, the total percentage of hours that could theoretically be automated by integrating technologies that exist today has increased from about 50 percent to 60–70 percent. The technical potential curve is quite steep because of the acceleration in generative AI’s natural-language capabilities (Exhibit 7 - see below).

Interestingly, the range of times between the early and late scenarios has compressed compared with the expert assessments in 2017, reflecting a greater confidence that higher levels of technological capabilities will arrive by certain time periods. 

They make three other very pertinent points:

First, that based on historical findings, technologies take eight to 27 years from commercial availability to reach a plateau in adoption.

Second, that automation adoption is likely to be faster in developed economies, where higher wages will make it economically feasible sooner.

And third, that technologies could be adopted much more rapidly in an individual organisation.

Put it all together and the implications are that we are set for rapid change!

9. Impact on Professionals & Jobs

‘AI is the revenge of the working class: now it's the middle class's turn to fear for their jobs.’

Pedro Domingos, Professor of Computer Science, University of Washington

This is a pithy way of putting it, but it is undoubtedly true that Generative AI has upended the consensus as to whom AI is going to have the most impact on. Because of its ability to be creative and to handle high-cognitive tasks, ‘Generative AI is likely to have the biggest impact on knowledge work, particularly activities involving decision making and collaboration, which previously had the lowest potential for automation.’ So says McKinsey, pretty much reflecting the new consensus, because ‘its capabilities are fundamentally engineered to do cognitive tasks’.

‘Our estimate of the technical potential to automate the application of expertise jumped 34 percentage points, while the potential to automate management and develop talent increased from 16 percent in 2017 to 49 percent in 2023. 

As a result, many of the work activities that involve communication, supervision, documentation, and interacting with people in general have the potential to be automated by generative AI, accelerating the transformation of work in occupations such as education and technology, for which automation potential was previously expected to emerge later’

McKinsey, in their report, are actually quite bullish for jobs and productivity. But that depends on the time freed up by tasks lost to Generative AI being repurposed to new tasks that are at least as productive as those being substituted.

Which strikes me as a big ask!

At least in the short term. As we know historically, whenever a new technology has displaced jobs, they have ended up being more than replaced by new jobs that only exist because of that technology. But we also know that there can be a very long gap between the two. During the Industrial Revolution this was known as the ‘Engels Pause’: a period, lasting decades, when living standards for the working class stagnated or declined despite increases in productivity.

How are knowledge workers going to create new high paying, cognitive work when ‘the machines’ can now do high end cognitive work?

The economists Daron Acemoglu and Simon Johnson, in their recent book ‘Power and Progress’, make the point that new technology does not necessarily lead to higher living standards; when it does, it is a consequence of societal norms and beliefs that ‘will’ that to be the case. Whether AI does so is going to come down to whether ‘we’ are happy to go along with a structural situation where the returns from technology mostly go to Capital rather than Labour.

As Mr Domingos suggests, the middle classes were not so bothered when major changes impacted the working classes, but maybe now they are in the firing line things might be different.

Either way this is too big a topic for here but two thoughts are worth bearing in mind:

First, when AI reduces the cost of writing software to almost zero, we might well see a vast increase in new software that enables us, rapidly and widely, to create many new and better, faster, cheaper products and services. When the cost of intelligence trends towards zero we ‘might’ unleash all manner of new opportunities.

And secondly, with lawyers in mind, when AI enables everyone to sue everyone, everywhere, all the time, the sheer scale of new work might keep them all gainfully employed for a long time to come.

Truth is, we don’t know what might come to pass, and many of the consequences will be down to actions ‘we’ take, but nevertheless it is certain that a lot of change is coming our way.

10. Impact on real estate / cities

For this I am going to refer you to my five-part series of articles on ‘Four Great Real Estate Challenges’, and my long article on ‘Cities, AI and the Metaverse? Risks, Opportunities, Actions’.

Generative AI will have an enormous impact on real estate / cities as one, and a very important one, of the forces currently at play. Between the two articles I cover most of what you need to know in some depth.

Conclusion

This is long enough for now so the 'Conclusion' is to follow:)
