The AI Labs Are Telling Three Different Stories
OpenAI’s new policy document is telling investors, regulators and workers three different stories about the future of AI. For commercial real estate, the contradictions between them matter more than any individual proposal.
OpenAI published ‘Industrial Policy for the Intelligence Age’ on 6th April. Most of the coverage has focused on the headline proposals: a Public Wealth Fund, four-day-week pilots, adaptive safety nets. All interesting. All missing the underlying point. The frontier AI labs are now telling three different audiences three different stories, and the stories cannot all be true. For those of us in commercial real estate, that contradiction matters more than any individual proposal in the document.
THREE AUDIENCES, THREE STORIES
To investors, the labs are telling a story of massive, durable value capture. The justification for near-trillion-dollar valuations, the hundreds of billions in compute commitments, the talent arms race: none of it works unless the labs themselves capture a huge share of the productivity gains they generate. Investors are being told, in the language of pitch decks rather than policy papers, that frontier AI is a winner-take-most market and that the winners will extract rents at a scale that makes today’s hyperscalers look modest.
To policymakers and the public, the labs are telling a story of broadly shared abundance. Lower costs for essential goods. Scientific breakthroughs reaching ordinary communities. Productivity dividends flowing to workers. A Public Wealth Fund so everyone participates in the upside. This is the OpenAI document. This is the story that makes the regulatory bargain palatable.
To workers and labour markets, the labs are telling a story of disruption-but-also-opportunity. Some jobs go, others emerge, the care economy absorbs the displaced, retraining bridges the gap. It will be hard, but it will be fine. The historical analogies — electricity, the combustion engine, mass production — get wheeled out to reassure.
These three stories cannot simultaneously be true. The arithmetic doesn’t work.
THE ARITHMETIC PROBLEM
If the labs capture the value, there isn’t enough left over for abundance or for new worker opportunities at the scale being promised. The total productivity pie has to be divided. The labs’ investor pitch requires them to take the biggest slice. Everything else is residual.
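To make the point concrete, here is a stylised back-of-the-envelope check (a Python sketch; every number in it is an assumption chosen for illustration, not a figure from the document). However you set the individual shares, the three stories only hold together if they sum to no more than the whole pie, and the shares each story implies do not.

```python
# Stylised sanity check on the three stories. Every figure is an
# illustrative assumption, not an estimate from the document or anywhere else.
total_pie = 1.0                        # the whole annual productivity gain, normalised

investor_story_lab_capture = 0.5       # rents large enough to justify the valuations
public_story_abundance = 0.4           # cheaper essentials, shared dividends, wealth fund payouts
worker_story_new_opportunity = 0.3     # wages flowing into the roles that absorb the displaced

promised = (investor_story_lab_capture
            + public_story_abundance
            + worker_story_new_opportunity)

print(f"Promised: {promised:.0%} of a {total_pie:.0%} pie")
# Prints "Promised: 120% of a 100% pie": the three audiences are
# collectively promised more than exists, whatever the individual shares.
```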
There’s a deeper problem with the historical analogies. Electricity, the combustion engine and mass production were all complementary to human labour over the long run. They made workers more valuable, not less. Electrification didn’t eliminate factory workers; it changed what they did and made them indispensable. The productivity gains were captured by labour because labour was still needed to operate the new capital. AI’s distinctive feature, the thing that excites investors, is precisely that it substitutes for cognitive labour rather than complementing it. That is the whole pitch. The gains accrue to whoever owns the substitute, not to the workers being substituted for. The labs know this. It’s why they’re proposing capital tax rebalancing and Public Wealth Funds in the first place.
The honest position the labs could take is this: AI will produce enormous abundance, modest aggregate growth, severe distributional consequences, and our business model depends on capturing a disproportionate share of the gains, which is why we’re proposing redistributive policies as the price of social licence. That would be coherent. It would also be unsayable to investors. So instead you get a document that gestures at all three stories simultaneously and hopes nobody does the arithmetic.
If you’re in commercial real estate, you need to do the arithmetic. Each of these stories implies a completely different future for your assets. So does the story none of them is telling.
THE STAIRCASE
Frontier AI capability is moving on a staircase. The leading edge is genuinely expensive and genuinely hard. A handful of labs (Google, OpenAI, Anthropic) sit at the top, burning capital at extraordinary rates to push the frontier forward. They will probably keep doing this, and the frontier itself will probably stay profitable. But only for the narrow slice of use cases that genuinely need bleeding-edge capability. Frontier research. The hardest reasoning problems. Genuinely novel domains. A specialty business, not a civilisation-funding cash cow.
Meanwhile, the cost of year-old capability is falling by roughly a factor of ten per year. GPT-5-class models, then today’s frontier-class models, then whatever comes next: each generation falls down the staircase, gets cheaper, gets smaller, runs locally on commodity hardware (the end-point Apple is banking on), and embeds itself into every productivity tool and business process across the economy. Open-source models like Kimi, DeepSeek and Qwen trail the frontier by six to nine months, which means capability close to the state of the art becomes nearly free within months of arriving. At some point in the not-distant future, a free model a year or two old will be more than powerful enough for most things most people will ever need.
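For a sense of how quickly that compounds, here is a minimal sketch using the ten-fold annual decline assumed above; the starting price is a made-up round number, not a quoted rate.

```python
# Illustrative staircase: what a fixed level of capability costs over time
# if inference prices fall roughly 10x per year. The $10 starting price is
# an assumption for illustration, not a quoted rate.
price_per_million_tokens = 10.0   # hypothetical launch price of frontier capability
annual_decline_factor = 10        # the "10x per year" assumption from the text

for year in range(5):
    cost = price_per_million_tokens / annual_decline_factor ** year
    print(f"Year {year}: ${cost:,.4f} per million tokens")
# By year three the same capability costs a thousandth of its launch price,
# which is the sense in which last year's frontier "falls down the staircase".
```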
This is the semiconductor pattern. Intel made fortunes at the leading edge while last year’s processor became a commodity. The leading edge stayed profitable because it stayed scarce. The commodity layer generated enormous economic value, but very little of it accrued to the chip-makers themselves. Most of it accrued to the firms using the chips: Apple, the cloud providers, the entire downstream economy. AI looks to be heading the same way, and heading there faster than the labs’ valuations assume.
A point worth being precise about. The frontier labs are real businesses with real products, real users, real revenue, and real strategic importance. On a net-net basis they are almost certainly a societal good. None of that is in question. The question is whether the valuations price in a winner-take-most outcome that the staircase pattern makes unlikely. Massive demand for AI is not the same thing as massive profit for the labs. You can have a transformative technology, deployed at vast scale, generating enormous economic value, in which the producers of the technology earn perfectly reasonable returns rather than monopoly rents. That is the staircase scenario, and it is the one the current valuations cannot survive.
WHO ACTUALLY LOSES
The mechanism for the loss is already being prepared. OpenAI is reportedly heading for an IPO at a valuation north of a trillion dollars. Anthropic is on a similar trajectory. The current shareholder base is a mixture of holders who will behave very differently when the exit arrives. Founders, employees and early venture capital are natural sellers at peak valuations; that is how the venture model works, and they will take their distributions. Strategic investors like Microsoft and Google are a different case: they are long-term holders with enormous, diversified businesses that will capture downstream value from the staircase whether or not the labs themselves deliver on their own valuations. They are effectively hedged, and therefore relatively relaxed about the outcome.
The unhedged position at IPO is the retail one. Index funds buy in because they track the market. Pension savers and ISA holders and 401(k) participants own those index funds. By the time it becomes clear that frontier rents have migrated downstream into the broader corporate economy, the venture capital will already have exited and the strategic investors will be shrugging it off because their cloud divisions are booming. The losers will be the people who own a FTSE or Dow Jones All-World tracker and never knew they were taking a frontier-AI bet at all.
What does this mean for the three stories? The labour disruption is just as real in this scenario as in any other. Substitution still happens. Cognitive workflows still get restructured. The displacement still arrives. But the actual money shows up somewhere completely different from where the labs are pointing. Not concentrated in a few San Francisco companies. Not vanishing into untaxable consumer surplus either. Spread across thousands of mid-cap firms across every sector that successfully restructures around AI-augmented operating models. Their margins go up. Their corporate tax bills go up. The fiscal base grows through entirely normal channels.
In this scenario, you don’t need a ‘robot tax’ of the kind Bill Gates proposed in 2017. You don’t need exotic new instruments to capture rents that aren’t there. You need normal corporate income taxation working at slightly higher effective rates against a broader and faster-growing base. The redistributive arithmetic works without anyone having to invent a new fiscal architecture.
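A stylised version of that arithmetic, again with every figure an assumption chosen purely for illustration rather than a forecast: a slightly higher effective rate on a modestly larger profit base raises serious revenue through entirely ordinary channels.

```python
# Stylised fiscal arithmetic for the staircase world. All figures are
# illustrative assumptions, not forecasts or sourced estimates.
corporate_profit_base = 3_000      # $bn of taxable corporate profit today (assumed)
ai_driven_base_growth = 0.10       # assumed uplift as firms restructure around AI
effective_rate_today = 0.22        # assumed current effective corporate rate
effective_rate_future = 0.24       # the "slightly higher" effective rate

revenue_today = corporate_profit_base * effective_rate_today
revenue_future = corporate_profit_base * (1 + ai_driven_base_growth) * effective_rate_future

print(f"Extra annual revenue through normal channels: ~${revenue_future - revenue_today:.0f}bn")
# A broader base taxed at a slightly higher rate, with no new fiscal instrument.
```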
Which makes the OpenAI document’s policy framework worth reading a second time. The document does not argue for the staircase world. It argues for the investor-pitch world, and it proposes policies that would fund the fiscal response to AI disruption by broadening capital taxation across the board rather than by taxing the labs specifically. Read it again with that in mind. The proposals are carefully pointed at the broad capital base: higher capital gains, higher corporate income tax, vaguely defined ‘measures on sustained AI-driven returns’. What is conspicuously absent is anything targeted at the labs themselves. No compute tax. No inference levy. No windfall tax on lab IPOs. No specific rates, no specific instruments, no specific thresholds. The vagueness is the tell. The document is signalling to investors that profits will be enormous, while simultaneously signalling to regulators that the fiscal response should fall on capital in general rather than on frontier AI in particular. Those two signals look contradictory but they’re not. They converge on exactly the same policy outcome: broad-base capital taxation, which at a group level costs the labs far less than a targeted rent tax would, whether the rents arrive or not.
REAL ESTATE PAYS TWICE
For commercial real estate, the labour disruption is real and you should plan for it regardless of how the fiscal argument resolves. Office demand keeps grinding lower. Bifurcation continues. Prime wins, secondary loses, the middle hollows out. The standard CRE story about AI is roughly right at the operational level, and you should not let the more comforting noises out of the industry trade press lull you into thinking otherwise.
The more interesting question is fiscal. Read the OpenAI document again with real estate specifically in mind. Suppose the investor pitch turns out to be right. Suppose frontier AI really does generate enormous, durable rents at a handful of labs, and governments face the labour-tax erosion the document warns about. What does OpenAI propose as the fiscal response? Broaden capital taxation. Higher capital gains at the top. Higher corporate income tax. Vaguely defined measures on ‘sustained AI-driven returns’.
Notice what that framework does when applied in practice. Governments looking to broaden capital taxation go first to the largest, most immobile, most politically vulnerable capital base they can find. Commercial real estate scores maximum points on all three. REIT exemptions, capital allowances, the lighter treatment of real estate gains, carried interest on fund structures: every one of these becomes a revenue target the moment the political conversation turns to ‘where do we find the money to replace eroded labour taxes’. The labs are quietly insulated because broad-base taxation applies to everyone equally. Real estate is specifically exposed because its existing tax position is the one that looks like a loophole to the median voter.
Which means the real estate industry has a counter-intuitive interest in the whole argument. If the labs’ investor pitch turns out to be right, CRE is a sitting duck. If the staircase turns out to be right, CRE is mostly fine. I think the staircase is the more likely outcome, with two caveats I want to be explicit about. First, it rests on the ‘AI as Normal Technology’ hypothesis being broadly correct on timing (the Princeton argument, from Arvind Narayanan and Sayash Kapoor, that AI diffuses slowly through organisations and institutions rather than arriving as an overnight shock), which I’d say is odds-on but not certain. Second, it rests on the gains ending up genuinely distributed across the corporate economy rather than concentrated in a handful of dominant firms, which is more uncertain still. Realistically we are probably looking at a ten-year transition rather than the three-year one the cheerleaders imply or the three-decade one the ‘AI is just another tool’ sceptics hope for. That still leaves the real estate industry with meaningful dislocation at the operational level. What it does not leave us with, in the staircase world, is a fiscal raid on real estate’s tax position to fund the state’s response to AI disruption.
The point of this piece is that the real estate industry should be actively arguing for the staircase world, and arguing specifically against the OpenAI framework that assumes the investor pitch is right and proposes to fund the fiscal response through broad-base capital taxation. The honest position is that AI should pay for its own disruption. If the labs generate the rents they promise their investors, those rents should be taxed directly - through targeted instruments on compute, inference, or lab equity events - rather than laundered into a generalised attack on capital that conveniently spares the companies causing the disruption while hitting the industries that had nothing to do with it. Bill Gates was right about this in 2017, and he’s more right about it now. The real estate industry should be the loudest voice making that argument, because under the OpenAI framework real estate pays twice: once through lower tenant demand as AI displaces knowledge workers, and again through higher capital taxes levied to fund the state’s response to the displacement it didn’t cause.
The labs have placed their bets, and the bets contradict each other. The real estate industry has the option of placing a more honest one: that AI should pay for AI. That argument is not being made. It should be.