The Pyramid Has Already Broken

Six months ago this was a hypothesis. Now it is a firm with 270-plus partners and no juniors.

In November 2025 I published ‘Human + Machine Organisational Architecture’. The argument was that capable AI would break the talent pyramid that knowledge-intensive firms have run on for a century, by destroying the economics of the junior tier without replacing the developmental function it served. I described a delayed catastrophe: feast in years 0–3, famine from year eight onward, when seniors retire and there is no internal bench to replace them. The piece was a hypothesis. As of this month, May 2026, it is visible in the operating model of a fast-growing US law firm, and the macro evidence is catching up fast. The time for treating this as a future risk has run out.

EXECUTIVE SUMMARY

Pierson Ferdinand is showing what the new model looks like: 270-plus partners, no US associates, and AI doing much of the work juniors once performed.

The firm-level logic is rational. The system-level logic is dangerous. If every firm cuts the junior tier and relies on AI plus laterals, the lateral market eventually runs out of laterals. The profession enjoys a short productivity feast, then discovers it has stopped manufacturing senior judgement.

The answer is not nostalgia for the old pyramid. It is a new Human + Machine architecture: AI for routine execution, humans deliberately trained in phronesis, and firms designed around the scarce capabilities the post-commodity economy will still reward.

But that cannot be solved by individual firms alone. Capability development is becoming a collective-action problem. Firms that train juniors bear the cost; firms that cut juniors can buy the benefits later. So the next stage of AI adoption requires a new social contract: professional bodies, training systems, reporting standards, and possibly tax or levy structures that reward the deliberate production of human capability rather than the short-term extraction of AI productivity.

The pyramid has already broken. The question now is whether we build a better architecture in its place, or strip-mine the human-capability base on which the next economy will depend.

THE THESIS, RECAPPED

Six months ago, in ‘Human + Machine Organisational Architecture’, I argued that the traditional pyramid in knowledge-intensive firms had two functions doing different work. The economic function: cheap junior labour subsidising expensive senior expertise. The developmental function: juniors learning by doing over eight to twelve years until they themselves became seniors. Capable AI obliterates the first function. A junior analyst at £50–70K is now dramatically more expensive than the AI agent that can do the same data extraction, drafting, and modelling for under £10K a year. The obvious optimisation - cut the juniors - destroys the developmental function. Result: feast for three years, famine ten years out.

The proposed alternative was a deliberate three-layer architecture: AI handles routine execution; humans focus on judgement at every level from day one; and the system itself - workflows, captured expertise, simulation libraries - becomes a unit of competitive advantage in its own right. The piece closed with an analogy. The traditional firm was a clockmaker’s workshop where apprentices polished gears for years before being trusted with assembly. The Human + Machine firm should be a flight simulator: AI handles the routine mechanics, the apprentices practise complex landings under close guidance from the start, and mastery is reached in half the time.

That was the argument. Six months later, it needs sharpening, and broadening. Here is why.

WHAT SIX MONTHS HAS DONE

In early November 2025, Claude Code did not exist in the form it does now. Opus 4.5 was on the horizon. Agentic AI - systems that take action, not merely produce text - was a research demo more than an operational reality. Today all of that is shipping in production. The category of ‘routine cognitive work’ has expanded faster than any reasonable forecast had it. First-draft documents, comps analysis, scenario modelling, code generation, contract review: AI is at parity with mid-level professionals across a widening band of bounded, reviewable cognitive tasks, and improving on a quarterly cadence.

That changes the timeline. My original framing had years 0–3 as productivity surge, 4–7 as hidden erosion, 8–12 as capability crisis. With this acceleration, the compression is significant: firms making structural decisions today will see the consequences by 2032, not 2037. And the pipeline-collapse mechanism is no longer just a logical deduction; it is showing up in the data. Recent work by Brynjolfsson, Chandar, and Chen (‘Canaries in the Coal Mine?’, Stanford Digital Economy Lab, 2025) documents a roughly 13% relative employment decline for workers aged 22–25 in AI-exposed occupations, while employment for more experienced workers in the same occupations held steady. The pattern is unambiguous: firms are using fewer, more experienced workers. The junior tier is being thinned at exactly the rate the original argument predicted.

And then there is Pierson Ferdinand, which makes the case concrete in a way no statistic can. Founded in January 2024 with no physical offices and no US associates, it has grown from 130 partners at launch to over 270 across 26 markets in just under two years. It uses Harvey AI to do much of the work that associates would do at a traditional firm. Asked why, co-chairman Michael Pierson said the firm could deliver legal practice without ‘having to train the associates on our clients’ dime’.

This is the cleanest articulation I have seen of the structural argument. The clients are no longer subsidising the apprenticeship pipeline because the firm has decided not to run one. The economic logic is impeccable. The developmental logic - what happens when these partners retire and there is no one inside the firm who has spent ten years learning how to actually practise law - is left to the next generation to solve.

A legal recruiter quoted in the same piece worried that junior lawyers might struggle to develop professionally without an office setting. He was identifying a real problem. What he missed is that, for Pierson Ferdinand, it is not their problem: they have decided, deliberately, that other firms will handle development and that they will hire mid-level laterals as needed. They have a ‘junior partner’ tier - but it starts at year five, meaning someone else trained the lawyer for the first half-decade.

This is the buy-not-build strategy applied to human capability. It works as long as enough other firms are still building. The moment they stop, and Pierson Ferdinand’s success will accelerate that moment, the strategy collapses. We will return to that.

THE BIGGER FRAME

The original piece established the supply-side case: how firms preserve and develop human capability under AI pressure. That case stands. What I want to add now is the demand-side case, which sits underneath it and changes the purpose of the exercise entirely.

This is where the argument stops being merely defensive. If AI commoditises routine cognitive production, then the premium shifts toward the work where human presence, trust, taste, care and consequence still matter. The economist Alex Imas makes this argument through what he calls the relational sector: the parts of the economy where the human element is inseparable from the value. It is also the economic foundation of my #HumanIsTheNewLuxury thesis.

In compressed form: as AI drives down the cost of commodity cognitive work, real incomes rise. As real incomes rise, people do not simply buy more of the same; they shift toward different categories of value - hospitality, care, education, therapy, craftsmanship, curated experience and trust-mediated services. His argument, backed by the structural-change literature and by behavioural research on mimetic preferences, is that as AI automates the commodity economy, this is where employment and expenditure migrate. Not because anyone designs it that way, but because human preferences are non-homothetic: rich people want different things, and what they want most is precisely what automation cannot fully provide.

If you have followed any of my writing on #HumanIsTheNewLuxury and #SpaceAsAService, this will sound familiar. The CRE-specific version of the relational-sector argument is exactly what I have been writing about for years. The buildings and places that thrive will be the ones that deliver human intensity, experience premium, trust and care, identity and status, and durable community gravity. The asset classes built around these properties - hospitality, members’ clubs, curated workplace, healthcare, experiential retail, premium hospitality-led residential - are the relational sector translated into bricks and mortar.

Putting the Human + Machine architecture next to the relational-sector argument changes the purpose of the exercise. You are developing human capability because that is where the next decade’s economic value is going to live. Pipeline preservation is the by-product, not the goal. The juniors you are training are being positioned for a market that values them.

This connects supply and demand into a single argument: build people who can deliver what AI cannot, in places designed around what AI does not replace.

PHRONESIS

The original piece used the word ‘judgement’ a lot. I want to swap it out for a better one: ‘phronesis’. It is Aristotle’s term for practical wisdom: the kind of capability that cannot be taught from a manual, that lives in pattern-matching against accumulated experience, that is built case by case, mistake by mistake, consequence by consequence. The doctor who departs from protocol because something feels wrong. The investor who senses a deal start to break in a way the spreadsheet does not yet show. The negotiator who reads the moment to push or concede.

I am borrowing the term from a recent article by Harvey Lewis (‘From Hierarchy to Judgement’, LinkedIn, April 2026), which is worth your time. Lewis’s argument is that hierarchy in knowledge-intensive organisations did more than route information: it was the institutional technology for producing phronesis. Apprenticeship in medicine, the Inns of Court, military command, the trading floor - all of these were ways of putting capable people into supervised contact with consequential decisions, repeatedly, until the pattern library was built.

The cognitive science backs this up. Gary Klein’s research on expert decision-making - military commanders, ICU nurses, chess grandmasters, firefighters - shows that experts do not compare options exhaustively. They pattern-match against a library of remembered cases, run a quick mental simulation of whether the first plausible option will work, and act. Phronesis is a case library that gets built over time, accumulated through real exposure to particulars under conditions where mistakes have visible consequences. There is no shortcut to building it.

This matters because it tells us what we are actually trying to develop, and what we lose if we do not. If your apprentice spends two years validating AI outputs without ever bearing real consequences, without being on the wrong side of a failed deal, without having to defend a recommendation that turned out to be wrong, they will not develop phronesis. They will develop AI-supervision skills. The aviation evidence is sobering: pilots who spend most of their time supervising autopilots get materially worse at manual flight, by margins large enough to raise risk significantly. Nicholas Carr’s The Glass Cage covers this in detail. The skills you do not use, you lose.

The implication for the architecture is direct. Junior development must include consequential decision-making from day one. AI supervision and quality-checking matter, but they are auxiliary skills. The flight-simulator analogy still holds, but only if the simulator includes scenarios where bad decisions produce visibly bad outcomes that the apprentice has to live with.

THE THREE-LAYER ARCHITECTURE, SHARPENED

The architecture stands. Three layers: an Execution Engine where AI handles systematisable routine work; a Phronesis Development layer where humans are placed into dense, supervised, consequential decision environments; and a System Stewardship layer where humans design AI workflows, externalise expertise and continuously improve the system. What changes in v2.0 is the framing of Layer 2.

Layer 2 is where phronesis is deliberately cultivated. The original framing of ‘humans focus on non-routine work’ is correct as far as it goes, but it under-specifies the design. Cultivating phronesis requires three properties the original piece did not quite name:

  • Artificial Density: Apprentices in a traditional firm see perhaps twenty serious deals in three years. A well-designed simulation library combined with structured deal exposure can put them through two hundred. Higher case-volume per unit of time compensates for lower consequence-weight per case. This is the flight-simulator element done seriously: a deliberately denser environment than the natural workflow ever produced.

  • Supervised Real Stakes: Density alone produces pattern-matchers who have never been wrong. Real-stakes work - actual deals, real clients, real consequences - under close supervision is what builds phronesis rather than knowledge. The mentor’s job is to intervene before consequences become terminal, while still letting the apprentice be wrong, find out, and feel it.

  • Deliberate Friction: Some decisions in your firm are slow on purpose. The slowness is load-bearing: it creates space for challenge, dissent, second opinions, and the formation of the case library that becomes phronesis. Automate those decisions and you remove the development substrate, even if the organisation tells itself oversight remains. The discipline of v2.0 is deciding which decisions stay slow on purpose, and protecting them from optimisation pressure.

The other two layers are largely as before. Layer 1 (The Execution Engine) should be transparent, showing reasoning to preserve learning opportunities. Layer 3 (System Stewardship) becomes more important as the firm matures, because the AI workflows themselves require curation and continuous improvement. Competitive advantage will accrue through creation; it will last through curation.

THE OBJECTION I’VE TAKEN SERIOUSLY

The strongest objection to v1.0 came implicitly through the aviation literature and explicitly through Lewis’s piece: not all human capability can be effectively cultivated in the presence of automation that is almost always right. The experience of being wrong, finding out, and bearing the consequence is the substrate. If AI handles the cases where being wrong is detectable, what is left are the cases where being wrong is undetectable until catastrophic. That is exactly the wrong domain in which to develop phronesis.

There is also what Lewis calls the ‘Mythos effect’: the human tendency for bounded competence (‘this model is better than me on this’) to slide irrationally into general deference (‘this model is better than me’). The mechanism is psychological, and it kicks in long before any actual general superiority is established. Once your team starts treating AI output as a settled conclusion rather than a draft, the development substrate is gone, regardless of what the org chart says.

These are real challenges to v2.0; the architecture has to confront them rather than wave them away. The three responses above - density, real-stakes supervised practice, deliberate slowness - are the answer, but they require active design and constant maintenance. The default path of any organisation is for the slow decisions to get faster, the supervision to get thinner, and the deference to get more general. Resisting that requires explicit institutional commitment.

The honest claim about the 4–6 year timeline I proposed for senior capability is this: it is a hypothesis. Traditional development took 8–12 years partly because the bottleneck was the economics of running a junior tier, not the inherent difficulty of the learning. Compress the development through density, real-stakes mentorship, and good simulation, and the timeline can plausibly halve. Whether it does is an empirical question, and the firms running this experiment over the next decade will answer it.

BEYOND CRE

I write about commercial real estate, but everything in this piece applies, with local variation, to every profession and every form of knowledge work. Law (Pierson Ferdinand), accounting, consulting, banking, surveying, architecture, engineering, medicine: structurally similar pyramids, structurally similar phronesis-development requirements, structurally similar substitution pressure at the junior tier. The CRE-specific cut - investment, asset management, development, brokerage, property management - is one application of a general framework.

If you are reading this from a different profession, the question to ask is the same. What is the phronesis your seniors actually possess that the firm depends on? Where does it currently come from? And what happens to that source under AI substitution? If you cannot answer those three questions concretely, you are running on accumulated capital that nothing is replenishing.

THE SOCIAL CONTRACT

This is where firm-level strategy hits its limit. Even if individual firms read this argument and act on it, the aggregate dynamic is a textbook prisoner’s dilemma. The firms that invest in capability development bear the cost. The firms that cut juniors and rely on AI plus mid-level laterals can buy the benefits later. Pierson Ferdinand’s strategy works precisely because other firms are still developing the lawyers it may eventually hire. If enough firms adopt the same model, the lateral market does not become more efficient. It starts to empty.

That is why this cannot be treated purely as an organisational-design problem. It is also a social-contract problem. Capability development has always been partly subsidised by clients, firms, professions and the wider economy. AI makes that subsidy visible by giving firms a way to stop paying it. Left to itself, the market will let AI strip-mine the labour market the way unregulated extraction has stripped natural resources - efficiently, profitably, and with the bill arriving a decade later.

The interesting fixes are mostly private. Two ideas worth taking seriously: teaching-firm models, where clients knowingly retain a ‘junior academy’ at a discount in explicit exchange for training capacity, with the lower productivity acknowledged in the fee structure rather than smuggled into the hourly rate; and modern guilds - cross-firm consortia that pool the cost of intensive simulation environments, share mentorship infrastructure, and set membership standards that signal something real to clients, peers and capital. Both have working precedents. The Investment Property Forum, INREV, ULI and the Better Buildings Partnership show that voluntary, peer-set, industry-led bodies can shape CRE practice more effectively than regulators ever have. What is missing is one tuned specifically to the capability question.

The state’s role is to tilt the economics rather than to inspect outputs. The lever is to make training the more profitable choice, not the more virtuous one: a reshaped Apprenticeship Levy that returns more than firms put in when they train juniors under genuine supervision; enhanced tax relief for capability-development spending that meets defined criteria; capability reporting alongside ESG so that boards and capital allocators can see who is investing in the next generation and who is harvesting the last. None of this is regulation in the inspection-and-punishment sense. All of it works through price. Whether nudges of this kind are strong enough to close the gradient against firms that capture very large margin gains by avoiding training entirely is a real question; if they are not, harder instruments will eventually come into view, but they should be the third move, not the first.

The policy architecture, and the business models, belong in a separate piece, and I will write one. The principle is simple enough: if AI allows firms to extract productivity from a professional system without replenishing the human capability on which that system depends, the market will not self-correct in time. The framework is firm-level. The fix cannot be only firm-level.

THE CHOICE

Pierson Ferdinand has made one. They have decided, openly, that they will not train the next generation, and that someone else will. The choice is rational at the firm level and corrosive at the system level. Every firm that follows them is making the same choice, including, importantly, every firm doing it by default rather than by design. If you are quietly cutting your graduate intake, slowing your trainee programme, or relying on AI to do work that juniors used to do without thinking about what comes next, you are echoing Pierson Ferdinand. You have simply not named the choice you are making.

The Human + Machine architecture is the alternative: an offensive positioning for an economy in which human capability, properly developed, properly deployed, properly stewarded, is going to be the scarce input that commands the premium. You build the people. You design the system around them. You take the harder route because the easier one ends in a place no one wants to be.

The pyramid has already broken. What you build in its place is, for now, still a choice.
