Extraction or expansion
In the last couple of weeks, I've sat with dozens of senior executives across a wide range of industries, helping them use Claude Code to build in ten minutes something their firm would once have handed to a team for weeks. Their first reaction is excitement. What I want to talk about is their second: cost saving.
That instinct, to go straight to the economics of team size rather than the excitement of capability, tells you where the conversation has moved. Eight weeks ago, one senior person couldn't practically replace a team. Today they can get most of the way there. That shift happened in weeks, not months.
A media executive now does the work of fifteen people. A fashion CEO we're working with proved the point concretely: five AI-generated designs were proposed to a major retailer and four went into production. An afternoon replaced three to four weeks of outsourced design work. Not a pilot. Not a demo. Products on shelves.
This week Anthropic published research that puts the gap between capability and reality into a single image.
The blue area shows the share of tasks in each occupation that language models could theoretically perform. The red area shows what people are actually doing with them. Computer and maths occupations: 94% theoretical coverage, 33% actual. In almost every category, the red is a sliver of the blue. We're still early.
The Anthropic data also shows where the displacement is entering. Not through unemployment, which hasn't risen systematically among exposed workers. Through the front door. Hiring of workers aged 22 to 25 in AI-exposed occupations has already dropped by 14%. Most companies aren't firing people. They're just not replacing them.
A longtime AI optimist I spoke to this week described a new feeling: a deep, dark undercurrent of discomfort. He's hiring a graduate and recognises it as charity, not necessity. "I do not need his labour in any way at all." No single leader is wrong to automate. But when everyone does it simultaneously, the apprenticeship pipeline that produced tomorrow's senior people disappears.
The paradox won't resolve itself. The human value proposition in knowledge work is narrowing towards judgment and taste. Everything else is becoming automatable. But judgment is hard to define, impossible to train in a classroom, and has historically developed through years of doing the grunt work that AI now handles. If juniors never do the work, how do they develop the judgment that makes seniors valuable?
Which means the macro answer can't just be that we all "run leaner." I've been calling this extraction versus expansion. Every leader deploying AI faces the choice. You can use these tools to extract cost from what you already do, or to expand what your organisation is capable of. Jack Dorsey at Block chose extraction. The market rewarded it instantly. But Ethan Mollick has argued that this is exactly the moment for leaders to model the alternative: to be public about using AI to expand access, to grow capability, to do things that weren't possible before. The loudest stories right now are about shrinking. The organisations that will matter in five years are the ones expanding.
The answer to the pipeline problem has to be deliberate. Pair a senior person with a junior one and flip the usual direction. The junior builds what the senior envisions. Wisdom flows down, capability flows up. This is the old apprenticeship model rebuilt for an AI age, except the knowledge transfer goes both ways. The senior person doesn't need to learn the tool. They need to direct someone who can use it. And the junior gets something no training programme provides: exposure to how experienced people actually think about problems.
I'm doing this myself. I'm hiring a gap-year student for twelve months. Reporting to me. Not because I need the labour. I don't. But because I want to invest in a young person and watch them grow. Before AI, those two motives were bundled together: you hired juniors because you needed their output, and their development was a byproduct. Now the output need has weakened. So the investment has to become the point.
The question nobody has answered is what happens to the pipeline. The leaders who answer it deliberately, rather than letting it dissolve by default, are the ones I'd bet on.
Three things worth knowing
1. The software market is pricing in the collapse.
Since October, software stocks have fallen roughly 30% while the broader technology index has stayed essentially flat. Salesforce, Adobe, ServiceNow: each down 25 to 30% since last autumn. The market isn't reacting to bad earnings. It's pricing in a structural shift. This week a professor I know built a fully functional membership system in three hours for a non-profit that had been quoted $5,000 a year for commercial software. The CTO of a major corporation told me he's considering switching from Salesforce to a simpler, AI-native competitor. The pattern is the same: the old model of paying for a hundred features in order to use twelve is starting to crack. Andreessen Horowitz argues code was never the moat: distribution, network effects and switching costs are. But switching costs dissolve when AI can extract your data and rebuild the features you actually use.
2. Goldman Sachs can't find a macro productivity effect. But the micro gains are 30%.
Goldman titled their latest earnings analysis "AI-nxiety." A record 70% of S&P 500 management teams discussed AI on quarterly calls. Only 1% quantified its impact on earnings. At the economy-wide level, Goldman found no meaningful relationship between AI adoption and productivity. But where firms have actually measured it, the median reported gain is 30%, concentrated right now in customer support and software development. Everywhere else: nothing measurable yet. The gains are real but hyper-localised. The question isn't whether AI works. It's whether your organisation has done the work to capture it. (Fortune)
3. AI hasn't just automated legal work. It's retroactively reclassified it.
A lawyer who built his practice around language models reports that a well-instructed general-purpose model outperforms the expensive, narrowly trained legal AI products that have raised hundreds of millions in venture funding. AI is good at tireless issue-spotting, finding contradictions, fixing errors, and producing a structured first draft for human review. It is not good at fine-tuned business judgment, relationship sensitivity, or getting from 85% to 100% where every word and comma matters. But here's the uncomfortable part. Tasks that were billed at premium hourly rates for decades (formatting, precedent research, copy-pasting between documents) have been revealed as procedural, not cognitive. The professional mystique that allowed them to be charged as expertise has been stripped away. AI is acting as a truth serum for knowledge work: forcing an honest reckoning about which tasks were genuinely skilled and which were merely time-consuming and opaque.
Try this
Run AI and a human on the same task, then focus on the disagreements.
A manager this week ran the same research brief through both AI and a human team. The overlap was 78%. But the value wasn't in the overlap. It was at the edges: the surprising findings that only one method surfaced. Where human and machine agreed, the team moved fast with confidence. Where they disagreed, they'd found the questions worth investigating. The delta between AI output and human output is where insight lives. Try it on your next research task, competitive scan, or document review. Don't jump straight into using AI to replace human work. Run both, then spend your time on the gaps.
Know where your org sits on the AI tools landscape.
I've put together a short page mapping what I think of as Generation 1 versus Generation 2 AI tools, and why the distinction matters for how you invest in your people. The short version: many organisations are still stuck on constrained, default tools, and most have no one working with agentic, frontier Generation 2 tools. I see two strategies working in parallel. For the many: move less-engaged people from free, constrained tools to competent use of good general-purpose applications. For the best: accelerate your top people with agentic, frontier tools and let them build entire workflows that deliver outsized impact. See the full explainer.
What readers said
Last week's piece on the hundred small things prompted readers to push the argument further. A leader at a professional services firm identified the structural problem most organisations miss: he needs a structure to learn, not just time to tinker. An events industry leader connected the junior roles question to the UK's chronic underinvestment in training and asked what happens to the pipeline when there are no juniors left to grow (see above). I also had one proper unsubscribe: someone whose world is so far ahead of ours that our content isn't relevant. His communities are debating polyphasic sleep schedules to optimise autonomous agent management and how to deploy thirty vibecoded projects built by non-engineering teams. His core points are in the full reader letters below.
Read the full letters and links readers shared below.
P.S. How I make this email
Several asked about the process behind Saturday AI thoughts. There's now a page showing exactly how it works: what's human, what's AI, and where the two overlap. More honest than most companies' AI transparency efforts :) See for yourself.