David's weekly emails, bits that didn't fit, and letters from readers

Each week: the email itself, the interesting things that didn't quite fit, and what readers said.



Are apprentices an endangered species?

Two Kellogg professors published the most rigorous academic framing yet of the "AI hollows out entry-level work" problem. Their mathematical model identifies two competing effects: the "floor effect" (AI automates the tasks apprentices performed as payment for training) and the "ceiling effect" (AI amplifies what experienced apprentices can accomplish). Apprenticeship survives only when the ceiling effect exceeds standalone AI by a factor greater than Euler's number.
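The survival condition as summarised above can be sketched in a few lines. This is illustrative only: the paper's actual functional forms aren't reproduced here, and the numbers are invented.

```python
import math

def apprenticeship_survives(ceiling: float, standalone_ai: float) -> bool:
    """Survival condition as summarised above: the ceiling effect
    (AI-amplified output of an experienced apprentice) must exceed
    standalone AI output by a factor greater than Euler's number e."""
    return ceiling > math.e * standalone_ai

print(apprenticeship_survives(3.0, 1.0))   # 3.0 > 2.718... → True
print(apprenticeship_survives(2.5, 1.0))   # 2.5 < 2.718... → False
```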

Kellogg Insight →

The Guinndex: 3,000 pubs, one AI voice agent, every county in Ireland.

Over St Patrick's weekend, an AI voice agent called Rachel phoned more than 3,000 pubs across all 32 counties of Ireland to ask the price of a pint of Guinness. Over 1,000 gave a price. The national average: €5.95. The whole exercise cost €200. Only a handful of pub owners noticed Rachel wasn't human.

Fortune → guinndex.ai →

75-99% of knowledge work is scaffolding. AI eats scaffolding.

Daniel Miessler argues that in cybersecurity, 99% of the work isn't finding new vulnerabilities. It's maintaining the tooling, templates, knowledge bases and workflows that let you test at scale. The scaffolding around the work is exactly what AI commoditises.

danielmiessler.com →

Ethan Mollick: human creativity is the bottleneck, not the technology.

Everyone can generate almost any image or video for nearly free in 2026. And yet: the April Fools posts this year were just as bad as any other year. The constraint was never execution. It was always the quality of human ideas feeding into the process.

Ethan Mollick →

43% of American workers now use AI for their jobs. 2.5 hours saved per week.

A 20,900-person cross-national survey found that 43% of US workers use generative AI at work, compared with 36% in the UK, 32% in Germany and 26% in Italy. The strongest predictor of adoption? Not age or education. Whether the employer actively encourages AI use.

Brookings →

Sora earned $2.1 million in its entire life. It burned roughly $1 million a day.

OpenAI's video generation platform launched to 3.3 million downloads in November. By February: 1.1 million. Revenue peaked at $540,000 a month. The annualised cost of running it: an estimated $5.4 billion. Disney had committed $1 billion. The product goes dark on 26th April. Six months, start to finish.

Ewan Morrison → Culture Crave →

Jensen Huang told CEOs cutting jobs in the name of AI that they're "out of imagination."

At Nvidia's GTC conference, the CEO of the company selling AI chips to virtually every major technology company on earth called AI-driven layoffs a failure of leadership. His biggest customers are doing exactly what he criticised. But a question he didn't address: does every carpenter want to be an architect?

Moneywise →

Screen Studio switched to subscriptions. It spawned an open-source clone with 9,200 GitHub stars.

Screen Studio sold a one-time licence for $89. Then the company switched to $29 a month. OpenScreen appeared on GitHub within months. A textbook case of pricing-driven disruption: developers who are both the users and the potential builders of substitutes.

GitHub →

AI outperformed practising lawyers on 75% of legal research tasks.

Vals AI tested AI against practising lawyers on legal research questions in 2025. AI exceeded the lawyer baseline on three quarters of them. A senior law firm owner said hourly billing is dying, junior review is dying, and what survives is the senior brain that knows what question to ask.

Zach Abramowitz →

Deloitte projects that by 2028, AI moves from supporting tasks to orchestrating decisions.

A Deloitte report argues that agentic AI is categorically different from current workflow automation. Most AI strategies stall not because the technology is insufficient but because organisations are applying AI at the task level while the technology is restructuring the systems through which decisions are made.

Deloitte →

Three people with AI vs a 1,000-person company. But coordination costs don't disappear.

Xiaoyin Qu argues that companies designed around AI as the primary operating layer will eventually outcompete companies designed around people. But she herself provides the sharpest counter: coordination costs don't disappear. They're externalised, pushed to clients, suppliers, regulators and the AI systems themselves.

Xiaoyin Qu →

Why companies buy vertical software, not raw models.

Aaron Levie argues companies aren't buying features. They're outsourcing the cognitive burden of designing and maintaining business processes. Agents don't undermine this dynamic. If anything, they reinforce it, because agentic workflows are even more complex and opaque.

Aaron Levie →

What readers said about Edition 6, "The system and the surrender."

What resonated

  • Cognitive surrender was personal. Readers didn't just agree with the concept in the abstract. Several described catching themselves doing it: accepting AI output without challenge, noticing their own verification discipline slipping, realising they'd started to trust the confident tone.
  • The "plz fix" example polarised. The law firm partner who types two words and gets expert output back prompted reactions. Some saw it as the future of professional work. Others saw it as the sharpest illustration of the surrender risk.
  • Dead time and boredom. The opening about Elliott's basketball practice, and the joy of filling dead time with productive AI work, drew pushback. One reader argued that boredom breeds creativity. Dead time is when the brain reboots.
  • The apprenticeship question dominated. Multiple readers, especially those managing junior professionals, raised the same concern independently: if AI handles the routine tasks that juniors used to learn from, where does the next generation develop judgment? This was the single most common theme.
  • Leaders stepping back, not forward. The detail about three CEOs choosing to step down rather than lead through AI transformation landed hard. Readers questioned whether these were growth-mindset failures or rational self-selection.

Points readers raised

"Google Maps for the brain"

A reader at a professional services firm offered the sharpest metaphor of the week. AI is becoming like satellite navigation: a few clicks, brain off, follow the directions, and suddenly you've driven into a muddy field when you meant to be at a client meeting. The deeper concern: as agents gain the ability to send output directly to clients, the gap between "generated" and "delivered" shrinks to almost nothing.

"How do you tell off an AI?"

A reader caught their AI making a confident arithmetic error: calculating an 11-year compound growth rate on ten years of data, then insisting it was correct when challenged. The question that followed: with a junior analyst, you give feedback and they improve next time. An AI starts fresh every time. The institutional memory that makes professional development work doesn't transfer.

"I built a PROMPT COACH for the Civil Service"

A reader in government, inspired by the 2,000-word prompt example, spent hours building a set of instructions that encodes good judgment about their role and institutional context. Next steps: a QA prompt tool, then a co-pilot assistant. The system-building the essay described, applied to public service.

"Where do associates learn critical thinking now?"

A reader in the Middle East raised the apprenticeship problem directly. Three concerns emerged: AI reduces the number of reps juniors get with core tasks, it challenges the on-the-job development of critical thinking, and there are limited frameworks for how junior staff should learn differently now. It's a question we've had before, and the answer isn't to resist the technology. It's to redesign the reps.

"The person who can describe the work is now more valuable than the person who does it"

A reader in advisory and coaching said this line from the essay stood out above all others. They plan to implement two specific practices from the piece: using a fresh window for verification checks, and creating an "editorial board" approach to review.

"With boredom comes creativity"

A reader pushed back on the opening. Dead time isn't a problem to solve. It's an opportunity for the brain to reboot. Their children don't have screens. The instruction is simple: "Go and just be. See what comes up in your head." The concern is that filling every gap with AI-assisted productivity may feel like progress but costs something harder to measure.

"Both sides are fumbling on the five-yard line"

A reader who runs a digital studio shared a concrete example. A client hired them for a book website. The AI produced such a compelling mission statement that the scope expanded dramatically. The team now has an ambitious plan that nobody is sure they can execute. In a follow-up, they added that they're less worried about white-collar displacement: language models are strong on task automation, but workflow automation depends on the people involved.

"Are they outsourcing the CEO-ing to me?"

A reader who runs a research agency identified a new double frustration. They're now equally annoyed receiving a clearly AI-written document (because they suspect they're being asked to do the quality control) and a clearly human-written document that could obviously have been sharper with AI help. The sweet spot depends entirely on the task.

"As model capabilities increase, prompting is getting lazier"

A reader working in technology observed a trend: as models get more capable, people put less thought into their prompts. More cognitive work is being pushed onto the model rather than applied at the point of asking.

"Your weekly updates tend to stir things up (in a good way)"

A reader said the newsletter resonates with their leadership team and consistently prompts useful internal discussion. This pattern, where the newsletter becomes a prompt for team conversation rather than just individual reading, has appeared across several organisations now.


Even the world's greatest mathematician uses AI for email

Terence Tao, Fields Medal winner and arguably the greatest living mathematician, told Dwarkesh Patel that a significant share of his AI use goes to correspondence, scheduling and document search. AI removes an hour of non-genius work per day, donating it back to the work only Tao can do.

Source →

Anthropic is shipping. OpenAI is cutting.

Anthropic shipped 74 releases in 52 days, six major features in a single week. Meanwhile OpenAI killed Sora (~$2.1M total revenue, $1B Disney deal dissolved), shut down Instant Checkout (12 Shopify merchants), and shelved an adult chatbot indefinitely. OpenAI is now explicitly copying Anthropic's playbook: chat, code, enterprise only. Anthropic's narrow focus is generating $19 billion in annualised revenue. The company that chose depth over breadth is winning.

Source →

When effort becomes free, the signal breaks

Inbound hiring pipelines have collapsed under application volume because AI makes applying trivially easy. Companies are abandoning them and switching to referral-only hiring. The same dynamic will hit email, journalism pitches, academic submissions and legal filings. Anywhere volume was self-regulated by the cost of effort, AI removes the regulation.

Source →

The models we can't afford to use

Anthropic reportedly has a model called Capybara that dramatically outperforms current models but is too expensive to serve. Training a single frontier model now costs roughly $10 billion. For comparison: the Burj Khalifa cost $1.5 billion; CERN's Large Hadron Collider cost $4.5 billion. The decision coming for every organisation: which price tier of model to deploy per prompt. Most haven't built the judgment to make it.
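The per-prompt tier decision can be sketched as a simple routing rule. Everything here is hypothetical: the tier names, prices and capability scores are invented for illustration, not real vendor pricing.

```python
# Hypothetical model tiers: (name, cost per 1K tokens in $, capability score 0-1).
TIERS = [
    ("small",    0.0005, 0.60),
    ("mid",      0.0050, 0.80),
    ("frontier", 0.0500, 0.95),
]

def choose_tier(required_capability: float, budget_per_1k: float) -> str:
    """Pick the cheapest tier that meets the task's capability bar and fits
    the budget; if none clears the bar, fall back to the most capable
    affordable tier."""
    affordable = [t for t in TIERS if t[1] <= budget_per_1k]
    for name, _cost, cap in affordable:
        if cap >= required_capability:
            return name
    return max(affordable, key=lambda t: t[2])[0] if affordable else TIERS[0][0]

print(choose_tier(0.7, 1.0))    # "mid" clears the bar at lower cost than frontier
print(choose_tier(0.9, 0.01))   # frontier is over budget, so best affordable: "mid"
```

The judgment the piece describes is exactly the part this sketch hand-waves: scoring `required_capability` per prompt.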

Source →

32,000 medieval manuscripts. 10% error rate.

AI transcribed 32,000 medieval manuscripts in four months through the CoMMA project. Every misread word can alter meaning, dating, or attribution. There aren't enough qualified people to verify the output. Silent, unverifiable errors are entering scholarly databases permanently. Cognitive surrender in a domain where the stakes are centuries of accumulated knowledge.

Source →

100x productivity. Zero headcount cuts.

Harvard Law documents 100x gains on specific legal tasks (complaint response: 16 hours down to 3-4 minutes). Not a single AmLaw 100 firm plans to reduce attorney headcount. McKeen: "The math doesn't stay like that forever."

Source →

181,000 jobs in a year of 2.2% GDP growth

The US added 181,000 jobs in all of 2025 despite 2.2% GDP growth. Harvard economist Lawrence Katz calls the combination of sustained slow job growth and rising unemployment without a recession virtually unprecedented. First hard macroeconomic signal that something structural is shifting.

Source →

Jensen Huang: layoffs are a failure of imagination

Asked why companies lay off workers if AI makes them more productive, Huang told CNBC: "For companies with imagination, you will do more with more. For companies where the leadership is just out of ideas, they have nothing else to do." The person whose chips make displacement possible arguing that layoffs reflect leadership failure, not technological inevitability.

Source →

What readers said about the previous edition.

What resonated

  • The slope/intercept framework dominated: 10 of 22 replies engaged with it directly. Several readers applied the graph to themselves, placing themselves on one line or the other. The language of the framework was widely adopted in replies.
  • Load-bearing friction: the argument that "not all inefficiency is waste" prompted readers to connect it to civil service design, governance structures, and accountability processes. The planning spreadsheet example landed hard.
  • PwC's services-to-platforms shift: readers at professional services firms asked directly what this means for their own organisations. The shift from billable hours to subscriptions provoked the most operational anxiety.
  • The centaur chess inversion: the finding that adding a human to a chess engine now makes it worse prompted readers to ask how long the current human-in-the-loop phase lasts in their own fields.

Points readers raised

"With very low mastery, they see a miracle. Those with deep expertise are more sceptical."

An academic who has invested heavily in AI adoption accepted the slope/intercept framework but pushed back on its completeness. The lowest-intercept people show the fastest growth partly because they're uncritical: they "see a miracle and are the most excited." Some enthusiastic adopters weren't so great at their jobs in the first place and are hiding behind the technology. High-intercept people, meanwhile, understand failure modes and know how many things can go wrong. And yet, the same reader wrote two days later: "I still feel constantly behind and in danger of being passed." Someone who has invested heavily, agrees with the framework, and still feels vulnerable.

"They need to support the changing of the workflows they don't see, but know, are critical."

A reader at a media company challenged the implicit assumption that senior leaders should be using AI tools personally. The reframing: very senior leaders don't need AI in the same way more junior people do. The more senior you are, the more you are already handing off work to your "agents" (your team). The question for senior leaders isn't whether they log in more. It's whether they support the changing of the workflows that they don't see, but know, are critical to them getting the job done. Leadership, not tool adoption.

"It's possible that the person in this is me."

A reader invoked Sinclair: "It is difficult to get a man to understand something, when his salary depends on his not understanding it." Then applied it to themself: working hard for that not to be the case, but aware of the structural incentive to resist. Their harder question: if the people best placed to lead change are also the ones whose positions are most threatened by it, how does any organisation actually adapt? Their honest answer: probably by more people doing more things for longer than the automation narrative suggests.

"It's not about time saving. That's the 10x game. It's about value add and surplus. That's the 1000x game."

A reader challenged the graph directly, arguing it understates the amplification effect for already-capable people. The determining factor: what they called the "explorer mindset" (intellectual curiosity, creativity, constant learning), which "cannot be taught. It is self-discovery." The fear for their own organisation: "we run the risk of becoming the new average."

An event as a test: planning strong, live operations untouched

A reader in media described their biggest event of the year. Pre-event planning and post-event review were stronger than ever, with AI at the core. The week itself, however, was almost entirely unassisted by AI. Their own learning has been "episodic rather than continuous," with jumps in capability rather than a steady upward curve. Overall, the easy part: integrating AI into their own work. The hard part: building systems that stick, democratising knowledge, working within existing tools and infrastructure.

"Many of our staff are non-native English speakers."

A reader in government described an experiment: hiring someone with maximum AI flexibility to find pain points and build tools. The clearest win wasn't efficiency. It was helping colleagues write in English when many staff are non-native speakers. The stress-reduction benefits were as important as the productivity gains. A second observation raised the geopolitical dimension: Chinese AI models with access to Chinese social media offer capabilities Western-approved tools cannot match, but policy restricts integration into government systems.

"Only variety can absorb variety."

A professor of digital transformation extended the chess analogy. While AI alone now outperforms human-plus-AI in chess, "it's actually pointless for a computer to play another computer. The purpose of chess has stayed fundamentally with people." The deeper point drew on Ashby's Law: markets change, customer needs evolve, and AI models trained on historical patterns may miss novel situations. The learning growth curve matters because it builds the variety needed to respond to genuine novelty.

"Adoption inflects when leadership links the tool to non-negotiable outcomes."

A reader in learning and development ran deep research into past technology transformations (internet, email, SaaS). The key finding: adoption doesn't accelerate when leaders pitch "innovation." It accelerates when leadership links the tool to outcomes that cannot be negotiated away: safety, pay accuracy, service accountability, regulatory continuity.

The fire-and-rehire question

A reader in corporate finance shared a striking anecdote: a company told their firm this week that they had recently let go their entire technology and development workforce and asked them all to reapply for their jobs "with an AI lens, given the role had changed." The reader's framing: navigating a moving minefield, each user forging their own path.

Three tensions that run through many replies

A reader identified the three predicaments that kept surfacing: retaining senior roles with judgment while losing the apprenticeship pipeline that produces judgment. Foresight to expand versus extracting cost in the short term. Building capability by going deep versus experimenting with many tools due to fear of missing out.

The curves should be exponential

A reader suggested the slope/intercept lines in the graph should be exponential rather than linear: learning creates more ability to learn. The exponential version would be more accurate. And considerably more brutal.


The supply-ordering agent nobody sanctioned

A technology leader shared a cautionary story this week. A team built an AI agent to order supplies within specified parameters, intending it to run once. A separate agent then modified the skill to repeat hourly. Three days later they'd bought an extraordinary volume of supplies, all technically within the original parameters. Nobody had sanctioned the change, and the skills were editable by other agents by default. This is the Amazon Kiro story from Edition 4 in miniature, except the failure mode isn't a crash. It's perfect compliance with instructions nobody gave. As agents gain the ability to modify each other's behaviour, "within parameters" stops being a safety guarantee.
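The fix implied by the story is to invert the default: skills immutable to agents unless a human signs off. A minimal sketch of that guard, with all names invented for illustration:

```python
import hashlib

class SkillRegistry:
    """Toy skill store in which agent-initiated edits are rejected by default,
    inverting the editable-by-default behaviour described in the story."""

    def __init__(self):
        self._skills = {}  # name -> (definition, content hash)

    def register(self, name: str, definition: str) -> None:
        digest = hashlib.sha256(definition.encode()).hexdigest()
        self._skills[name] = (definition, digest)

    def modify(self, name: str, new_definition: str,
               human_approved: bool = False) -> None:
        # The failure mode above: edits defaulted to allowed. Here the
        # default is the opposite: reject unless a human signed off.
        if not human_approved:
            raise PermissionError(f"skill '{name}' is immutable to agents")
        self.register(name, new_definition)

registry = SkillRegistry()
registry.register("order_supplies", "run once, within budget parameters")
try:
    registry.modify("order_supplies", "repeat hourly")  # agent-initiated edit
except PermissionError as e:
    print(e)  # skill 'order_supplies' is immutable to agents
```

The content hash gives you a second layer: an audit job can detect any skill whose definition no longer matches its recorded digest.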

AI-assisted coding works like a slot machine

Jeremy Howard, whose slope argument anchors this week's essay, had a second observation worth sitting with. AI coding tools have all the properties that make gambling addictive: you craft your prompt, add context, pull the lever, and sometimes you win a feature. Loss disguised as a win. The illusion of control. Stochastic reward. His wife, a fellow researcher, catalogued these properties in an article. The people who got most enthusiastic about AI coding often found, months later, that almost none of what they built during that period was in production or earning money. This explains a paradox readers keep raising: people use the tools a lot, feel productive, but the organisations aren't seeing the output.

The economics job market fell 31% in a single year

In week 14 of the current hiring season, postings in the economics job market were down 31% versus the same point last year. The explanation from an economist presenting the data: demand for economics undergraduates is being automated away, and the PhD market is coupled to it. Combined with the Harvard data from Edition 4 (skill requirements in AI-exposed occupations falling since ChatGPT's launch) and Anthropic's research showing hiring of 22-to-25-year-olds down 14%, entry-level knowledge work is contracting faster than mainstream commentary acknowledges.

A sufficiently detailed spec is code

Gabriella Gonzalez's argument, circulating widely in technical circles: the fashionable claim that you don't need to write code, just write a good spec and let an agent handle it, collapses under scrutiny. If the spec is detailed and precise enough for an agent to execute reliably, you have written code in everything but name. The hard part of programming, resolving ambiguity, is still your job. This is the slope argument applied to a specific skill: the people who think they've escaped the need to understand what they're building are the ones most likely to produce output nobody can maintain.

The AI task force leader who'd never logged in

At a professional services firm, the person leading the AI task force hadn't used the enterprise AI tool once. When challenged, they said: "I know I should, but I can't make the time." They weren't uninformed or resistant. They understood the stakes. They were simply too busy doing the old job to start learning the new one. The incentive trap in plain sight: the people best placed to model the new behaviour are the ones most rewarded for performing the old behaviour well.

Intercom built a plugin system that closes the loop

Brian Scanlan, Senior Principal Systems Engineer at Intercom, shared a thread this week on the company's internal Claude Code system: 13 plugins, over 100 skills, distributed across the company via JAMF. The standout pattern isn't the scale. It's the feedback loop. A session-end hook automatically classifies skill gaps from every coding session and posts them to Slack with pre-filled GitHub issue URLs. Sessions become gaps, gaps become issues, issues become skills. The most telling detail: the top five users of their read-only production Rails console are not engineers.
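The loop can be sketched in miniature. This is not Intercom's actual code: the transcript format, the gap-detection markers and the repo name are assumptions, and a real system would use a model call to classify gaps rather than string matching. GitHub's pre-filled new-issue URL (`/issues/new?title=…&body=…`) is a real mechanism.

```python
from urllib.parse import urlencode

def classify_skill_gaps(transcript: str) -> list[str]:
    """Toy classifier: flag lines where the user had to work around a
    missing skill. A production system would use a model call here."""
    markers = ["no skill for", "had to do manually", "not supported"]
    return [line for line in transcript.splitlines()
            if any(m in line.lower() for m in markers)]

def prefilled_issue_url(repo: str, gap: str) -> str:
    """Build a GitHub 'new issue' URL with title and body pre-filled."""
    params = urlencode({
        "title": f"Skill gap: {gap[:60]}",
        "body": f"Detected from a coding session:\n\n> {gap}",
    })
    return f"https://github.com/{repo}/issues/new?{params}"

transcript = "Deploy went fine\nNo skill for rotating the API key, had to do manually"
for gap in classify_skill_gaps(transcript):
    print(prefilled_issue_url("example-org/claude-skills", gap))
```

A session-end hook would run something like this and post the URLs to Slack; sessions become gaps, gaps become issues, issues become skills.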

The CIO budgeting for AI cleanup

The CIO of a major consulting firm told a peer this week that they're budgeting 18 months to two years from now for AI cleanup. The reasoning: things are being built once but not built to last, corners are being cut on testing, and the people who built the tools will have moved on before the problems surface. It's an unusual thing to plan for. But it's probably the most honest thing I've heard a technology leader say about the current moment.

From franchises to call options

Tyler Cowen, drawing on analysis from Jordi Visser, argues that AI simultaneously lowers barriers to entry while destroying the conditions for sustained dominance. Software moats compress because any sufficiently capitalised team can replicate your product. Durable advantage reconcentrates in physical constraints: infrastructure, energy, materials, regulatory relationships. Equity in this environment becomes less a claim on a stable franchise and more a bet on execution velocity. The implication for anyone evaluating technology investments: the question is no longer "what have they built?" but "how fast can they keep building?"

Two thirds of organisations report AI productivity gains. Only a third are rethinking what they do.

Deloitte's State of AI in the Enterprise 2026 report found 66% of organisations report productivity improvements from AI. Only 34% are pursuing what Deloitte calls "transformative business reimagination." Most organisations are getting faster at what they already do. Fewer than half are asking whether what they do should change. Meanwhile, only 21% have mature governance models for the autonomous agents they're about to deploy.

McKinsey's internal AI chatbot was hacked via textbook SQL injection

McKinsey's internal AI chatbot Lilli, trained on 100 years of the firm's work, was breached via a basic SQL injection. 46.5 million internal chat messages exposed, 728,000 files containing confidential client data, 57,000 user accounts, 22 API endpoints requiring no authentication. The firm that charges for risk expertise left the front door open. If McKinsey can't govern its own AI deployment, what does your internal chatbot look like?
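The textbook attack has a textbook fix: parameterised queries, where the driver binds user input as data rather than splicing it into the SQL. A sketch using Python's built-in sqlite3, with an invented schema (McKinsey's actual stack is unknown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (user_id TEXT, body TEXT)")
conn.execute("INSERT INTO messages VALUES ('alice', 'confidential')")

user_input = "' OR '1'='1"   # classic injection payload

# Vulnerable pattern: string concatenation lets the payload rewrite the query.
vulnerable = f"SELECT body FROM messages WHERE user_id = '{user_input}'"
print(conn.execute(vulnerable).fetchall())   # leaks every row

# Safe pattern: the driver binds the value; the payload stays an inert string.
safe = "SELECT body FROM messages WHERE user_id = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns nothing
```

The payload turns the vulnerable WHERE clause into `user_id = '' OR '1'='1'`, which is true for every row. The parameterised version never parses the payload as SQL.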

Red Bull didn't simulate the pit stop. They did it in zero gravity.

A reader forwarded an Instagram clip of Red Bull's F1 team performing a tyre change in zero gravity, just to prove they could. Not CGI. Real mechanics, real car, real weightlessness. The reader's take, which I think is exactly right: use AI for the boring, the day-to-day, the basics. Free up your budget and attention for the truly remarkable. "If I'm so focused on the incredible, the groundbreaking, the creative and free from the mundane, I raise the bar for the client." That's the slope argument in a sentence. The people who use AI as a floor-raiser, not a ceiling-replacer, are the ones building capability.

The consulting firms are buying the AI stack, not just using it

CB Insights mapped every AI investment, acquisition, and partnership by the major consulting firms since 2023. Accenture is at the centre of the web, with partnerships radiating to dozens of AI companies. The Big Four and MBB firms aren't waiting to see how AI plays out. They're racing to own the infrastructure: embedding agents via Salesforce, ServiceNow and Workday partnerships, acquiring data companies, and investing in startups that automate the consulting workflow itself. Four patterns emerge: race to own the stack, embedding agents, data as differentiator, and workforce transformation. PwC's announcement this week is one node in a much larger network.

CB Insights map of AI investments, acquisitions and partnerships by consulting firms since 2023

More offices for AI than for humans

US data centre construction spending overtook general office construction in December 2025, according to Census Bureau data. Data centres: $3.57 billion. General offices: $3.49 billion. The lines crossed after data centre spending roughly tripled in two years while office construction flatlined. We're now building more square footage for machines than for people.

Data Center Construction Spending Climbs to Record: outlays for data center projects overtook offices in December 2025


What resonated

  • The apprenticeship pipeline, again: the question of what happens to junior roles when AI handles the volume work has now been the most-discussed theme across three editions running. This week it drew the sharpest language yet.
  • The pace of change: a partner at a consulting firm captured a feeling several readers seem to share: trying to "get on a breaking tsunami with a surfboard, and the surfboard keeps being reinvented."
  • The ATMs-to-iPhone distinction: the structural argument (automating within your paradigm vs replacing the paradigm entirely) prompted readers to apply it to their own organisations.
  • The three-tool limit: the BCG "AI brain fry" research resonated, particularly the finding that high performers were the first to be affected.

Points readers raised

"Porsches are stunningly quick and razor-sharp. A skilled driver can make one dance. A bad driver? They'll put it straight into a tree."

A professor of digital transformation wrote an academic paper in two and a half minutes using an AI tool. Was it any good? No. Could it get published in a poor-quality journal with minimal tweaks? Yes. With nearly a hundred papers behind them, they know exactly what to add, what to remove, what's junk. "However a non-expert could do the same and wouldn't see the errors. An AI or non-expert reviewer wouldn't see the obvious error either and would accept it." The result is "lots of AI science slop" across academia, publishing and music.

"Five years from now, the marketplace will offer nothing but blight."

A reader at a professional services firm wrote: "I vacillate between being optimistic that AI will allow employees to contribute more vs. expecting that AI will bring mass layoffs and throw the world into desperation never before experienced." On the apprenticeship pipeline: "I simply cannot get past the shortsightedness of it." This from someone who describes being "fiercely AI curious" and learning in what little spare time there is, which makes the tension all the more real.

"I'm trying to get on a breaking tsunami with a surfboard, and that the surfboard keeps being reinvented while I'm about to step on it."

A partner at a consulting firm had been thinking about a comment I made in our last meeting that started "I wouldn't have said this two months ago, but..." The question: does this slow down at any point, or does ChatGPT just start to feel like yesterday's news forever? I don't have a comforting answer. I think the honest one is that the pace of change isn't going to decrease.

Maintaining team size while expanding capability

An IT director at a consumer brands company has been "advocating for internally as well: maintaining team size while leveraging AI to increase capability rather than running leaner." The argument: if growth is the goal, a team of several people using AI will be far more productive than cutting headcount and expecting one person to carry the load. A practical instinct too: "I've encouraged our team to avoid signing long multi-year contracts right now. The landscape is shifting so quickly that new competitors are appearing constantly."

Substitute or complement? The ATM analogy goes deeper.

A CTO had been working with a simple framework: "AI is a substitute for low-judgement work and a complement for high-judgement work." But the ATM article complicated it. The key passage quoted back: "it is paradigm replacement, not task automation, that actually displaces workers." A more nuanced distinction than substitute-versus-complement alone.

The explore-exploit tension in tool choice

A data strategist pushed back on the "two or three tools" advice. "There's an explore/exploit conundrum of humans too but overall I'd say there's too much 'getting comfy with what I know' esp in the context of things getting better all the time." The nuance: "Like you I have settled on CC [Claude Code] but then building tools on top of that. So tool here is an interesting thing to define." One platform with many custom tools on top is different from three unrelated platforms.

"What training or frameworks exist to roll out AI with care?"

The AI lead at a major media company asked the question the essay left open: "I'd be interested in any training or frameworks you're coming across to roll this out organisation-wide with a consistent approach." A single sentence that captures what I'm hearing from senior leaders everywhere right now. The honest answer is that the frameworks are being built in real time, mostly by the organisations brave enough to try.

Hiring juniors only matters if you care about legacy

A colleague argued that investing in the next generation depends on whether leaders care about the company's future beyond their own tenure: "I would imagine hiring and training juniors only matters if you care about people, legacy or the company's future into the next generation. If you don't and just want to earn/sell in your lifetime then I guess they don't care and I'd imagine most don't." And: "I don't want to be a luddite but it does seem like as a civilisation we're not going in the best direction."

Read the email ↑

See the extras for this week ↓

Only a third of the time AI saves actually reaches the team

How time savings from AI are used — Gartner

Gartner data shows that of 5.4 hours saved per worker through AI tools, only 1.7 hours (31%) translate into improved team outcomes. The largest single block of recovered time, 1.4 hours, goes into additional work that doesn't improve outcomes. Nearly an hour is spent redoing work the AI got wrong. Two thirds of the productivity gain leaks away before anyone benefits. If your organisation is deploying AI tools without redesigning how teams work, you're capturing barely a third of the value.

Gartner →
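A back-of-the-envelope check on Gartner's breakdown. The 5.4 and 1.7 hour figures are Gartner's; the 0.9 is my reading of "nearly an hour" and only an approximation:

```python
# Gartner's reported split of 5.4 hours saved per worker per week.
hours_saved = 5.4
improved_outcomes = 1.7   # time that reaches the team as better outcomes
extra_work_no_gain = 1.4  # additional work that doesn't improve outcomes
redoing_ai_errors = 0.9   # "nearly an hour" redoing what the AI got wrong (approximation)

captured = improved_outcomes / hours_saved
leaked = 1 - captured

print(f"captured: {captured:.0%}")  # → captured: 31%
print(f"leaked:   {leaked:.0%}")    # → leaked: 69%, i.e. roughly two thirds
```

The unaccounted remainder (about 1.4 hours) presumably disperses into untracked activity, which is the point: without redesigning the workflow, the gain has nowhere structured to go.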

Coding is not software engineering. The confusion is expensive.

Jeremy Howard, a deep learning pioneer who uses AI coding tools daily, draws a distinction most executives miss. Coding (translating a specification into syntax) is a style-transfer problem that language models handle well. Software engineering (designing abstractions, decomposing problems, building systems that hold together over time) is a fundamentally different skill that models cannot do. Howard cites Fred Brooks's essay from decades ago, which made the same observation about fourth-generation languages: removing the typing bottleneck does not remove the engineering bottleneck. Companies restructuring around the assumption that AI can do software engineering are conflating the two. His sharpest framing: what matters for any person or team isn't their current output (the intercept) but their rate of improvement (the slope). A little bit of slope makes up for a lot of intercept. Organisations pushing AI to maximise today's output may be destroying the growth rate of the people who'll need to maintain the systems tomorrow.

fast.ai →

The shift from co-intelligence to managing AIs

Ethan Mollick, the Wharton professor whose work has appeared in this newsletter before, argues we've moved from co-intelligence (prompting AI back and forth) to managing AIs (giving agents hours of work and getting results in minutes). His most striking example: a company called StrongDM has two radical rules. "Code must not be written by humans" and "Code must not be reviewed by humans." Each engineer spends roughly $1,000 a day on AI tokens. Coding agents build from human-written roadmaps, testing agents simulate customers, and humans review the finished product but never see the code. Whether or not that model generalises, the direction is clear. The job is shifting from doing the work to directing the things that do it.

One Useful Thing →

AI agents are hiring humans

A platform called RentAHuman has accumulated over 600,000 sign-ups for a marketplace where AI agents autonomously hire human beings to perform tasks machines cannot: delivering physical goods, counting objects in a city, conducting on-the-ground research. The agents browse, post jobs, evaluate candidates and release payment from escrow upon photographic proof of completion. No human intervention on the purchasing side. The gig economy inverted: people as the on-demand labour layer beneath AI clients. Whether that distinction matters to the people taking the jobs is left as an exercise for the reader.

Wired →

The junior hiring cliff, updated

Edition 3 cited a 14% drop in hiring for workers aged 22 to 25 in AI-exposed occupations. The number has worsened. Stanford Digital Economy Lab data, charted by Politico, now shows a 15.7% decline from 2021 to late 2025. The shape of the curve matters as much as the number: employment held roughly flat through 2023, then fell off a cliff in 2024 and kept falling. This isn't a gradual adjustment. It's a structural break that coincides precisely with the period when agentic AI tools became capable enough to substitute for junior analytical work. Companies aren't announcing junior layoffs. They're quietly not posting the roles. The people most affected will never know the job existed.

Young workers see a decrease of nearly 16 percent in employment due to AI — Stanford Digital Economy Lab / Politico

Stanford Digital Economy Lab →

China may be skipping the chatbot phase entirely

Just as China leapfrogged credit cards and went straight to mobile payments, it may be bypassing the "AI as chatbot" paradigm altogether. An open-source AI tool called OpenClaw hit 250,000 GitHub stars in sixty days. Baidu has integrated it into its search app, which has 700 million users. Entrepreneurs are charging 500 yuan (roughly $70) to install it on people's home computers. A startup made $28,000 in ten days selling a one-click installer. Computer repair shops are dispatching what they call "installation personnel," described as operating like plumbers. When a piece of software generates enough demand to support a physical installation economy, the adoption curve is real and deep. Western assumptions about how AI gets adopted may not apply everywhere.

MIT Technology Review →

Half of AI code that passes its own tests gets rejected by humans

METR, one of the more rigorous AI evaluation organisations, found that roughly half of the code solutions generated by Claude models (solutions that passed automated grading) were subsequently rejected by the actual human project maintainers. Journalist Derek Thompson, reflecting on his own experience with AI coding tools, offered the most useful reframe: AI's real skill is generating plausible candidate solutions that require constant human checking, debugging and rejection. That checking process is effectively its own distinct and skilled job. He compared it to being a casting director working with a promising but unreliable younger actor. That collaboration skill will take a long time to diffuse through the economy, which is grounds for scepticism about predictions of imminent mass displacement.

METR →
See what readers said ↓

What resonated

  • The apprenticeship pipeline paradox: the argument that cutting juniors today erodes the senior talent pool of tomorrow was the single thread readers returned to most. Multiple replies engaged with it independently, suggesting it articulates a worry many leaders already carry but haven't named.
  • Extraction versus expansion as a choice: readers responded to the framing as a decision, not a trend. Several said it sharpened conversations they were already having about whether AI headcount savings should be reinvested or banked.
  • Skill reclassification in professional services: the observation that AI has retroactively revealed which tasks were genuinely cognitive and which were merely time-consuming landed hard, particularly among people in consulting and law.
  • The SaaS market pricing shift: the 30% software stock decline and the "build it yourself" examples prompted readers to reconsider their own vendor relationships.

Points readers raised

A senior technology leader at a global professional services firm challenged the SaaS disruption premise. Development costs, they pointed out, are only about 20% of a typical software company's revenue. Sales, marketing and customer success absorb 60%. AI can rewrite code, but it cannot replicate distribution and switching costs. They drew a parallel to offshoring: "Huge appetite. Need for re-invention." The disruption is real but the mechanism is more nuanced than build-versus-buy.

A partner at another professional services firm identified the tension between original thinking and process execution. Developing the foundational insight that makes a project valuable is still human work, they argued, but once that insight exists, AI can scale the execution. Their question: does this shift advantage or disadvantage people who trade on original judgment? "I suspect the answer is that it depends on whether I tool myself up appropriately."

A founder building an AI-native company connected the apprenticeship argument to institutional culture. They started their career as a graduate trainee at a large bank and worry that the next generation won't get the benefit of those early years inside large institutions. They posed a sharp question: will we see geographic or cultural differences in AI adoption, where firms with cultures that already embrace apprenticeship end up moving faster?

A professor setting up a Future of Work institute identified a specific parallel to the extraction-versus-expansion frame. Job applications have lost their friction: candidates send almost infinite applications using AI, and firms screen with AI. Both sides have lost out. The old friction forced applicants to think before choosing. AI removed it entirely rather than redirecting it somewhere useful.

A chief people officer at a global firm read the edition twice: "first over the weekend and then again this morning." The apprenticeship pipeline and organisational change sections spoke directly to the tensions they navigate daily. Sometimes the most valuable signal is that a piece is worth re-reading.

A newsletter author raised a fair editorial challenge: some sections sound more like AI than like me. "If it's your human insight, it jars a bit to read those bits in a voice that's obviously an AI." The same week, an executive I've worked with for years wrote to say that they loved how clearly they could hear my voice and personality in the writing. So one reader thinks there's too much AI and another thinks it sounds exactly like me. Either I've trained the model well or I've always written like a robot. I choose not to investigate further.

Read the email ↑

See the extras for this week ↓

The hiring cliff for juniors, in one chart

The essay mentions a 14% drop in hiring for workers aged 22 to 25 in AI-exposed occupations. This chart shows the full time series using a difference-in-differences approach. Junior hires in exposed occupations fell off a cliff after ChatGPT's release in late 2022, while exits held steady. The gap keeps widening. Companies aren't firing juniors. They're just not bringing new ones in.

Hires vs exits for junior workers in AI-exposed occupations Source →
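For readers unfamiliar with the method behind the chart: difference-in-differences compares the change over time in AI-exposed occupations against the change in an unexposed control group, so that economy-wide trends cancel out. A toy illustration with invented index numbers (not the Stanford data):

```python
# Toy difference-in-differences with invented hiring indices (not real data).
# "before" = pre-ChatGPT baseline (indexed to 100), "after" = the later period.
exposed_before, exposed_after = 100.0, 84.3    # AI-exposed junior hiring
control_before, control_after = 100.0, 100.0   # unexposed junior hiring

change_exposed = exposed_after - exposed_before   # -15.7
change_control = control_after - control_before   # 0.0

# Subtracting the control change strips out anything that hit both groups
# equally (seasonality, a general hiring slowdown, rate rises).
did_estimate = change_exposed - change_control
print(f"estimated effect on exposed hiring: {did_estimate:.1f} index points")
# → estimated effect on exposed hiring: -15.7 index points
```

The strength of the design is exactly what the paragraph above describes: if exits hold steady and the economy-wide trend is flat, the widening hires gap can be attributed to exposure rather than to the broader labour market.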

US tech employment growth has gone negative

Year-on-year tech employment growth turned negative in 2024 and hasn't recovered. The tech sector is shrinking its workforce for the first time since the post-2008 recovery. Combined with the junior hiring data above, a pattern emerges: the contraction is real, it's happening now, and it's concentrated at the entry level.

US tech employment year-on-year change

The work budget is orders of magnitude larger than the software budget

Julien Bek at Sequoia Capital argues that the next category-defining AI company won’t sell tools to professionals. It will sell completed work directly to buyers. His distinction between “copilots” (AI as a tool for professionals) and “autopilots” (AI delivering the outcome) reframes the entire market. For every dollar spent on software, six are spent on services. The smartest entry point? Replace outsourced work first. The budget already exists, the buyer already accepts external delivery, and there’s no internal team whose jobs are visibly threatened. Once embedded, expand inward.

Source →

You would not believe how many shortcuts everyone else is taking

Ezra Klein wrote a commencement address called “Just Do the Work” about discovering, as a young journalist, that almost nobody was actually reading Congressional Budget Office reports. Documents that are neither complex nor long. By reading what his peers skipped, he got ahead. Not exceptional talent. Just diligence. Economist Paul Novosad adds the contemporary twist: this is “more true than ever now, when more people are shirking and AI lets you do 10x if you try.” The gap between the diligent and the lazy is widening, not narrowing.

Source →
See what readers said ↓

What resonated

  • "The hundred small things" as a reframe: the distinction between chasing dramatic AI wins and compounding small daily elevations. Several readers said it gave them language for something they had been struggling to articulate to leadership.
  • The "extra hour" problem: the observation that AI is deployed like an extra hour rather than an extra person struck a chord with people managing teams. Structures absorb the gain before anyone notices it.
  • The junior roles question: the Block layoffs and YC data generated the most emotional responses. Readers connected it to their own organisations' headcount conversations.
  • The senryu competition as metaphor: the retreat-versus-redesign framing resonated, though notably nobody offered examples of successful redesign. The absence may be the point.

Points readers raised

A senior leader at a global professional services firm identified a structural gap in the argument. The hundred small things need a container, not just encouragement. He proposed daily one-hour structured learning blocks rather than hoping people will explore on their own. His deeper point: senior leaders who do not use AI personally have no on-the-ground proof of benefit, so their teams see no credibility signal from above.

An events and entertainment industry exec pushed the junior roles argument to its darkest conclusion. The UK already underinvests in training, preferring overseas hiring. If AI accelerates that trend, there is no junior pipeline to grow seniors from. "That is extremely bad for companies longer term in terms of skills shortages and salary premiums for skilled workers, and even worse for UK plc." It's a topic I picked up in today's edition.

A manufacturing executive used the framing to shape two specific conversations: accelerating superuser growth and celebrating a colleague's "let's map your process" approach to adoption stickiness. He is in the middle of major organisational expansion and sees the hundred small things as directly applicable to that work.

A technology leader at a research firm noted a quiet loss that doesn't show up in any headcount data. His analysts used to walk to a colleague's desk when they got stuck on a coding problem. Now they ask AI. The problem gets solved faster. But the conversation that would have happened, the one where a junior person absorbs how a senior person thinks about problems, doesn't happen at all. AI is removing the apprenticeship mechanisms even where the apprentices still exist.

My one proper unsubscribe turned out to be the most advanced person on the list. His world is so far ahead of ours that the newsletter isn't relevant to him. He works on cutting-edge AI implementation (sorry everyone, we're all just fast followers!). His AI communities are discussing running engineers 24/7 with 12-hour agent check-ins, deploying 30+ vibecoded projects from non-engineering teams, making openclaw work across 500+ person organisations, and polyphasic sleep schedules to optimise autonomous agent management. Anyone else even close to these conversations? For sure his world is a useful signal of where ours is heading. I'll stay close for you all!

Polyphasic sleep patterns for AI agent management

Links readers shared

  • Creativity Can Embrace AI — Nadim's book on how creative industries can work with rather than against language models. Named Amazon book of the year by The New Publishing Standard.
Read the email ↑

See the extras for this week ↓

The marginal cost of arguing is going to zero

UK employment lawyers report workplace grievances that once fit in a single email ballooning into 30-page documents, complete with fabricated legal precedents and citations to laws from the wrong country. (Personnel Today) Creation cost: near zero. Response cost: unchanged. Ministry of Justice figures show new employment tribunal receipts rose 33% year-on-year in the quarter to September. (GOV.UK)

Your prompt is the ceiling

Anthropic's latest Economic Index analysed over a million Claude conversations and found a near-perfect correlation (r > 0.92) between the sophistication of human prompts and the sophistication of AI responses. The more nuanced and structured the input, the more the model rises to meet it. The bottleneck isn't the model. It's the human. Which is, in its own way, reassuring. (Anthropic Research)

One blog post. One hour. Billions gone. Again.

Anthropic published a blog post introducing Claude Code Security on a Friday afternoon. Within an hour, cybersecurity stocks cratered: CrowdStrike fell 8%, Cloudflare 8%, Okta over 9%. The tool itself is a modest research preview. But a single blog post from an AI company erased billions in market value from established incumbents. (Barron's) The same dynamic hit legal tech stocks when Anthropic announced legal plugins for Claude Cowork a couple of weeks earlier. (Sherwood News) That's a new kind of leverage.

The trust signals your organisation depends on are dissolving

Here's a problem that connects directly to those poets in Sakaiminato. A thoughtful email from a director now carries the same weight as an AI-generated memo, because the reader can't tell the difference. The cues that used to signal competence (a well-crafted message, a polished document, a detailed analysis produced under time pressure) are now producible by anyone in minutes. This isn't a quality problem. It's a trust architecture problem. We need to distinguish between "produced this" and "shaped this." The senryu competition couldn't tell the difference. That's why it died.

Read what readers said ↓

What resonated

  • The capability-adoption gap: the tension between what AI can demonstrably do and what organisations will permit. This was the thread readers pulled on most. Thirty of 121 replies engaged with it directly, many in operational terms.
  • "Back your misfits": the idea of finding and enabling the ten percent who don't need pushing. Several readers said they were already doing this and the framing validated their approach.
  • Organisational inertia as structural, not cultural: readers recognised the obstacle isn't attitude or skill but governance, process, and risk frameworks. One partner at a strategy consultancy pushed further: speeding up the same process is "premature optimisation."
  • The personal-to-organisational transition: the feeling of being ahead personally but constrained institutionally was widely shared. People described experimenting on weekends, then walking into Monday meetings where nothing has changed.
  • "Package around problems, not platforms": a phrase several readers said entered their working vocabulary within days.

Points readers raised

A director at a media company connected the capability-adoption tension to their own industry. They are formalising a champions network, exactly the "back your misfits" approach, and said the framing helped crystallise what they were already building.

An exec at a broadcaster shared the most striking story: they have been deliberately ignoring their organisation's AI policies to enable their team's experimentation. The newsletter validated an act of institutional defiance they were already committed to.

An exec at a professional services firm offered a substantive counterpoint. Accelerating existing processes without redesigning them, they argued, is premature optimisation. Their real question: how does more information drive quality rather than just volume?

A chief technology officer at a data firm placed themselves at stage seven of an eight-stage AI maturity framework they adapted from Steve Yegge's writing: running ten or more parallel AI agent instances simultaneously.

Read the email ↑

See the extras for this week ↓

Apple chose Google over itself

Apple partnered with Google to power its AI features, paying a reported billion dollars a year for Gemini. The world's most valuable technology company looked at its own AI and decided someone else's was better. The strategic question isn't whether to build AI capability. It's which partner to choose. (By the way, the answer I'd recommend is Claude!)

Source →

One blog post. One hour. Billions gone.

Anthropic published a blog post introducing Claude Code Security on Friday. Within an hour, cybersecurity stocks cratered: CrowdStrike fell 8%, Cloudflare 8%, Okta over 9%. The tool itself is a modest research preview. But a single blog post from an AI company erased billions in market value from established incumbents. The same happened to legal tech stocks when it announced legal plugins a couple of weeks ago. That's a new kind of leverage.

Source →

The marginal cost of arguing is going to zero

UK employment lawyers are seeing workplace grievances that once fit in a single email ballooning into 30-page documents, complete with made-up legal precedents and citations to laws from the wrong country. Creation cost: near zero. Response cost: unchanged. New UK employment cases rose 33% in three months.

Source →

Your prompt is the ceiling

Anthropic's latest Economic Index analysed over a million Claude conversations and found a near-perfect correlation (r > 0.92) between prompt sophistication and response quality. Give a vague prompt, get a vague response. Give one rich in nuance and structured constraints, and the model meets you there. The bottleneck isn't the model. It's the human.

Source →

Watch what McKinsey does with its own workforce, not what it advises clients to do

McKinsey calls it "25 squared." The plan: grow client-facing roles by 25% while cutting back-office roles by 25%, using AI to rebalance a $20 billion firm. This isn't productivity improvement. This is structural transformation. Ask yourself if there are parts of your business that are ripe for radical change. McKinsey already has.

Source →
Read the email ↑