Emails, bits that didn't fit, and letters from readers

Each week: the email itself, the interesting things that didn't quite fit, and what readers said.


See the extras for this week ↓

Only a third of the time AI saves actually reaches the team

How time savings from AI are used — Gartner

Gartner data shows that of 5.4 hours saved per worker through AI tools, only 1.7 hours (31%) translate into improved team outcomes. The largest single block of recovered time, 1.4 hours, goes into additional work that doesn't improve outcomes. Nearly an hour is spent redoing work the AI got wrong. Two-thirds of the productivity gain leaks away before anyone benefits. If your organisation is deploying AI tools without redesigning how teams work, you're capturing barely a third of the value.
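The arithmetic behind that claim is worth making explicit. A minimal sketch using the figures quoted above; the category labels are my paraphrases, not Gartner's taxonomy:

```python
# Breakdown of the quoted Gartner figures, in hours saved per worker.
# Labels are paraphrases of the article, not Gartner's own categories,
# and the three slices below don't sum to 5.4 — the remainder is
# unaccounted for in the figures quoted here.
total_saved = 5.4
improved_outcomes = 1.7   # the only slice that reaches the team
extra_busywork = 1.4      # additional work with no better outcomes
redoing_ai_errors = 1.0   # "nearly an hour" redoing wrong AI output

captured = improved_outcomes / total_saved
print(f"captured by the team: {captured:.0%}")               # 31%
print(f"leaked before anyone benefits: {1 - captured:.0%}")  # 69%
```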

Gartner →

Coding is not software engineering. The confusion is expensive.

Jeremy Howard, a deep learning pioneer who uses AI coding tools daily, draws a distinction most executives miss. Coding (translating a specification into syntax) is a style-transfer problem that language models handle well. Software engineering (designing abstractions, decomposing problems, building systems that hold together over time) is a fundamentally different skill that models cannot do. Howard cites Fred Brooks's essay from decades ago, which made the same observation about fourth-generation languages: removing the typing bottleneck does not remove the engineering bottleneck. Companies restructuring around the assumption that AI can do software engineering are conflating the two. His sharpest framing: what matters for any person or team isn't their current output (the intercept) but their rate of improvement (the slope). A little bit of slope makes up for a lot of intercept. Organisations pushing AI to maximise today's output may be destroying the growth rate of the people who'll need to maintain the systems tomorrow.

fast.ai →

The shift from co-intelligence to managing AIs

Ethan Mollick, the Wharton professor whose work has appeared in this newsletter before, argues we've moved from co-intelligence (prompting AI back and forth) to managing AIs (giving agents hours of work and getting results in minutes). His most striking example: a company called StrongDM has two radical rules. "Code must not be written by humans" and "Code must not be reviewed by humans." Each engineer spends roughly $1,000 a day on AI tokens. Coding agents build from human-written roadmaps, testing agents simulate customers, and humans review the finished product but never see the code. Whether or not that model generalises, the direction is clear. The job is shifting from doing the work to directing the things that do it.

One Useful Thing →

AI agents are hiring humans

A platform called RentAHuman has accumulated over 600,000 sign-ups for a marketplace where AI agents autonomously hire human beings to perform tasks machines cannot: delivering physical goods, counting objects in a city, conducting on-the-ground research. The agents browse, post jobs, evaluate candidates and release payment from escrow upon photographic proof of completion. No human intervention on the purchasing side. The gig economy inverted: people as the on-demand labour layer beneath AI clients. Whether that distinction matters to the people taking the jobs is left as an exercise for the reader.

Wired →

The junior hiring cliff, updated

Edition 3 cited a 14% drop in hiring for workers aged 22 to 25 in AI-exposed occupations. The number has worsened. Stanford Digital Economy Lab data, charted by Politico, now shows a 15.7% decline from 2021 to late 2025. The shape of the curve matters as much as the number: employment held roughly flat through 2023, then fell off a cliff in 2024 and kept falling. This isn't a gradual adjustment. It's a structural break that coincides precisely with the period when agentic AI tools became capable enough to substitute for junior analytical work. Companies aren't announcing junior layoffs. They're quietly not posting the roles. The people most affected will never know the job existed.

Young workers see a decrease of nearly 16 percent in employment due to AI — Stanford Digital Economy Lab / Politico →

China may be skipping the chatbot phase entirely

Just as China leapfrogged credit cards and went straight to mobile payments, it may be bypassing the "AI as chatbot" paradigm altogether. An open-source AI tool called OpenClaw hit 250,000 GitHub stars in sixty days. Baidu has integrated it into its search app, which has 700 million users. Entrepreneurs are charging 500 yuan (roughly $70) to install it on people's home computers. A startup made $28,000 in ten days selling a one-click installer. Computer repair shops are dispatching what they call "installation personnel," described as operating like plumbers. When a piece of software generates enough demand to support a physical installation economy, the adoption curve is real and deep. Western assumptions about how AI gets adopted may not apply everywhere.

MIT Technology Review →

Half of AI code that passes its own tests gets rejected by humans

METR, one of the more rigorous AI evaluation organisations, found that roughly half of code solutions generated by Claude models (solutions that passed automated grading) were subsequently rejected by the actual human project maintainers. Journalist Derek Thompson, reflecting on his own experience using AI coding tools, offered the most useful reframe: AI's real skill is generating plausible candidate solutions that require constant human checking, debugging and rejection. That checking process is effectively its own distinct and skilled job. He compared it to being a casting director working with a promising but unreliable younger actor. Getting the collaboration dynamics right will take a long time to diffuse through the economy, which is grounds for scepticism about predictions of imminent mass displacement.

METR →
See what readers said ↓

What resonated

  • The apprenticeship pipeline paradox: the argument that cutting juniors today erodes the senior talent pool of tomorrow was the single thread readers returned to most. Multiple replies engaged with it independently, suggesting it articulates a worry many leaders already carry but haven't named.
  • Extraction versus expansion as a choice: readers responded to the framing as a decision, not a trend. Several said it sharpened conversations they were already having about whether AI headcount savings should be reinvested or banked.
  • Skill reclassification in professional services: the observation that AI has retroactively revealed which tasks were genuinely cognitive and which were merely time-consuming landed hard, particularly among people in consulting and law.
  • The SaaS market pricing shift: the 30% software stock decline and the "build it yourself" examples prompted readers to reconsider their own vendor relationships.

Points readers raised

A senior technology leader at a global professional services firm challenged the SaaS disruption premise. Development costs, they pointed out, are only about 20% of a typical software company's revenue. Sales, marketing and customer success absorb 60%. AI can rewrite code, but it cannot replicate distribution and switching costs. They drew a parallel to offshoring: "Huge appetite. Need for re-invention." The disruption is real but the mechanism is more nuanced than build-versus-buy.

A partner at another professional services firm identified the tension between original thinking and process execution. Developing the foundational insight that makes a project valuable is still human work, they argued, but once that insight exists, AI can scale the execution. Their question: does this shift advantage or disadvantage people who trade on original judgment? "I suspect the answer is that it depends on whether I tool myself up appropriately."

A founder building an AI-native company connected the apprenticeship argument to institutional culture. They started their career as a graduate trainee at a large bank and worry that the next generation won't get the benefit of those early years inside large institutions. They posed a sharp question: will we see geographic or cultural differences in AI adoption, where firms with cultures that already embrace apprenticeship end up moving faster?

A professor setting up a Future of Work institute identified a specific parallel to the extraction-versus-expansion frame. Job applications have lost their friction: candidates send almost infinite applications using AI, and firms screen with AI. Both sides have lost out. The old friction forced applicants to think before choosing. AI removed it entirely rather than redirecting it somewhere useful.

A chief people officer at a global firm read the edition twice: "first over the weekend and then again this morning." The apprenticeship pipeline and organisational change sections spoke directly to the tensions they navigate daily. Sometimes the most valuable signal is that a piece is worth re-reading.

A newsletter author raised a fair editorial challenge: some sections sound more like AI than like me. "If it's your human insight, it jars a bit to read those bits in a voice that's obviously an AI." The same week, an executive I've worked with for years wrote to say that they loved how clearly they could hear my voice and personality in the writing. So one reader thinks there's too much AI and another thinks it sounds exactly like me. Either I've trained the model well or I've always written like a robot. I choose not to investigate further.

Read the email ↑

See the extras for this week ↓

The hiring cliff for juniors, in one chart

The essay mentions a 14% drop in hiring for workers aged 22 to 25 in AI-exposed occupations. This chart shows the full time series using a difference-in-differences approach. Junior hires in exposed occupations fell off a cliff after ChatGPT's release in late 2022, while exits held steady. The gap keeps widening. Companies aren't firing juniors. They're just not bringing new ones in.
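For readers unfamiliar with the method: difference-in-differences compares the change in an exposed group against the change in a comparison group, so shared economy-wide trends cancel out. A minimal sketch with invented numbers, not the Stanford data:

```python
# Difference-in-differences on invented employment-index numbers
# (2021 = 100). These are NOT the Stanford Digital Economy Lab data,
# just an illustration of the mechanics behind a chart like this one.
exposed_pre, exposed_post = 100.0, 84.3    # juniors, AI-exposed roles
control_pre, control_post = 100.0, 101.0   # juniors, less-exposed roles

# Change in the exposed group minus change in the control group:
# shocks common to both (recession, sector trends) net out, leaving
# the differential effect attributed to AI exposure.
did = (exposed_post - exposed_pre) - (control_post - control_pre)
print(f"estimated effect: {did:+.1f} index points")  # -16.7
```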

Hires vs exits for junior workers in AI-exposed occupations

Source →

US tech employment growth has gone negative

Year-on-year tech employment growth turned negative in 2024 and hasn't recovered. The tech sector is shrinking its workforce for the first time since the post-2008 recovery. Combined with the junior hiring data above, a pattern emerges: the contraction is real, it's happening now, and it's concentrated at the entry level.

US tech employment year-on-year change

The work budget is orders of magnitude larger than the software budget

Julien Bek at Sequoia Capital argues that the next category-defining AI company won’t sell tools to professionals. It will sell completed work directly to buyers. His distinction between “copilots” (AI as a tool for professionals) and “autopilots” (AI delivering the outcome) reframes the entire market. For every dollar spent on software, six are spent on services. The smartest entry point? Replace outsourced work first. The budget already exists, the buyer already accepts external delivery, and there’s no internal team whose jobs are visibly threatened. Once embedded, expand inward.

Source →

You would not believe how many shortcuts everyone else is taking

Ezra Klein wrote a commencement address called “Just Do the Work” about discovering, as a young journalist, that almost nobody was actually reading Congressional Budget Office reports, documents that are neither complex nor long. By reading what his peers skipped, he got ahead. Not exceptional talent. Just diligence. Economist Paul Novosad adds the contemporary twist: this is “more true than ever now, when more people are shirking and AI lets you do 10x if you try.” The gap between the diligent and the lazy is widening, not narrowing.

Source →
See what readers said ↓

What resonated

  • "The hundred small things" as a reframe: the distinction between chasing dramatic AI wins and compounding small daily elevations. Several readers said it gave them language for something they had been struggling to articulate to leadership.
  • The "extra hour" problem: the observation that AI is deployed like an extra hour rather than an extra person struck a chord with people managing teams. Structures absorb the gain before anyone notices it.
  • The junior roles question: the Block layoffs and YC data generated the most emotional responses. Readers connected it to their own organisations' headcount conversations.
  • The senryu competition as metaphor: the retreat-versus-redesign framing resonated, though notably nobody offered examples of successful redesign. The absence may be the point.

Points readers raised

A senior leader at a global professional services firm identified a structural gap in the argument. The hundred small things need a container, not just encouragement. He proposed daily one-hour structured learning blocks rather than hoping people will explore on their own. His deeper point: senior leaders who do not use AI personally have no on-the-ground proof of benefit, so their teams see no credibility signal from above.

An events and entertainment industry exec pushed the junior roles argument to its darkest conclusion. The UK already underinvests in training, preferring overseas hiring. If AI accelerates that trend, there is no junior pipeline to grow seniors from. "That is extremely bad for companies longer term in terms of skills shortages and salary premiums for skilled workers, and even worse for UK plc." A topic that I picked up in today's edition.

A manufacturing executive used the framing to shape two specific conversations: accelerating superuser growth and celebrating a colleague's "let's map your process" approach to adoption stickiness. He is in the middle of major organisational expansion and sees the hundred small things as directly applicable to that work.

A technology leader at a research firm noted a quiet loss that doesn't show up in any headcount data. His analysts used to walk to a colleague's desk when they got stuck on a coding problem. Now they ask AI. The problem gets solved faster. But the conversation that would have happened, the one where a junior person absorbs how a senior person thinks about problems, doesn't happen at all. AI is removing the apprenticeship mechanisms even where the apprentices still exist.

My one proper unsubscribe turned out to be the most advanced person on the list. His world is so far ahead of ours that the newsletter isn't relevant to him. He works on cutting-edge AI implementation (sorry everyone, we're all just fast followers!). His AI communities are discussing running engineers 24/7 with 12-hour agent check-ins, deploying 30+ vibecoded projects from non-engineering teams, making OpenClaw work across 500+ person organisations, and polyphasic sleep schedules to optimise autonomous agent management. Anyone else even close to these conversations? For sure his world is a useful signal of where ours is heading. I'll stay close for you all!

Polyphasic sleep patterns for AI agent management

Links readers shared

  • Creativity Can Embrace AI — Nadim's book on how creative industries can work with rather than against language models. Named Amazon book of the year by The New Publishing Standard.
Read the email ↑

See the extras for this week ↓

The marginal cost of arguing is going to zero

UK employment lawyers report that workplace grievances which once fit in a single email are ballooning into 30-page documents, complete with fabricated legal precedents and citations to laws from the wrong country. (Personnel Today) Creation cost: near zero. Response cost: unchanged. Ministry of Justice figures show new employment tribunal receipts rose 33% year-on-year in the quarter to September. (GOV.UK)

Your prompt is the ceiling

Anthropic's latest Economic Index analysed over a million Claude conversations and found a near-perfect correlation (r > 0.92) between the sophistication of human prompts and the sophistication of AI responses. The more nuanced and structured the input, the more the model rises to meet it. The bottleneck isn't the model. It's the human. Which is, in its own way, reassuring. (Anthropic Research)

One blog post. One hour. Billions gone. Again.

Anthropic published a blog post introducing Claude Code Security on a Friday afternoon. Within an hour, cybersecurity stocks cratered: CrowdStrike fell 8%, Cloudflare 8%, Okta over 9%. The tool itself is a modest research preview. But a single blog post from an AI company erased billions in market value from established incumbents. (Barron's) The same dynamic hit legal tech stocks when Anthropic announced legal plugins for Claude Cowork a couple of weeks earlier. (Sherwood News) That's a new kind of leverage.

The trust signals your organisation depends on are dissolving

Here's a problem that connects directly to those poets in Sakaiminato. A thoughtful email from a director now carries the same weight as an AI-generated memo, because the reader can't tell the difference. The cues that used to signal competence (a well-crafted message, a polished document, a detailed analysis produced under time pressure) are now producible by anyone in minutes. This isn't a quality problem. It's a trust architecture problem. We need to distinguish between "produced this" and "shaped this." The senryu competition couldn't tell the difference. That's why it died.

See what readers said ↓

What resonated

  • The capability-adoption gap: the tension between what AI can demonstrably do and what organisations will permit. This was the thread readers pulled on most. Thirty of 121 replies engaged with it directly, many in operational terms.
  • "Back your misfits": the idea of finding and enabling the ten percent who don't need pushing. Several readers said they were already doing this and the framing validated their approach.
  • Organisational inertia as structural, not cultural: readers recognised the obstacle isn't attitude or skill but governance, process, and risk frameworks. One partner at a strategy consultancy pushed further: speeding up the same process is "premature optimisation."
  • The personal-to-organisational transition: the feeling of being ahead personally but constrained institutionally was widely shared. People described experimenting on weekends, then walking into Monday meetings where nothing has changed.
  • "Package around problems, not platforms": a phrase several readers said entered their working vocabulary within days.

Points readers raised

A director at a media company connected the capability-adoption tension to their own industry. They are formalising a champions network, exactly the "back your misfits" approach, and said the framing helped crystallise what they were already building.

An exec at a broadcaster shared the most striking story: they have been deliberately ignoring their organisation's AI policies to enable their team's experimentation. The newsletter validated an act of institutional defiance they were already committed to.

An exec at a professional services firm offered a substantive counterpoint. Accelerating existing processes without redesigning them, they argued, is premature optimisation. Their real question: how does more information drive quality rather than just volume?

A chief technology officer at a data firm placed themselves at stage seven of an eight-stage AI maturity framework they adapted from Steve Yegge's writing: running ten or more parallel AI agent instances simultaneously.

Read the email ↑

See the extras for this week ↓

Apple chose Google over itself

Apple partnered with Google to power its AI features, paying a reported billion dollars a year for Gemini. The world's most valuable technology company looked at its own AI and decided someone else's was better. The strategic question isn't whether to build AI capability. It's which partner to choose. (By the way, the answer I'd recommend is Claude!)

Source →

One blog post. One hour. Billions gone.

Anthropic published a blog post introducing Claude Code Security on Friday. Within an hour, cybersecurity stocks cratered: CrowdStrike fell 8%, Cloudflare 8%, Okta over 9%. The tool itself is a modest research preview. But a single blog post from an AI company erased billions in market value from established incumbents. The same happened to legal tech stocks when it announced legal plugins a couple of weeks ago. That's a new kind of leverage.

Source →

The marginal cost of arguing is going to zero

UK employment lawyers are seeing workplace grievances that once fit in a single email ballooning into 30-page documents, complete with made-up legal precedents and citations to laws from the wrong country. Creation cost: near zero. Response cost: unchanged. New UK employment cases rose 33% in three months.

Source →

Your prompt is the ceiling

Anthropic's latest Economic Index analysed over a million Claude conversations and found a near-perfect correlation (r > 0.92) between prompt sophistication and response quality. Give a vague prompt, get a vague response. Give one rich in nuance and structured constraints, and the model meets you there. The bottleneck isn't the model. It's the human.

Source →

Watch what McKinsey does with its own workforce, not what it advises clients to do

McKinsey calls it "25 squared." The plan: grow client-facing roles by 25% while cutting back-office roles by 25%, using AI to rebalance a $20 billion firm. This isn't productivity improvement. This is structural transformation. Ask yourself if there are parts of your business that are ripe for radical change. McKinsey already has.

Source →
Read the email ↑